AI Wrote Better Phishing Emails Than Humans in a Recent Test

Natural language processing continues to find its way into unexpected corners. This time, it's phishing emails. In a small study, researchers found that they could use the deep learning language model GPT-3, along with other AI-as-a-service platforms, to significantly lower the barrier to entry for crafting spearphishing campaigns at a massive scale.

Researchers have long debated whether it would be worth the effort for scammers to train machine learning algorithms that could generate compelling phishing messages. Mass phishing messages are simple and formulaic, after all, and are already highly effective. Highly targeted and tailored "spearphishing" messages are more labor-intensive to compose, though. That's where NLP may come in surprisingly handy.

At the Black Hat and Defcon security conferences in Las Vegas this week, a team from Singapore's Government Technology Agency presented a recent experiment in which they sent targeted phishing emails, some crafted themselves and others generated by an AI-as-a-service platform, to 200 of their colleagues. The messages contained links that were not actually malicious, but simply reported click-through rates back to the researchers. They were surprised to find that more people clicked the links in the AI-generated messages than in the human-written ones, and by a significant margin.
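
At its core, the experiment is a comparison of click-through rates between two groups of recipients. As a rough illustration of how such a margin could be checked for statistical significance, here is a minimal sketch of a two-proportion z-test in Python; the click counts below are hypothetical placeholders, since the article does not report the exact figures.

```python
# Sketch: two-proportion z-test on click-through counts.
# The counts are hypothetical; the study's exact numbers are not public here.
import math

def two_proportion_ztest(clicks_a: int, n_a: int, clicks_b: int, n_b: int):
    """Return (z statistic, two-sided p-value) for H0: rate_a == rate_b."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))            # two-sided normal tail
    return z, p_value

# Hypothetical split of the 200 recipients into two arms of 100 each.
z, p = two_proportion_ztest(clicks_a=46, n_a=100, clicks_b=28, n_b=100)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p-value would support a real difference
```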

"Researchers have pointed out that AI requires some level of expertise. It takes millions of dollars to train a really good model," says Eugene Lim, a cybersecurity specialist at the Government Technology Agency. "But once you put it on AI-as-a-service it costs a couple of cents and it's really easy to use, just text in, text out. You don't even have to run code, you just give it a prompt and it will give you output. So that lowers the barrier of entry to a much bigger audience and increases the potential targets for spearphishing. Suddenly every single email on a mass scale can be personalized for each recipient."
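
Lim's "text in, text out" point maps to a single API call. A minimal sketch of what that looks like, assuming the pre-1.0 openai Python client of the GPT-3 era; the model name and prompt here are illustrative only, not taken from the study:

```python
# Sketch of "text in, text out" against the GPT-3 Completions API.
# Assumes the pre-1.0 openai client; model name and prompt are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # credential read from the environment

response = openai.Completion.create(
    model="text-davinci-002",  # a GPT-3-family completion model
    prompt="Explain the difference between phishing and spearphishing in two sentences.",
    max_tokens=100,
    temperature=0.7,
)

print(response.choices[0].text.strip())  # the generated continuation
```

Nothing about the call itself is specialized; the safeguards, such as the use-case review OpenAI describes below, sit outside this interface.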

The researchers used OpenAI's GPT-3 platform in conjunction with other AI-as-a-service products focused on personality analysis to generate phishing emails tailored to their colleagues' backgrounds and traits. Machine learning focused on personality analysis aims to predict a person's proclivities and mentality based on behavioral inputs. By running the outputs through multiple services, the researchers were able to develop a pipeline that groomed and refined the emails before sending them out. They say the results sounded "weirdly human" and that the platforms automatically supplied surprising specifics, like mentioning a Singaporean law when instructed to generate content for people living in Singapore.

While they were impressed by the quality of the synthetic messages and how many clicks they garnered from colleagues relative to the human-composed ones, the researchers note that the experiment was just a first step. The sample size was relatively small, and the target pool was fairly homogenous in terms of employment and geographic region. Plus, both the human-crafted messages and those produced by the AI-as-a-service pipeline were created by office insiders rather than outside attackers trying to strike the right tone from afar.

"There are lots of variables to account for," says Tan Kee Hock, a cybersecurity specialist at the Government Technology Agency.

Still, the findings spurred the researchers to think more deeply about how AI-as-a-service may play a role in phishing and spearphishing campaigns going forward. OpenAI itself, for example, has long feared the potential for misuse of its own service or similar ones. The researchers note that it and other scrupulous AI-as-a-service providers have clear codes of conduct, attempt to audit their platforms for potentially malicious activity, or even try to verify user identities to some degree.

"Misuse of language models is an industry-wide issue that we take very seriously as part of our commitment to the safe and responsible deployment of AI," OpenAI told WIRED in a statement. "We grant access to GPT-3 through our API, and we review every production use of GPT-3 before it goes live. Our monitoring systems are designed to surface potential evidence of misuse at the earliest possible stage, and we are continually working to improve our safety tools."

