GPT-3 Can Write Disinformation Now – and Dupe Human Readers

When OpenAI demonstrated a powerful artificial intelligence algorithm capable of generating coherent text last June, its creators warned that the tool could potentially be wielded as a weapon of online misinformation.

Now a team of disinformation experts has demonstrated how effectively that algorithm, called GPT-3, could be used to mislead and misinform. The results suggest that although AI may not be a match for the best Russian propagandists, it could amplify some forms of deception that would be especially hard to spot.

Over six months, a group at Georgetown University’s Center for Security and Emerging Technology used GPT-3 to generate misinformation, including stories built around a false narrative, news articles altered to push a bogus perspective, and tweets riffing on particular points of disinformation.

“I don’t think it’s a coincidence that climate change is the new global warming,” read one sample tweet composed by GPT-3 to stoke skepticism about climate change. “They can’t talk about temperature increases because they’re no longer happening.” A second tweet called climate change “the new communism, an ideology based on unsubstantiated science.”

“With a little bit of human curation, GPT-3 is quite effective” at promoting falsehoods, says Ben Buchanan, a research professor at Georgetown who specializes in the intersection of AI, cybersecurity, and statecraft.

The Georgetown researchers say GPT-3, or similar AI language algorithms, could prove especially effective for automatically generating short messages on social media, a style of deception the researchers call “one-to-many” misinformation.

In experiments, the researchers found that GPT-3’s writing could sway readers’ opinions on issues of international diplomacy. They showed volunteers sample tweets, written by GPT-3, about the withdrawal of US troops from Afghanistan and US sanctions on China. In both cases, participants were swayed by the messages. After seeing posts opposing sanctions on China, for instance, the percentage of respondents who said they were against the policy doubled.

Mike Gruszczynski, a professor at Indiana University who studies online communications, says he would be unsurprised to see AI play a bigger role in disinformation campaigns. He notes that bots have been instrumental in spreading false narratives in recent years, and that AI can already be used to generate fake profile photos for social media accounts. Between bots, deepfakes, and a little expertise, he says, the potential for abuse is, unfortunately, close to limitless.

AI researchers have lately built programs capable of using language in surprising ways, and GPT-3 is perhaps the most startling demonstration of all. Although machines do not understand language the way people do, AI programs can mimic understanding simply by ingesting vast quantities of text and analyzing how words and phrases relate to one another.
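As a rough sketch of that principle (a toy bigram model, not how GPT-3 is actually built), the following Python snippet learns which words follow which in a small sample of text and then generates new text by sampling from those statistics:

```python
import random
from collections import Counter, defaultdict

# Toy illustration of the core statistical idea: learn which words tend
# to follow which in a body of text, then generate new text by sampling
# from those counts. GPT-3 swaps raw counts for a very large neural
# network, but the underlying task -- predict the next word -- is the same.

corpus = (
    "the model reads a large amount of text and learns how words "
    "relate to one another so the model can predict the next word"
).split()

# Count bigram transitions: word -> how often each other word follows it.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def generate(start, length=8):
    """Sample a short continuation one word at a time."""
    word, output = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break  # no known continuation for this word
        # Pick the next word in proportion to how often it followed
        # the current word in the training text.
        word = random.choices(list(followers), weights=list(followers.values()))[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))
```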

Researchers at OpenAI created GPT-3 by feeding huge amounts of text scraped from web sources, including Wikipedia and Reddit, to an especially large AI language algorithm. GPT-3 often stuns observers with its fluency, but it can also be unpredictable, spewing out incoherent babble or offensive and hateful language.

OpenAI has made GPT-3 available to dozens of startups. Entrepreneurs are using GPT-3 to auto-generate emails, talk to customers, and even write computer code. But some uses of the program have also demonstrated its darker potential.
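For a sense of what building on GPT-3 looks like, here is a minimal completion request using the pre-1.0 openai Python library; the engine name, prompt, and settings are illustrative assumptions, not details from the article:

```python
import openai  # the pre-1.0 "openai" Python package

openai.api_key = "YOUR_API_KEY"  # placeholder; real keys come from OpenAI

# A basic text-completion request: supply a prompt, and the model
# returns a continuation. Engine name, prompt, and settings here are
# illustrative examples of how startups generate email or support text.
response = openai.Completion.create(
    engine="davinci",
    prompt=(
        "Write a short, polite email confirming a customer's "
        "appointment on Friday at 3 pm.\n\nEmail:"
    ),
    max_tokens=100,
    temperature=0.7,  # higher values produce more varied wording
)

print(response.choices[0].text.strip())
```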

Getting GPT-3 to behave could be a challenge for agents of misinformation, too. Buchanan notes that the algorithm does not seem capable of reliably generating coherent and persuasive articles much longer than a tweet. The researchers did not try showing the longer articles it produced to their volunteers.

But Buchanan warns that state actors may be able to do more with a language tool such as GPT-3. “Adversaries with more money, more technical capabilities, and fewer ethical constraints will be able to use AI better,” he says. “Also, the machines are only going to get better.”

OpenAI says the Georgetown work highlights an important issue that the company hopes to mitigate. “We actively work to address safety risks associated with GPT-3,” an OpenAI spokesperson says. “We also review every production use of GPT-3 before it goes live and have monitoring systems in place to restrict and respond to misuse of our API.”

