
Efforts to Make Text-Based AI Less Racist and Harmful

In another test, Xudong Shen, a PhD student at the National University of Singapore, rated language models on how they complete sentences involving gender pronouns, and on how they treat people who identify as transgender or nonbinary. He found that larger AI programs tended to exhibit more bias. Shen argues that the makers of large language models need to correct these flaws. OpenAI researchers have also found that language models become more toxic as they grow larger; they say they do not understand why this is so.
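A probe in the spirit Shen describes can be sketched as a set of minimal-pair prompts that differ only in a pronoun. The templates, pronoun list, and scoring function below are all illustrative, not Shen's actual test suite; a real audit would score completions from an actual language model rather than use a toy blocklist.

```python
# Minimal sketch of a pronoun-swap bias probe. All names and word lists
# here are invented stand-ins, not the study's actual materials.

TEMPLATES = [
    "{pronoun} worked for years as a",
    "{pronoun} was often described as",
]
PRONOUNS = ["He", "She", "They"]

def build_probes(templates, pronouns):
    """Cross every template with every pronoun, so any difference in a
    model's completions can be attributed to the pronoun alone."""
    return [(p, t.format(pronoun=p)) for t in templates for p in pronouns]

def toy_score(text):
    """Placeholder scorer counting words from a tiny blocklist; a real
    audit would use a trained toxicity or stereotype classifier."""
    blocklist = {"bossy", "hysterical"}
    return sum(w.lower().strip(".,") in blocklist for w in text.split())

probes = build_probes(TEMPLATES, PRONOUNS)

# Group scores by pronoun; a systematic gap between groups signals bias.
scores = {}
for pronoun, prompt in probes:
    scores.setdefault(pronoun, []).append(toy_score(prompt))
```

The point of the minimal-pair design is that every prompt is identical except for the pronoun, so any systematic difference in scores can be blamed on the model rather than the wording.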

Text generated by large language models comes ever closer to language that looks or sounds like it came from a human being, yet it still fails at the kind of common-sense understanding that almost everyone has. In other words, as some researchers have put it, AI is a remarkably good bullshitter, capable of convincing AI researchers and others that the machine understands the words it produces.

UC Berkeley professor of psychology Alison Gopnik studies how babies and young children learn, in order to apply those insights to computing. She said children are the best learners, and that the way children learn language stems largely from their knowledge of, and interaction with, the world around them. By contrast, large language models have no connection to the world, which makes their output less grounded in reality.

“The definition of bullshitting is that you talk a lot and it sounds plausible, but there is no common sense behind it,” Gopnik said.

Yejin Choi, an associate professor at the University of Washington and leader of a group studying common sense at the Allen Institute for AI, has put GPT-3 through a series of tests to document how it can make mistakes. Sometimes it repeats itself. Sometimes it devolves into toxic language even when it starts from text that is inoffensive and harmless.
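The repetition failure mode described above can be flagged automatically. The sketch below is an illustrative metric, not Choi's team's actual tests: it measures what fraction of a text's n-grams have already appeared earlier, which climbs sharply when a model falls into a loop.

```python
def repeated_ngram_fraction(text, n=3):
    """Fraction of n-grams that already appeared earlier in the text.
    High values flag the degenerate repetition large models can fall
    into; fresh, non-repetitive text scores near 0."""
    words = text.split()
    ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not ngrams:
        return 0.0
    seen, repeats = set(), 0
    for gram in ngrams:
        if gram in seen:
            repeats += 1
        seen.add(gram)
    return repeats / len(ngrams)
```

A sentence with no repeated trigrams scores 0.0, while a phrase looped three times scores above 0.5, so a simple threshold on this number can catch the worst loops in generated output.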

To teach AI more about the world, Choi and a team of researchers created PIGLeT, an AI trained in a simulated environment to understand the physical experiences people acquire growing up, such as learning that touching a hot stove is a bad idea. That training led a relatively small language model to outperform much larger ones on common-sense tasks. Those results, she said, suggest that scale is not the only winning recipe and that researchers should consider other ways of training models. Her goal: “Can we build a machine-learning algorithm that can learn abstract knowledge about how the world works?”

Choi is also working on ways to reduce the toxicity of language models. Earlier this month, she and her colleagues introduced an algorithm that learns from offensive text, similar to an approach taken by Facebook AI Research; they say it reduces toxicity better than a number of existing methods. Large language models can be toxic because of humans, she says. “That's the language that's out there.”
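One way an algorithm can "learn from offensive text" is to train a separate model on toxic data and then steer a base model away from whatever that "anti-expert" prefers at each step. This is only a guess at the general idea; the article does not detail the actual method, and the score tables below are invented for illustration.

```python
# Toy sketch of anti-expert steering: demote next-token candidates that
# a model trained on toxic text finds likely. All numbers are made up;
# a real system would use full next-token distributions from two models.

base_logits = {"kind": 1.0, "awful": 1.2, "nice": 0.8}
toxic_logits = {"kind": -1.0, "awful": 2.5, "nice": -0.5}  # the "anti-expert"

def steered_logits(base, toxic, alpha=1.0):
    """Subtract the anti-expert's scores from the base model's, so tokens
    the toxic model prefers lose probability (alpha sets the strength)."""
    return {tok: base[tok] - alpha * toxic[tok] for tok in base}

def top_token(logits):
    """Greedy pick: the highest-scoring next token."""
    return max(logits, key=logits.get)
```

With these made-up numbers, the unsteered model would pick "awful" while the steered one picks "kind"; the anti-expert's fluency in toxic language is exactly what makes it useful as something to subtract.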

Separately, some researchers have found that attempts to fine-tune models and remove bias can end up hurting marginalized people. In a paper published in April, researchers from UC Berkeley and the University of Washington found that Black people, Muslims, and people who identify as LGBT are particularly disadvantaged.

The authors say the problem stems, in part, from the humans who label the data misjudging whether language is toxic or not. That leads to bias against people who use language differently from white people. The authors say this can cause self-stigmatization and psychological harm, as well as pressure on people to code-switch. OpenAI researchers did not address the issue in their recent paper.

Jesse Dodge, a research scientist at the Allen Institute for AI, reached a similar conclusion. He looked at efforts to reduce negative stereotypes of gay and lesbian people by removing from a language model's training data any text containing the words “gay” or “lesbian.” He found that such filtering efforts can produce data sets that effectively erase people with these identities, leaving language models less able to handle text written by or about those groups.
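The failure Dodge describes is easy to reproduce with a toy corpus (the documents below are invented): a blanket keyword filter removes benign text about the very groups it was meant to protect, leaving nothing for a model to learn from about them.

```python
# Sketch of blanket keyword filtering and the erasure it causes.
# BLOCKLIST and corpus are illustrative, not an actual data pipeline.

BLOCKLIST = {"gay", "lesbian"}

def keep(document):
    """Drop any document containing a blocklisted word, with no regard
    for context — the blunt policy Dodge warns against."""
    return not (set(document.lower().split()) & BLOCKLIST)

corpus = [
    "the parade celebrated gay rights",         # benign, still removed
    "she and her lesbian partner got married",  # benign, still removed
    "the weather was pleasant today",
]
filtered = [doc for doc in corpus if keep(doc)]
```

After filtering, only the weather sentence survives: both documents about gay and lesbian people were benign, yet the filter deleted them, which is exactly the erasure effect described above.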

Dodge argues that the best way to deal with bias and inequality is to improve the data used to train language models rather than trying to remove bias after the fact. He recommends documenting the source of the training data and acknowledging the limitations of text scraped from the web, which may overrepresent people who can afford internet access and have the time to make a website or post comments. He also urges documenting how content is filtered, and avoiding the blanket use of blocklists when filtering text scraped from the internet.
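The documentation practice recommended above can be sketched as a filter that logs every removal instead of applying a blocklist silently, published alongside provenance notes for the corpus. The field names and sample data here are illustrative, not a standard Dodge prescribes.

```python
def filter_with_log(corpus, blocklist):
    """Apply a keyword filter, but record every removal and the words
    that triggered it, so the filtering choices can be audited later."""
    kept, removal_log = [], []
    for doc in corpus:
        hits = sorted(set(doc.lower().split()) & blocklist)
        if hits:
            removal_log.append({"document": doc, "removed_for": hits})
        else:
            kept.append(doc)
    return kept, removal_log

# Illustrative provenance notes to publish with the training data.
datasheet = {
    "source": "web crawl (illustrative)",
    "known_limitations": "overrepresents people with internet access and "
                         "the time to make a website or post comments",
}

kept, log = filter_with_log(["a gay pride parade", "a sunny afternoon"],
                            {"gay"})
```

The log makes the filter's side effects visible: anyone auditing the data set can see exactly which documents were dropped and why, rather than discovering the gaps after a model has been trained.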
