This researcher says AI is neither artificial nor intelligent

Tech companies like to portray artificial intelligence as a precise and powerful tool for good. Kate Crawford says that mythology is flawed. In her book Atlas of AI, she visits a lithium mine, an Amazon warehouse, and a 19th-century skull archive to illustrate the natural resources, human sweat, and questionable science underpinning some versions of the technology. Crawford, a professor at the University of Southern California and a researcher at Microsoft, says many applications and side effects of AI are in urgent need of regulation.

Crawford recently discussed these issues with WIRED senior editor Tom Simonite. An edited transcript follows.

WIRED: Few people understand all the technical details of artificial intelligence. You argue that even some experts working on the technology misunderstand AI at a more fundamental level.

KATE CRAWFORD: It is presented as an objective, neutral way of making decisions, something we can plug into everything from teaching kids to deciding who gets bail. But the name is deceptive: AI is neither artificial nor intelligent.

AI is made from vast amounts of natural resources, fuel, and human labor. And it is not intelligent in any human sense. It cannot discern things without extensive human training, and it has a completely different statistical logic for how meaning is made. Since the very beginning of AI, back in 1956, we have made this terrible error, a sort of original sin of the field: believing that minds are like computers and vice versa. We assume these things are an analog of human intelligence, and nothing could be further from the truth.

You take on that mythology by showing how AI is actually constructed. Like many industrial processes, it turns out to be messy. Some machine learning systems are built with hastily collected data, which can lead to problems such as face recognition services that make more errors for certain groups of people.

We need to look at artificial intelligence nose to tail, across its whole production process. The seeds of the data problem were sown in the 1980s, when it became common to use data sets without close attention to what was inside them, or concern for privacy. Data was treated as mere raw material, reused across thousands of projects.

That evolved into an ideology of mass data extraction, but data is not an inert substance; it always carries a context and a politics. Comments from Reddit will be different from text in children's books. Images from mugshot databases have a different history than images from the Oscars, but they are all used in the same way. This causes a host of problems downstream. In 2021, there is still no industry-wide standard for recording what kinds of data went into a training set, how it was acquired, or the ethical issues it raises.
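Crawford's point about undocumented training data can be made concrete with the idea of attaching a short provenance record, sometimes called a datasheet, to every data set. The sketch below is purely illustrative and not from the book or the interview; the DatasetRecord class and its fields are hypothetical, showing one way such documentation could travel alongside the data itself.

```python
from dataclasses import dataclass, field, asdict
import json


@dataclass
class DatasetRecord:
    """Hypothetical provenance record attached to a training data set."""
    name: str
    source: str                     # where the raw data came from
    collection_method: str          # how it was acquired
    consent_obtained: bool          # did the people in the data agree to this use?
    known_contexts: list = field(default_factory=list)  # e.g. forums, mugshots, red-carpet photos
    ethical_notes: str = ""         # known risks or limitations

    def to_json(self) -> str:
        # Serialize the record so it can be shipped with the data set.
        return json.dumps(asdict(self), indent=2)


# Two corpora that look interchangeable to a model but carry very different
# histories, echoing the Reddit vs. children's-books contrast above.
reddit_comments = DatasetRecord(
    name="reddit-comments-sample",
    source="public Reddit comment dump",
    collection_method="scraped",
    consent_obtained=False,
    known_contexts=["adult discussion forums"],
    ethical_notes="Authors never consented to machine learning use; contains harassment.",
)

print(reddit_comments.to_json())
```

The design choice here is simply that provenance is recorded as structured fields rather than prose, so the missing documentation Crawford describes could at least be checked for mechanically before a data set is reused.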

You trace the roots of emotion recognition software to dubious science funded by the Department of Defense in the 1960s. A recent review of more than 1,000 research papers found no evidence that a person's emotions can be reliably inferred from their face.

Emotion detection represents the fantasy that technology will finally answer questions we have about human nature that are not technical questions at all. This idea, which is highly contested in the field of psychology, made the jump into machine learning because it is a simple theory that fits the tools. Recording people's faces and correlating those images with simple, predefined emotional categories works with machine learning, provided you drop culture and context and ignore the fact that the way you look and feel can change many times a day.

This also becomes a feedback loop: because we have emotion detection tools, people say they want to apply them in schools, in courtrooms, and to catch potential shoplifters. Recently, companies have been using the pandemic as a pretext to use emotion recognition on kids in schools. This takes us back to the phrenological past, the belief that you can detect character and personality from the face and the shape of the skull.

Courtesy of Cath Muscat

You contributed to the recent growth of research into how AI can have harmful effects. But that field is entangled with people and money from the tech industry, which seeks to profit from AI. Google recently forced out two respected AI researchers, Timnit Gebru and Margaret Mitchell. Does industry involvement limit research that questions AI?
