
RE:WIRED 2021: Timnit Gebru Says Artificial Intelligence Needs to Slow Down

Artificial intelligence researchers are facing a problem of accountability: How do you try to ensure that decisions are responsible when the decision maker is not a responsible person, but an algorithm? Right now, only a handful of people and organizations have the power and the resources to automate decision-making.

Organizations rely on AI to approve a line of credit or shape a defendant's sentence. But the foundations on which these intelligent systems are built are susceptible to bias. Bias from the data, from the programmer, and from a powerful company's bottom line can snowball into unintended consequences. This is the reality AI researcher Timnit Gebru warned against at a RE:WIRED talk on Tuesday.

“There are companies purporting [to assess] someone’s likelihood of committing a crime again,” said Gebru. “That was terrifying for me.”

Gebru was a star engineer at Google who specialized in AI ethics. She co-led a team tasked with guarding against algorithmic racism, sexism, and other bias. Gebru also cofounded the nonprofit Black in AI, which seeks to improve the inclusion, visibility, and health of Black people in her field.

Last year, Google forced her out. But she has not given up her fight to prevent unintended harm from machine learning algorithms.

On Tuesday, Gebru spoke with WIRED senior writer Tom Simonite about the incentives in AI research, the role of worker protections, and the vision for her planned independent institute for AI ethics and accountability. Her bottom line: AI needs to slow down.

“We haven’t had the time to think about how it should even be built, because we’re always just putting out fires,” she said.

As an Ethiopian refugee attending public school in the Boston suburbs, Gebru was quick to pick up on America’s racial dissonance. Lectures referred to racism in the past tense, but that did not square with what she saw, Gebru told Simonite earlier this year. She has found a similar misalignment again and again in her tech career.

Gebru began her career in hardware. But she changed course when she saw the barriers to diversity and began to suspect that most AI research had the potential to harm already marginalized groups.

“The confluence of that got me going in a different direction, which is to try to understand and try to limit the negative societal impacts of AI,” she said.

For two years, Gebru co-led Google’s Ethical AI team with computer scientist Margaret Mitchell. The team built tools to protect against AI mishaps for Google’s product teams. Over time, though, Gebru and Mitchell realized they were being left out of meetings and email threads.

In June 2020, the GPT-3 language model was released, demonstrating an ability to sometimes produce coherent prose. But Gebru’s team was worried about the excitement surrounding it.

“Let’s build larger and larger and larger language models,” said Gebru, recalling the popular sentiment. “We had to be like, ‘Let’s please just stop and calm down for a second so that we can think about the pros and cons, and maybe alternative ways of doing this.’”

Her team helped write a paper on the ethical implications of language models, titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”

Others at Google were not happy. Gebru was asked to retract the paper or remove the names of Google employees from it. She countered with a request for transparency: Who had demanded such a harsh measure, and why? Neither side budged. Gebru learned from one of her direct reports that she “had resigned.”

