
Inside the fight to reclaim AI from Big Tech’s control

Among the richest and most powerful companies in the world, Google, Facebook, Amazon, Microsoft, and Apple have made AI a core part of their business. Advances over the last decade, particularly in so-called deep-learning systems, have allowed them to monitor users’ behavior; recommend news, information, and products to them; and, most of all, target them with ads. Last year Google’s advertising business generated more than $140 billion in revenue. Facebook made $84 billion.

The companies have invested heavily in the technology that has brought them such wealth. Google’s parent company, Alphabet, acquired the London-based AI lab DeepMind for $600 million in 2014 and spends hundreds of millions of dollars a year to support its research. Microsoft signed a $1 billion deal with OpenAI in 2019 for rights to commercialize its technologies.

At the same time, the tech giants have become major investors in university-based AI research, heavily influencing its scientific priorities. Over the years, more and more ambitious scientists have gone to work for tech giants full time or taken on dual affiliations. From 2018 to 2019, 58% of the most cited papers at the top two AI conferences had at least one co-author affiliated with a tech giant, compared with only 11% a decade earlier, according to researchers in the Radical AI Network, a group that seeks to challenge power dynamics in AI.

The problem is that the corporate agenda for AI has focused on techniques with commercial potential, largely neglecting research that could help address challenges such as economic inequality and climate change. In fact, it has made those problems worse. The drive to automate tasks has cost people jobs and given rise to tedious labor such as data cleaning and content moderation. The push to build ever larger models has caused AI’s energy consumption to explode. Deep learning has also created a culture in which our data is constantly scraped, often without consent, to train technologies such as facial recognition. And recommendation algorithms have deepened political polarization, while large language models have failed to clean up misinformation.

This is what Gebru and a growing movement of like-minded researchers want to change. Over the past five years, they have sought to shift the field’s priorities away from simply enriching tech companies, in part by expanding who gets to take part in building the technology. Their goal is not only to mitigate the harms caused by existing systems but to create a new kind of AI that is more equitable and democratic.

“Hello from Timnit”

In December 2015, Gebru sat down to write an open letter. Shortly before completing her PhD at Stanford, she had attended the Neural Information Processing Systems conference, the largest AI research gathering. Of the more than 3,700 researchers there, Gebru counted only five who were Black.

Once a small gathering of academics, NeurIPS (as it’s now known) was quickly becoming the biggest annual AI bonanza. The world’s wealthiest companies came to show off demos, throw extravagant parties, and write hefty checks for the rarest of Silicon Valley commodities: skilled AI researchers.

That same year Elon Musk arrived to announce his nonprofit venture OpenAI. He, Y Combinator’s then president Sam Altman, and PayPal cofounder Peter Thiel had put up $1 billion to solve what they believed to be an existential problem: the prospect that a superintelligence could one day take over the world. Their solution: build an even better superintelligence. Of the 14 advisors or technical team members he anointed, 11 were white men.


While Musk was lionized, Gebru faced humiliation and harassment. At a conference party, a group of drunk men in Google Research T-shirts surrounded her and subjected her to unwanted hugs, a kiss on the cheek, and a photo.

Gebru wrote up what she had seen: the spectacle, the cult-like worship of AI celebrities, and, most of all, the overwhelming homogeneity. This boys’-club culture, she wrote, had already pushed talented women out of the field. It was also leading the entire community toward a dangerously narrow conception of artificial intelligence and its effect on the world.

Google had already deployed a computer-vision algorithm that classified Black people as gorillas, she noted. And the increasing sophistication of unmanned drones was putting the US military on a path toward lethal autonomous weapons. But there was no mention of these problems in Musk’s grand plan to stop AI from taking over the world at some point in the future. “We don’t need to predict the future to see AI’s harmful effects,” Gebru wrote. “It’s already happening.”

Gebru never published her reflections. But she realized that something needed to change. On January 28, 2016, she sent an email with the subject line “Hello from Timnit” to five other Black AI researchers. “I’ve always been saddened by the lack of color in AI,” she wrote. “But now I’ve seen 5 of you 🙂 and thought it would be cool if we started a Black in AI group or at least got to know each other.”

The email prompted a discussion. Did being Black shape their research? For Gebru, her work was very much a product of her identity; for others, it was not. But when they met, they agreed: if AI was going to play a bigger role in society, the field needed more Black researchers. Otherwise, it would produce weaker science, and its harmful consequences could get far worse.

Driven by dollars

Just as Black in AI was beginning to find its footing, AI was hitting its commercial stride. That same year, 2016, tech giants spent an estimated $20 to $30 billion developing the technology, according to the McKinsey Global Institute.

Buoyed by corporate investment, the field warped. Thousands more researchers began studying AI, but they overwhelmingly wanted to work on deep-learning techniques, such as those behind large language models. “As a young PhD student who wants to get a job at a tech company, you realize that tech companies are all about deep learning,” says Suresh Venkatasubramanian, a computer science professor who now serves at the White House Office of Science and Technology Policy. “So you shift all your research to deep learning. Then the next PhD student coming in looks around and says, ‘Everyone’s doing deep learning. I should probably do it too.’”

But deep learning is not the only technique in the field. Before its boom, there was a different AI approach known as symbolic reasoning. Whereas deep learning uses massive amounts of data to teach algorithms about meaningful relationships in information, symbolic reasoning focuses on explicitly encoding knowledge and logic based on human expertise.
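To make the contrast concrete, here is a minimal, hypothetical sketch in Python (the task and the rule list are invented for illustration): the symbolic approach hand-codes knowledge as explicit rules, while a deep-learning approach, shown only as commented pseudocode around a made-up classifier, would instead learn the same mapping from large amounts of labeled data.

```python
# Toy example: flag whether a sentence mentions an AI-related risk.

# 1) Symbolic reasoning: a person writes the knowledge down as explicit rules.
RISK_TERMS = {"bias", "surveillance", "weapon"}  # hand-curated "expert" knowledge

def symbolic_flags_risk(sentence: str) -> bool:
    """Return True if any hand-coded risk term appears in the sentence."""
    words = {w.strip(".,!?").lower() for w in sentence.split()}
    return bool(words & RISK_TERMS)

print(symbolic_flags_risk("Facial recognition raises surveillance concerns."))  # True

# 2) Deep learning (sketched as pseudocode only; the classifier below is made up):
#    rather than writing rules, you would collect many labeled sentences and let
#    a neural network learn the mapping from the data.
#
#    model = SomeNeuralTextClassifier()              # hypothetical class
#    model.fit(training_sentences, training_labels)  # needs lots of labeled data
#    model.predict(["Facial recognition raises surveillance concerns."])
```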

Some researchers now believe those techniques should be combined. A hybrid approach would make AI more efficient in its use of data and energy, and give it the knowledge and reasoning abilities of an expert as well as the capacity to update itself with new information. But companies have little incentive to explore alternatives when the surest way to maximize their profits is to build ever bigger models.

