35 Innovators Under 35: AI and robots
But GPT-3 suffers from several problems that researchers are working to address. First, it’s often inconsistent: it can give contradictory answers to the same question. Second, GPT-3 is prone to “hallucinations”: asked who the president of the United States was in 1492, it will happily conjure up an answer. Third, GPT-3 is expensive to train and expensive to run. Fourth, GPT-3 is opaque; it’s difficult to understand why it drew a particular conclusion. Finally, since GPT-3 parrots the contents of its training data, which is drawn from the web, it often spews out toxic content, including sexism, racism, xenophobia, and more. In essence, GPT-3 cannot be trusted.
Despite these challenges, researchers are investigating multi-modal versions of GPT-3 (such as DALL-E 2), which create realistic images from natural-language requests. AI developers are also considering how to use these insights in robots that interact with the physical world. And AI is increasingly being applied to biology, chemistry, and other scientific disciplines to glean insights from the massive, complex data in those fields.
The bulk of the rapid progress today is in this data-centric AI, and the work of this year’s 35 Innovators Under 35 winners is no exception. While data-centric AI is powerful, it has key limitations: the systems are still designed and framed by humans. A few years ago, I wrote an article for MIT Technology Review called “How to know if artificial intelligence is about to destroy civilization.” I argued that successfully formulating problems remains a distinctly human capability. Pablo Picasso famously said, “Computers are useless. They only give you answers.”
We continue to anticipate the distant day when AI systems can formulate good questions — and shed more light on the fundamental scientific challenge of understanding and constructing human-level intelligence.
Oren Etzioni is CEO of the Allen Institute for AI and a judge for this year’s 35 Innovators competition.