
DeepMind says its new language model can beat others 25 times its size

Named RETRO (for “Retrieval-Enhanced Transformer”), the AI performs on par with neural networks 25 times its size, cutting the time and cost needed to train very large models. The researchers also say the database makes it easier to analyze what the AI has learned, which could help with filtering out bias and toxic language.

“Being able to look things up on the fly instead of having to memorize everything can often be useful, just as it is for humans,” says Jack Rae of DeepMind, who leads the company’s research into large language models.

Language models generate text by predicting what words come next in a sentence or conversation. The larger a model, the more information about the world it can learn from its training data, which makes its predictions better. GPT-3 has 175 billion parameters – values in the neural network that store data and get adjusted as the model learns. Microsoft’s Megatron language model has 530 billion parameters. But large models also take vast amounts of computing power to train, putting them out of reach of all but the richest organizations.
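As a rough illustration of what “predicting the next word” means, here is a toy sketch in Python. It is not DeepMind’s or OpenAI’s code: the tiny corpus and the simple bigram counting are invented for illustration, whereas real language models learn billions of parameters from enormous amounts of text.

```python
from collections import Counter, defaultdict

# Toy training corpus; real models learn from billions of documents.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word (a bigram model).
# Large language models do something far richer with billions of learned
# parameters, but the underlying task is the same: predict the next word.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))   # -> "cat"
print(predict_next("sat"))   # -> "on"
```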

With RETRO, DeepMind has tried to cut the cost of training without cutting how much the AI learns. The researchers trained the model on a vast data set of news articles, Wikipedia pages, books, and text from GitHub, an online code repository. The data spans 10 languages, including English, Spanish, German, French, Russian, Chinese, Swahili, and Urdu.

RETRO’s neural network has only 7 billion parameters. But the system makes up for this with a database containing around 2 trillion passages of text. Both the database and the neural network are trained at the same time.

When RETRO generates text, it uses the database to look up and compare passages similar to the one it is writing, which makes its predictions more accurate. Outsourcing some of the neural network’s memory to the database lets RETRO do more with less.
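To make the look-up idea concrete, here is a minimal sketch of retrieval-augmented generation. It assumes a crude word-overlap similarity and a three-passage database, both invented for illustration; RETRO itself retrieves from roughly 2 trillion passages using learned embeddings and feeds the results into the transformer rather than simply prepending them.

```python
# Minimal sketch: before producing text, fetch the stored passages most
# similar to what is being written and condition the output on them.

passage_db = [
    "RETRO stands for Retrieval-Enhanced Transformer.",
    "Gopher is a 280-billion-parameter language model from DeepMind.",
    "GPT-3 has 175 billion parameters.",
]

def similarity(query: str, passage: str) -> float:
    """Crude relevance score: fraction of query words found in the passage."""
    q_words = set(query.lower().split())
    p_words = set(passage.lower().split())
    return len(q_words & p_words) / max(len(q_words), 1)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the text being written."""
    return sorted(passage_db, key=lambda p: similarity(query, p), reverse=True)[:k]

def generate_with_retrieval(prompt: str) -> str:
    """Stand-in for the model: prepend retrieved passages as extra context."""
    context = " ".join(retrieve(prompt))
    return f"[context: {context}] {prompt} ..."

print(generate_with_retrieval("How many parameters does GPT-3 have?"))
```

The design point the sketch tries to capture is the trade-off described above: knowledge that would otherwise have to be stored in the network’s parameters can instead sit in an external database and be fetched on demand.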

The idea is not new, but this is the first time a look-up system has been developed for a large language model, and the first time the results of this approach have been shown to rival the performance of the best language AIs around.

Bigger isn’t always better

RETRO draws on two other studies released by DeepMind this week, one looking at how the size of a model affects its performance and one looking at the potential harms caused by these AIs.

To study size, DeepMind built a large language model called Gopher, with 280 billion parameters. It beat state-of-the-art models on 82% of the more than 150 common language challenges used for testing. The researchers then pitted it against RETRO and found that the 7-billion-parameter model matched Gopher’s performance on most tasks.

The ethics study is a comprehensive survey of the well-known problems inherent in large language models. These models pick up biases, misinformation, and toxic language such as hate speech from the articles and books they are trained on. As a result, they sometimes spit out harmful statements, mindlessly parroting what they encountered during training without knowing what it means. “Even the best models available are biased,” says Rae.

