Why Computers Cannot Yet Compare with Human Wisdom

Speech and language are essential to human intelligence, communication, and consciousness. Understanding natural language is often seen as one of the most important problems in AI — one that, once solved, would bring machines much closer to human intelligence.

In 2019, Microsoft and Alibaba announced that their systems had surpassed human performance on a task in the field of natural language processing (NLP) called reading comprehension. The news went relatively unnoticed, but I could see it was quite an achievement, because it reminded me of a similar milestone four years earlier.

In 2015, researchers from Microsoft and Google, building on the deep-learning work pioneered by Geoff Hinton and Yann LeCun, beat humans at recognizing images. I predicted at the time that computer vision would take off, and my firm invested in about a dozen companies building computer-vision products or applications. Today, those products are used in retail, manufacturing, health care, and transportation. Their combined value is now over $20 billion.

So in 2019, when I saw the same human-level threshold crossed in NLP, I expected that NLP algorithms would sharply improve the accuracy of speech recognition and machine translation, one day powering a “universal translator” like the one depicted in Star Trek. I also expected NLP to enable entirely new applications, such as a precise question-answering search engine (Larry Page’s vision for Google) and targeted content synthesis (making today’s targeted advertising look like child’s play). These could be applied in finance, health care, marketing, and consumer applications. Since then, we have been busy investing in NLP companies. I expect NLP to create even greater commercial value than computer vision.

What is behind this NLP breakthrough? A technique called self-supervised learning. Earlier NLP algorithms required data to be collected and laboriously labeled for each domain (such as Amazon Alexa, or a customer-service platform), which is expensive and error-prone. Self-supervised training, by contrast, works on essentially all the data in the world, producing a giant model that can contain several trillion parameters.
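
To make the idea concrete, here is a minimal Python sketch (not drawn from the article) of how self-supervision generates its own training labels from raw text: each sentence is turned into masked-word prediction examples, so no human annotation is needed. The corpus, mask token, and helper function are illustrative assumptions, not anything the article specifies.

```python
import random

def make_masked_examples(sentence: str, mask_token: str = "[MASK]"):
    """Turn one raw sentence into (masked_input, target_word) training pairs."""
    words = sentence.split()
    examples = []
    for i in range(len(words)):
        masked = words.copy()
        target = masked[i]          # the hidden word becomes the "label"
        masked[i] = mask_token
        examples.append((" ".join(masked), target))
    return examples

# An unlabeled "corpus": every sentence yields supervised examples for free.
corpus = [
    "speech and language are essential to human intelligence",
    "giant models learn the structure of language from raw text",
]

for sentence in corpus:
    for masked_input, target in random.sample(make_masked_examples(sentence), 2):
        print(f"{masked_input!r}  ->  predict {target!r}")
```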

This giant model is trained without human supervision — the AI teaches itself by learning the structure of language on its own. Then, given a modest amount of data from a particular domain, the giant model can be fine-tuned and applied to tasks such as machine translation, question answering, and natural dialogue. The fine-tuning keeps most of the giant model in place and requires only small adjustments. This is similar to how humans first learn a language and only then use it to acquire specific knowledge or skills.
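
As an illustration of this fine-tuning step (a sketch under assumptions, not the method of any particular system), the PyTorch snippet below freezes a stand-in “pretrained” network and trains only a small task-specific head on a handful of made-up domain examples. The layer sizes, fake data, and two-class task are all hypothetical.

```python
import torch
import torch.nn as nn

pretrained_body = nn.Sequential(        # stand-in for a giant pretrained language model
    nn.Embedding(10_000, 256),
    nn.Flatten(start_dim=1),
    nn.Linear(256 * 16, 512),
    nn.ReLU(),
)
for p in pretrained_body.parameters():  # freeze the big model: "very little change"
    p.requires_grad = False

task_head = nn.Linear(512, 2)           # small head for a hypothetical 2-class domain task

optimizer = torch.optim.Adam(task_head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Tiny batch of fake domain data: 16-token sequences and binary labels.
tokens = torch.randint(0, 10_000, (8, 16))
labels = torch.randint(0, 2, (8,))

for step in range(3):                   # a few fine-tuning steps
    features = pretrained_body(tokens)  # frozen features from the big model
    logits = task_head(features)        # only this small head is updated
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.3f}")
```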

Since 2019, we have seen these giant NLP models grow rapidly in size (roughly tenfold per year), with corresponding improvements in performance. We have also seen some jaw-dropping demonstrations, such as GPT-3, which can write in the style of almost anyone (say, Dr. Seuss), or Google’s LaMDA, which converses naturally in human-sounding dialogue, or a Chinese startup called Langboat that generates marketing copy customized for each individual.
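
For a sense of what “roughly tenfold per year” means, here is a back-of-the-envelope projection starting from GPT-3’s widely cited size of about 175 billion parameters; the starting point, year range, and constant growth rate are assumptions for illustration, not figures from the article.

```python
# Compound-growth sketch: ~10x more parameters each year, starting from GPT-3 scale.
params = 175e9           # GPT-3's widely cited size (~175 billion parameters), 2020
growth_per_year = 10     # assumed tenfold annual growth
for year in range(2020, 2024):
    print(f"{year}: ~{params:.2e} parameters")
    params *= growth_per_year
# A single year of tenfold growth already puts such models in the trillions.
```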

So are we any closer to solving the problem of natural language understanding? Skeptics argue that these algorithms merely memorize the world’s data and recall portions of it in seemingly intelligent ways, but that they do not understand what they say and possess no genuine intelligence. Central to human intelligence is the ability to reason, plan, and create.

