
AI voice actors sound more human than ever – and they’re ready to hire


The company’s blog post drips with the enthusiasm of a ’90s US infomercial. WellSaid Labs describes what clients can expect from its “eight new digital voice actors!” Tobin is described as “energetic and insightful.” Paige is “poised and expressive.” The voices are “polished, self-assured, and professional.”

Each one was built from a real voice actor, whose likeness (and permission) is preserved using AI. Companies can now license these voices to say whatever they need: they simply feed some text into the voice engine, and it produces a crisp, natural-sounding audio clip.

WellSaid Labs, a Seattle-based startup spun out of the nonprofit Allen Institute of Artificial Intelligence, is the latest company offering AI voices to clients. For now, it focuses on voice-overs for corporate training videos. Other startups make voices for digital assistants, call-center operators, and even video-game characters.

Not so long ago, such deepfake voices had a poor reputation, associated with scam calls and online fraud. But their improving quality has since attracted the attention of a growing number of companies. Recent breakthroughs in deep learning have made it possible to replicate many of the subtleties of human speech. These voices pause and breathe in all the right places. They can change their style or emotion. You can spot the trick if they speak for too long, but in short audio clips, some have become hard for human listeners to distinguish from the real thing.

AI voices are also cheap, scalable, and easy to work with. Unlike a recording of a human voice actor, synthetic voices can update their script in real time, opening up new opportunities for personalized advertising.

But the rise of hyperrealistic fake voices isn’t without consequences. Human voice actors, in particular, have been left wondering what this means for their livelihoods.

How to fake a voice

Synthetic voices have been around for a while. But older ones, including the voices of the original Siri and Alexa, simply stitched words and sounds together to achieve a stilted, robotic effect. Making them sound any more natural was a painstaking manual task.

Deep learning changed that. Voice developers no longer need to dictate the exact pacing, pronunciation, or intonation of the generated speech. Instead, they can feed a few hours of audio into an algorithm and let it learn those patterns on its own.

“If I’m Pizza Hut, I certainly can’t sound like Domino’s, and I certainly can’t sound like Papa John’s.”

Rupal Patel, founder and CEO of VocaliD

Over the years, researchers have used this basic idea to build increasingly sophisticated voice engines. The one WellSaid Labs constructed, for example, uses two primary deep-learning models. The first predicts, from a passage of text, the broad strokes of what a speaker will sound like – the combination of pitch, tone, and timbre. The second fills in the details, including breaths and pauses.
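The two-stage pipeline described above can be sketched in miniature. This is purely illustrative: real systems use large neural networks for both stages, and the function names and toy "acoustic features" here are hypothetical stand-ins, not WellSaid Labs’ actual architecture or API.

```python
def acoustic_model(text):
    """Stage 1: predict the broad strokes of the speech --
    one coarse feature frame (pitch, duration) per word."""
    frames = []
    for word in text.split():
        frames.append({
            "word": word,
            "pitch_hz": 120 + 5 * (len(word) % 4),  # toy pitch contour
            "duration_ms": 80 * len(word),          # longer words take longer
        })
    return frames

def vocoder(frames):
    """Stage 2: fill in the fine detail, turning coarse frames
    into (fake) audio segments -- here just sample counts, with
    a breath pause inserted after each phrase boundary."""
    segments = []
    for frame in frames:
        n_samples = frame["duration_ms"] * 16   # 16 samples/ms at 16 kHz
        segments.append((frame["word"], n_samples))
        if frame["word"].endswith((",", ".")):
            segments.append(("<breath>", 4000))  # pause in the right place
    return segments

audio = vocoder(acoustic_model("Hello there, welcome back."))
print(audio)
```

Splitting the problem this way lets each stage specialize: the first model only has to get the coarse prosody right, while the second concentrates on making the waveform itself sound natural.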

Creating a convincing synthetic voice takes more than just the push of a button, however. Part of what makes a human voice so human is its inconsistency, its expressiveness, and its ability to deliver the same lines in completely different styles depending on the context.

Capturing these nuances involves finding the right voice actors to supply the appropriate training data and fine-tuning the deep-learning models. WellSaid says the process requires at least an hour or two of audio and several weeks of labor to develop a realistic-sounding synthetic replica.

AI voices have grown particularly popular among brands looking to maintain a consistent sound across millions of interactions with customers. With the ubiquity of smart speakers today, and the rise of automated customer-service agents and digital assistants embedded in cars and smart devices, brands may need to produce upwards of a hundred hours of audio a month. But they no longer want to use the generic voices offered by traditional text-to-speech technology – a trend that accelerated during the pandemic as more and more customers skipped in-store interactions to engage with companies virtually.

“If I’m Pizza Hut, I certainly can’t sound like Domino’s, and I certainly can’t sound like Papa John’s,” says Rupal Patel, a professor at Northeastern University and founder and CEO of VocaliD, which promises to build custom voices that match a company’s brand identity. “These brands have thought about their colors. They’ve thought about their fonts. Now they’ve got to start thinking about the way their voice sounds as well.”

Whereas companies once had to hire different voice actors for different markets – the Northeast versus the Southern US, or France versus Mexico – some voice AI firms can manipulate the accent or switch the language of a single voice. This opens up the possibility of adapting ads on streaming platforms depending on who is listening, changing not only the characteristics of the voice but also the words being spoken. A beer ad could tell a listener to stop by a different pub depending on whether it’s playing in New York or Toronto, for example. Resemble.ai, which designs voices for ads and smart assistants, says it’s already working with clients to launch such personalized audio ads on Spotify and Pandora.
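The geo-targeted swap described above amounts to choosing a script variant per market before synthesis. A minimal sketch, assuming a hypothetical market table and `render_ad()` helper (the venue names are invented for illustration; streaming platforms expose their own, different ad APIs):

```python
# Map each listener market to the local venue the ad should mention.
# These entries are illustrative placeholders, not a real campaign.
MARKET_VENUES = {
    "New York": "O'Malley's on 7th",
    "Toronto": "The Maple Taproom",
}

AD_TEMPLATE = "Stop by {venue} for a cold one tonight."

def render_ad(listener_city, default_venue="your local pub"):
    """Pick the spoken line for this listener's market; a voice
    engine would then synthesize it in the brand's single,
    consistent voice."""
    venue = MARKET_VENUES.get(listener_city, default_venue)
    return AD_TEMPLATE.format(venue=venue)

print(render_ad("Toronto"))
print(render_ad("Berlin"))  # unknown market falls back to the generic line
```

Because the same synthetic voice reads every variant, the brand keeps one consistent sound while the copy itself changes per listener.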

The gaming and entertainment industries are also seeing the benefits. Sonantic, a firm that specializes in emotive voices that can laugh and cry or whisper and shout, works with video-game makers and animation studios to supply voice-overs for their characters. Many of its clients use the synthesized voices only in pre-production and switch to real voice actors for the final product. But Sonantic says a few have started using them throughout the process, perhaps for characters with fewer lines. Resemble.ai and others have also worked with films and TV shows to patch up actors’ performances when words get garbled or mispronounced.
