The insight driving CaliberAI is that this universe is bounded. While AI is nowhere near being able to adjudicate what is true and what is false, it can be taught to recognize the zone of language that tends to be defamatory.
Carl Vogel, a professor of computational linguistics at Trinity College Dublin, helped CaliberAI build its model. He has a working definition of defamatory statements: they must refer to an identifiable person or group; present a claim as fact rather than opinion; and use language or ideas associated with taboos, such as accusations of theft, drunkenness, or other impropriety. Feed the machine enough samples and it can learn to detect patterns and correlations among troublesome words, based on the company they keep. That allows it to make probabilistic judgments about which statements, if applied to a particular group or individual, land in defamatory territory.
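Vogel's three criteria can be pictured as a conjunction of checks. Below is a toy sketch in which every word list, name, and heuristic is invented for illustration; CaliberAI's real system is a trained statistical model, not a hand-written rule table.

```python
# Toy sketch of the three criteria as naive heuristics. All word lists
# and names are invented; the real system is a trained statistical model.

TABOO = {"liar", "thief", "fraud", "drunk"}        # taboo accusations
HEDGES = ("i believe", "i think", "allegedly")     # opinion markers
KNOWN_PEOPLE = {"john"}                            # stand-in for entity recognition

def looks_defamatory(sentence: str) -> bool:
    s = sentence.lower()
    words = set(s.replace(",", " ").replace(".", " ").split())
    refers_to_person = bool(words & KNOWN_PEOPLE)       # criterion 1: identifiable target
    stated_as_fact = not any(h in s for h in HEDGES)    # criterion 2: asserted, not opined
    taboo_language = bool(words & TABOO)                # criterion 3: taboo accusation
    return refers_to_person and stated_as_fact and taboo_language

print(looks_defamatory("I believe John is a liar"))      # False: hedged as opinion
print(looks_defamatory("Everyone knows John is a liar")) # True: all three criteria met
```

The rules are only a caricature; the point of the statistical approach is that the model learns these associations from labeled examples rather than from anyone writing them down.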
Of course, there was no ready-made corpus of defamatory material for CaliberAI to train on, because publishers work hard to keep such statements out of the world. So the company built its own. Conor Brady drew on his decades in journalism to draft a long list of defamatory statements. “We thought of the worst things that could be said about anyone, and we chopped, diced, and mixed them until we had covered the full spread of human failings,” he says. Then a team of linguists, led by Alan Reid and Abby Reynolds, used that seed text to build out a larger data set, annotating sentences on a scale from 0 (definitely not defamatory) to 100 (call your lawyer).
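The chop-dice-and-mix process Brady describes can be imagined as template recombination. A hypothetical sketch, with invented subjects, accusations, and scores (the real data set was drafted and annotated by hand):

```python
import itertools

# Hypothetical sketch of expanding a seed list by recombination.
# Subjects, accusations, scores, and the hedging discount are all invented.

subjects = ["The mayor", "The accountant"]
accusations = [("embezzled public funds", 95), ("works long hours", 5)]
prefixes = [("", 1.0), ("Sources say ", 0.8)]   # hedging lowers the label

dataset = []
for subj, (accusation, base), (prefix, weight) in itertools.product(
        subjects, accusations, prefixes):
    subject = subj if not prefix else subj[0].lower() + subj[1:]
    sentence = f"{prefix}{subject} {accusation}."
    dataset.append((sentence, round(base * weight)))  # label on the 0-100 scale

for sentence, label in dataset:
    print(f"{label:3d}  {sentence}")
```

Two lists of a few hundred entries each can yield tens of thousands of labeled combinations this way, which is the appeal of starting from a compact, expert-written seed.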
The result, for now, is something like a spell-checker for defamation. You can play with a demo on the company’s website, which warns that “you may encounter false/defamatory statements when exploring our prediction methods.” I typed “I believe John is a liar,” and the program returned a 40 percent probability of defamation, below its flagging threshold. Then I tried “Everyone knows John is a liar,” and the program returned about 80 percent, flagging “Everyone knows” (stated as fact), “John” (an identifiable person), and “liar” (potentially defamatory language). Obviously, that does not settle the question. In real life, my risk in court would hinge on whether I could prove that John really is a liar.
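The demo’s behavior on those two sentences can be mimicked with a toy additive scorer. The spans, point values, base score, and threshold below are invented to reproduce the roughly 40 and 80 percent outputs described above; the real model is statistical, not a lookup table.

```python
# Toy additive scorer mimicking the demo described above. Spans, point
# values, base score, and threshold are invented for illustration.

SIGNALS = {
    "everyone knows": ("stated as fact", +20),
    "john": ("identifiable person", +10),
    "liar": ("potentially defamatory language", +10),
    "i believe": ("opinion marker", -20),
}
THRESHOLD = 50  # scores above this get flagged for review

def score(sentence: str):
    s = sentence.lower()
    hits = [(span, reason) for span, (reason, _) in SIGNALS.items() if span in s]
    total = 40 + sum(pts for span, (_, pts) in SIGNALS.items() if span in s)
    return max(0, min(100, total)), hits

for text in ("I believe John is a liar", "Everyone knows John is a liar"):
    total, hits = score(text)
    verdict = "FLAG" if total > THRESHOLD else "ok"
    print(f"{total:3d}% {verdict}  {text}")
```

Note that the hedge “I believe” pulls the first sentence under the threshold while “Everyone knows” pushes the second over it, which matches the asymmetry the demo displayed.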
Paul Watson, the company’s chief technology officer, says: “We choose languages and return the instructions to our clients.” Then our clients have to spend years of experience saying, ‘Do I agree with this technology?’ I think it’s very important in what we build and try to do. We do not want to create machines for the entire universe. ”
It’s fair to wonder whether professional journalists really need an AI to warn them that they might be defaming someone. “Any good editor or reporter worth their salt ought to know it when they see it,” says Sam Terilli, a professor at the University of Miami’s School of Communication and former general counsel of the Miami Herald. “They should at least be able to identify the statements or passages that are risky and worth a deeper look.”
That ideal may not always be attainable, however, especially in an era of tightening budgets and pressure to publish quickly.
“I think there’s real potential value in these kinds of tools for media organizations,” says Amy Kristin Sanders, a media lawyer and journalism professor at the University of Texas. She points to the risks of publishing breaking news quickly, when a story may not get careful review. “For small and medium-sized newsrooms, which don’t have daily access to a general counsel, which may rely on large numbers of freelancers, and which may have pared back their copy desks, so that content isn’t edited as heavily as it once was, I think tools like these can be useful.”