Clearview AI Has New Tools to Identify You in Photos

Clearview AI sparked controversy by scraping pictures from the internet and using them for facial recognition, giving police and others an unprecedented ability to pry into our lives. Now the company’s CEO wants to apply artificial intelligence to make Clearview’s tool even more powerful.

That could also make it more dangerous and error-prone.

Clearview has collected billions of images from websites including Facebook, Instagram, and Twitter, and it uses AI to identify the people who appear in them. Police and government agencies have used the company’s database to help identify suspects in photos.

The company’s founder and CEO, Hoan Ton-That, tells WIRED that Clearview has now collected more than 10 billion images from the internet – three times more than previously reported.

Ton-That says the larger pool of photos means users, most often law enforcement agencies, are more likely to find a match when searching for someone. He also claims the larger data set makes the company’s tool more accurate.

Clearview combined web-crawling techniques, advances in machine learning that have improved facial recognition, and a disregard for personal privacy to create a surprisingly powerful tool.

Ton-That demonstrated the technology through a smartphone app by taking a photo of a reporter. The app returned dozens of images from numerous US and international websites, each showing the correct person in photos captured over more than a decade. The appeal of such a tool is obvious, as is its potential for misuse.

Clearview’s actions sparked public outrage and a wider debate over expectations of privacy in an era of smartphones, social media, and AI. Critics say the company violates people’s privacy. The ACLU sued Clearview in Illinois under a law that restricts the collection of biometric information; the company also faces class-action lawsuits in New York and California. Facebook and Twitter have demanded that Clearview stop scraping their sites.

The backlash has not deterred Ton-That. He says he believes most people accept or support the idea of using facial recognition to solve crimes. “The people who are worried about it are very vocal, and that’s a good thing, because I think over time we can address more and more of their concerns,” he says.

Some of Clearview’s new technologies may spark further controversy. Ton-That says it is developing new ways for police to find a person, including “deblur” and “mask removal” tools. The first takes a blurred image and sharpens it using machine learning to envision what a clearer picture would look like; the second tries to envision the covered part of a person’s face using machine-learning models that fill in missing details of an image with a best guess based on patterns found in other images.
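
The “mask removal” feature, as described here, is essentially generative image inpainting: a model fills in occluded pixels with a plausible guess learned from patterns in other images. Clearview’s own tools are proprietary and unpublished, so the sketch below is only a rough, minimal illustration of the general technique, using an off-the-shelf diffusion inpainting model via the Hugging Face diffusers library; the model ID, file names, and prompt are assumptions made for the example, not anything Clearview is known to use.

```python
# Illustrative only: generic diffusion-based inpainting, not Clearview's proprietary tool.
# Assumes `pip install diffusers transformers torch pillow` and placeholder file names.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load a publicly available inpainting model (example model ID, assumed here).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

# `photo.png` is the input image; `mask.png` is white where pixels are hidden
# (for example, a face covering) and black elsewhere. Both names are placeholders.
photo = Image.open("photo.png").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("L").resize((512, 512))

# The model fills the masked region with a statistically plausible guess learned
# from its training images; it invents details rather than recovering them.
result = pipe(prompt="a person's face", image=photo, mask_image=mask).images[0]
result.save("inpainted.png")
```

The critics’ point is visible in the code itself: the output is a plausible invention, not a recovery of the hidden pixels, which is why treating an “enhanced” image as identification evidence is risky.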

Those capabilities could make Clearview’s technology more attractive, but also more problematic. It is not clear how well the new techniques actually work, but experts say they could increase the risk of misidentification and exacerbate the biases inherent in such systems.

“I would expect the accuracy to be quite bad, and even beyond accuracy, without careful control over the training data and process I would expect a slew of unintended consequences,” says Aleksander Madry, a professor at MIT who works on machine learning. Without due care, for example, the approach could make people with certain features more likely to be wrongly identified.

Even if the technology works as promised, Madry says, the consequences of unmasking people are problematic. “Think of people who masked themselves to take part in a peaceful protest or who were blurred to protect their privacy,” he says.

Ton-That says tests have found that the new tools improve the accuracy of Clearview’s results. “Any enhanced images should be noted as such, and extra care taken when evaluating results that may come from an enhanced image,” he says.

