These fake humans herald a new era in AI

Once considered a poor substitute for real data, synthetic data is now seen by some as a fix. Real data sets are messy and riddled with bias, and new privacy regulations restrict how data can be collected. Synthetic data, by contrast, is pristine and can be used to build more diverse data sets. You can produce perfectly labeled faces, say, of different ages, shapes, and ethnicities to build a face-detection system that works across populations.

But synthetic data has its limitations. If it fails to reflect reality, it can end up producing even worse AI than messy, biased real-world data, or it can simply inherit the same problems. “What I don’t want to do is give this a blanket endorsement and say, ‘Oh, this will solve so many problems,’” says Cathy O’Neil, a data scientist and founder of the algorithmic auditing firm ORCAA, “because it will also ignore a lot of things.”

Realistic, not real

Deep learning has always been about data. But in recent years, the AI community has learned that good data matters more than big data. Even a small amount of the right, cleanly labeled data can do more to improve an AI system’s performance than ten times as much uncurated data, or even a more advanced algorithm.

This changes the way companies should approach developing their AI models, says Datagen’s CEO and cofounder, Ofir Chakon. Today, they start by acquiring as much data as possible and then tweak and tune their algorithms for better performance. Instead, they should be doing the opposite: keep the algorithm fixed and focus on improving the composition of their data.

Datagen also generates synthetic furniture and indoor environments to place its fake humans in context.


But collecting real data to run these kinds of iterative experiments is too costly and time-consuming. This is where Datagen comes in. With synthetic-data generation, teams can create and test dozens of new data sets a day to identify which one maximizes a model’s performance.
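The data-centric workflow described above can be sketched in miniature: hold the algorithm fixed and search over data-set compositions instead. Everything below is illustrative, assuming a toy generator, a hypothetical `minority_fraction` knob, and a trivial threshold classifier standing in for a real model.

```python
import random

random.seed(0)

def make_samples(n, offset, noise):
    # Toy data: feature = label value + group offset + Gaussian noise.
    out = []
    for _ in range(n):
        label = random.random() < 0.5
        feature = (1.0 if label else 0.0) + offset + random.gauss(0.0, noise)
        out.append((feature, label))
    return out

def generate_training_set(minority_fraction, n=2000):
    # Hypothetical composition knob: what share of samples comes from
    # the minority group (whose features are shifted by +0.3).
    n_min = int(n * minority_fraction)
    return make_samples(n_min, 0.3, 0.3) + make_samples(n - n_min, 0.0, 0.3)

def fit_threshold(train):
    # The "algorithm" stays fixed: midpoint of per-class feature means.
    pos = [f for f, lab in train if lab]
    neg = [f for f, lab in train if not lab]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(threshold, data):
    return sum((f > threshold) == lab for f, lab in data) / len(data)

# Fixed, balanced validation sets, one per group.
val_minority = make_samples(1000, 0.3, 0.3)
val_majority = make_samples(1000, 0.0, 0.3)

# Data-centric loop: same algorithm, different data compositions.
scores = {}
for frac in (0.05, 0.25, 0.5):
    t = fit_threshold(generate_training_set(frac))
    scores[frac] = min(accuracy(t, val_minority), accuracy(t, val_majority))

print("worst-group accuracy by minority fraction:", scores)
print("best composition:", max(scores, key=scores.get))
```

Because the generator is cheap, trying another composition costs one function call rather than another data-collection campaign, which is the whole appeal of the approach.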

To ensure its data is realistic, Datagen gives its vendors detailed instructions on how many individuals to scan in each age bracket, BMI range, and ethnicity, along with a set list of actions for them to perform, such as walking around a room or drinking a soda. The vendors send back both high-fidelity static images and motion capture of those actions. Datagen’s algorithms then expand this material into many thousands of combinations. The synthesized data is sometimes checked again: fake faces are plotted against real faces, for example, to see whether they seem realistic.
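That expansion step, a handful of scanned seeds turned into thousands of labeled variants, is at its simplest a combinatorial product over variation axes. The sketch below uses invented axis names and values; Datagen's actual parameters and rendering pipeline are not public and certainly far richer.

```python
from itertools import product

# Hypothetical seed scans and variation axes (illustrative names only,
# not Datagen's actual parameters).
seed_scans = ["scan_001", "scan_002", "scan_003"]
actions = ["walk_around_room", "drink_soda", "sit_down"]
lighting = ["daylight", "indoor_warm", "low_light"]
camera_angles = [0, 45, 90, 135]
bmi_offsets = [-2.0, 0.0, 2.0]  # morph tweaks around the scanned body

# Every combination becomes one fully labeled synthetic sample spec.
dataset = [
    {
        "seed": s,
        "action": a,
        "lighting": lt,
        "camera_deg": c,
        "bmi_offset": b,
    }
    for s, a, lt, c, b in product(
        seed_scans, actions, lighting, camera_angles, bmi_offsets
    )
]

print(len(dataset))  # 3 * 3 * 3 * 4 * 3 = 324 variants from 3 scans
```

The multiplicative blow-up is the point: each new axis multiplies the data-set size, while the labels come for free because every variant is generated from a known specification.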

Datagen is now generating facial expressions to monitor driver alertness, body motions to track customers in cashierless stores, and irises and hand motions to improve the eye- and hand-tracking capabilities of VR headsets. The company says its data has already been used to develop computer-vision systems serving tens of thousands of users.

Datagen isn’t the only company making it big on synthetic data. Click-Ins is a startup that uses synthetic data to train AI for automated vehicle inspection. Using design software, it re-creates the car makes and models its AI needs to recognize, then renders them with different colors, damage, and deformations under varying conditions. This lets the company update its AI when automakers release new models, and helps it avoid data-privacy violations in countries where license plates are considered private information and thus cannot appear in the images used to train AI.

Click-Ins renders a wide range of synthetic vehicles in different colors.

The company also works with financial, telecom, and insurance firms to provide spreadsheets of fake client data, letting those firms share their customer databases with outside vendors in a legally compliant way. Anonymization can reduce a data set’s richness yet still fail to adequately protect people’s privacy. Synthetic data, by contrast, can be used to generate detailed fake data sets that share the same statistical properties as a company’s real data. It can also simulate data the company does not yet have, including a more diverse client population or scenarios such as fraudulent activity.
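A minimal sketch of the statistics-matching idea: fit a model of the real table, then sample fresh rows from it. Here the "model" is deliberately crude, independent per-column Gaussians with invented column names; real products model joint structure with much richer generators such as copulas or GANs.

```python
import random
import statistics

random.seed(42)

# Toy stand-in for a real customer table (illustrative columns).
real = [
    {"age": random.gauss(45, 12), "balance": random.gauss(20_000, 5_000)}
    for _ in range(5_000)
]

def fit_gaussians(rows):
    # Fit an independent Gaussian (mean, stdev) per column.
    cols = rows[0].keys()
    return {
        c: (statistics.mean(r[c] for r in rows),
            statistics.stdev(r[c] for r in rows))
        for c in cols
    }

def sample_synthetic(params, n):
    # Fresh rows drawn from the fitted model: no row is a real customer.
    return [
        {c: random.gauss(mu, sd) for c, (mu, sd) in params.items()}
        for _ in range(n)
    ]

params = fit_gaussians(real)
fake = sample_synthetic(params, 5_000)

# The synthetic table's column statistics track the originals.
for col in params:
    real_mu = statistics.mean(r[col] for r in real)
    fake_mu = statistics.mean(r[col] for r in fake)
    print(col, round(real_mu, 1), round(fake_mu, 1))
```

Even this toy version shows both the promise and the caveat raised later in the article: the fake rows preserve aggregate statistics, but whether they leak anything about individuals depends entirely on how the generator was built.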

Synthetic-data experts say the technique could also help evaluate AI. In a recent paper published at an AI conference, Suchi Saria, an assistant professor of machine learning and health care at Johns Hopkins University, and her coauthors demonstrated how data-generation techniques can be used to extrapolate different patient populations from a single data set. This would be useful if, for example, a company had data only from New York City’s younger population but wanted to understand how its AI performs on an older population with a higher prevalence of diabetes. She is now launching her own company, Bayesian Health, which will use the method to help test medical AI systems.

The limits of fake data

But is synthetic data all it’s made out to be?

When it comes to privacy, “just because the data is ‘synthetic’ and does not directly correspond to real users does not mean that it encodes no information about real people,” says Aaron Roth, a professor of computer and information science at the University of Pennsylvania. Some data-generation techniques have been shown to closely reproduce images or text from their training data, for example, while others are vulnerable to attacks that recover that data outright.

This might be fine for a firm like Datagen, whose data isn’t meant to conceal the identities of the individuals who consented to be scanned. But it would be bad news for companies that offer their solution as a way to protect sensitive financial or medical information.

Research suggests that the combination of two techniques in particular, differential privacy and generative adversarial networks, can produce the strongest privacy protections, says Bernease Herman, a data scientist at the University of Washington eScience Institute. But skeptics worry that this nuance gets lost in the marketing: synthetic-data vendors won’t always disclose which techniques they are using.
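One of those two techniques, differential privacy, can be illustrated with the textbook Laplace mechanism: answer aggregate queries with calibrated noise so that no single record is identifiable. This is a classroom sketch, not any vendor's implementation, and the GAN half of the combination is omitted entirely.

```python
import math
import random

random.seed(7)

def laplace_noise(scale):
    # Inverse-CDF sampling of a Laplace(0, scale) variate.
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def dp_count(values, predicate, epsilon):
    # Counting query under the Laplace mechanism. A count has
    # sensitivity 1 (adding or removing one record changes it by at
    # most 1), so the noise scale is 1 / epsilon.
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

# Toy record set: True = record has the sensitive attribute.
records = [random.random() < 0.3 for _ in range(10_000)]

# Smaller epsilon -> stronger privacy guarantee -> noisier answers.
for eps in (0.1, 1.0, 10.0):
    print(f"epsilon={eps}: noisy count ~ {dp_count(records, bool, eps):.1f}")
```

The privacy guarantee comes from the noise scale, not from the data being "fake": the same mechanism can be layered onto a synthetic-data generator, which is roughly what the differential-privacy-plus-GAN combination mentioned above does.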
