Pentagon Strengthens Its AI Systems by Hacking Itself

The Pentagon sees artificial intelligence as a way to outfox, outmaneuver, and dominate future adversaries. But the brittle nature of AI means that, without due care, the technology could hand enemies a new way to attack.

The Joint Artificial Intelligence Center (JAIC), created by the Pentagon to help US forces make use of AI, recently formed a unit to collect, vet, and distribute machine learning models to teams across the Department of Defense. Part of that effort points to a key challenge of using AI on the battlefield. A machine learning “red team,” known as the Test and Evaluation Group, will probe pretrained models for weaknesses. A separate cybersecurity team examines AI code and data for hidden vulnerabilities.

Machine learning, the technique behind modern AI, represents a fundamentally different, often more powerful, way to write computer code. Instead of programmers spelling out the rules for a machine to follow, machine learning generates its own rules by learning from data. The trouble is that this learning process, along with artifacts or errors in the training data, can cause AI models to behave in strange or unpredictable ways.
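
To make that distinction concrete, here is a minimal Python sketch; scikit-learn and the toy features are illustrative assumptions, not tools or data named in the article. The first function is a rule a programmer states explicitly; the model below it derives its own rule from labeled examples.

```python
# A minimal sketch of the contrast described above: a hand-written rule
# versus a rule the machine derives from data. scikit-learn and the toy
# numbers are illustrative assumptions, not anything the Pentagon uses.
from sklearn.tree import DecisionTreeClassifier

# Traditional software: a programmer states the rule explicitly.
def is_vehicle_rule(length_m, speed_kmh):
    return length_m > 4.0 and speed_kmh > 20.0

# Machine learning: the model infers its own rule from labeled examples.
X = [[5.2, 60.0], [4.8, 35.0], [1.7, 5.0], [2.0, 0.0]]  # [length, speed]
y = [1, 1, 0, 0]                                        # 1 = vehicle

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[5.0, 50.0]]))  # learned behavior, not a coded rule
```

The learned rule lives in the fitted model’s parameters rather than in readable source code, which is part of what makes such systems hard to inspect and easy to skew with flawed data.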

“For some applications, machine learning software is just a billion times better than traditional software,” says Gregory Allen, director of strategy and policy at the JAIC. But, he adds, machine learning “also breaks in different ways than traditional software.”

A machine learning model trained to spot certain vehicles in satellite images, for example, might also learn to associate the vehicle with a certain kind of surrounding scenery. An adversary could then fool the AI by changing the scenery around its vehicles. And with access to the training data, an adversary might be able to plant images, such as a particular symbol, that would confuse the algorithm.
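
The following contrived sketch shows that failure mode; the features, numbers, and model are invented for illustration and reflect no real military system. Because every vehicle in the training set happens to sit on the same kind of terrain, the model learns the terrain rather than the vehicle.

```python
# Contrived illustration of the spurious-correlation problem: in this
# (hypothetical) training set the terrain, not the vehicle signature,
# is the only thing that separates the classes, so the model learns it.
from sklearn.linear_model import LogisticRegression

# Features: [vehicle_shape_score, terrain_brightness]
X_train = [[0.7, 0.9], [0.3, 0.8],   # vehicles, always on bright terrain
           [0.7, 0.1], [0.3, 0.2]]   # no vehicle, always on dark terrain
y_train = [1, 1, 0, 0]

model = LogisticRegression().fit(X_train, y_train)

# The same strong vehicle signature, with only the terrain changed:
print(model.predict([[0.7, 0.9]]))  # [1] vehicle on familiar terrain
print(model.predict([[0.7, 0.1]]))  # [0] fooled: only the scenery changed
```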

Allen says the Pentagon follows strict rules concerning the reliability and security of the software it uses. He says the approach can be extended to AI and machine learning, and notes that the JAIC is working to update DoD standards around software to cover the challenges particular to machine learning.

AI is changing the way some businesses operate because it can be an efficient and powerful way to automate tasks and processes. Instead of writing an algorithm to predict which products a customer will buy, for example, a company can have an AI algorithm examine thousands or millions of previous sales and build its own model for predicting who will buy what.
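
As a sketch of that pattern, the snippet below fits a model to a handful of invented past-sales records; the feature names and numbers are assumptions for illustration, not a real retail dataset.

```python
# Hypothetical purchase-prediction sketch: fit a model to past sales
# instead of hand-coding "customers who do X will buy" rules.
from sklearn.linear_model import LogisticRegression

# Each row: [age, visits_last_month, past_purchases]; label: bought or not.
past_sales_X = [[25, 8, 3], [40, 2, 0], [33, 5, 2],
                [51, 1, 1], [29, 9, 4], [45, 0, 0]]
past_sales_y = [1, 0, 1, 0, 1, 0]

model = LogisticRegression(max_iter=1000).fit(past_sales_X, past_sales_y)

# Probability that a new customer buys, according to the learned model:
print(model.predict_proba([[35, 6, 2]])[0, 1])
```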

The U.S. and other militaries see similar advantages, and are rushing to use AI to improve logistics, intelligence gathering, mission planning, and weapons technology. China’s growing technological capability has stoked a sense of urgency within the Pentagon around adopting AI. Allen says the DoD is moving “in a way that prioritizes safety and reliability.”

Researchers are developing ever more creative ways to hack, subvert, or break AI systems in the wild. In October 2020, researchers in Israel showed how carefully tweaked images can confuse the AI algorithms that let a Tesla interpret the road ahead. Such “adversarial attacks” involve tweaking the input to a machine learning algorithm to find small changes that cause big errors.
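
The toy numpy sketch below illustrates that “small change, big error” idea in the style of the fast gradient sign method, a standard adversarial-attack technique; the linear model and numbers are invented, and no claim is made that this matches the Israeli researchers’ method.

```python
# FGSM-style attack on a toy classifier: nudge each input feature by
# +/- epsilon in the direction that increases the loss, flipping the
# prediction. The "trained" weights here are illustrative assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy trained classifier: score = w . x + b, class 1 if probability > 0.5.
w = np.array([6.0, -8.0, 4.0])
b = 0.1
x = np.array([0.3, -0.1, 0.4])           # correctly classified input
print(sigmoid(w @ x + b))                # ~0.99 -> class 1

epsilon = 0.3
grad_wrt_x = (sigmoid(w @ x + b) - 1.0) * w  # gradient of log-loss wrt x
x_adv = x + epsilon * np.sign(grad_wrt_x)    # one signed step per feature

print(np.max(np.abs(x_adv - x)))         # each feature moved by <= 0.3
print(sigmoid(w @ x_adv + b))            # ~0.25 -> flipped to class 0
```

In a real image model the input has thousands of pixels, so a far smaller per-pixel nudge can shift the score just as much, which is why such perturbations can be invisible to a human.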

Dawn Song, a professor at UC Berkeley who has conducted similar experiments on Tesla’s sensors and other AI systems, says attacks on machine learning algorithms are already an issue in areas such as fraud detection, and some companies offer tools to test the AI systems used in finance. “Naturally there is an attacker who wants to evade the system,” she says. “I think we’ll see more of these kinds of issues.”

A simple example of an attack on machine learning involved Tay, Microsoft’s infamous chatbot, which debuted in 2016. The bot used an algorithm that learned how to respond to new queries by examining previous conversations; Redditors quickly realized they could exploit this to get Tay to spew hateful messages.
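
A toy sketch of that poisoning pattern appears below; the bot logic is invented for illustration and is not Microsoft’s actual system, but it shows how a model that imitates the most common past reply can be outvoted by coordinated users.

```python
# Toy data-poisoning sketch: a bot that answers a query by copying the
# most common reply in its conversation history can be steered by anyone
# who floods that history. Hypothetical logic, not Microsoft's system.
from collections import Counter, defaultdict

history = defaultdict(Counter)  # query -> counts of observed replies

def learn(query, reply):
    history[query.lower()][reply] += 1

def respond(query):
    replies = history[query.lower()]
    return replies.most_common(1)[0][0] if replies else "Tell me more!"

# Benign conversations teach a benign reply...
learn("what do you think of people?", "People are great!")

# ...but coordinated users can outvote it with a malicious one.
for _ in range(100):
    learn("what do you think of people?", "<hateful message>")

print(respond("what do you think of people?"))  # "<hateful message>"
```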
