
The Pentagon Inches Toward Letting AI Control Weapons

Last August, several dozen military drones and tank-like robots took to the skies and roads 40 miles south of Seattle. Their mission: find suspects believed to be hiding among several buildings.

So many robots were involved in the exercise that no human operator could keep a close eye on all of them. They were therefore given instructions to find, and eliminate, enemy combatants when necessary.

The mission was only an exercise, organized by the Defense Advanced Research Projects Agency (Darpa), a blue-sky research division of the Pentagon; the robots were armed with nothing more lethal than radio transmitters designed to simulate interactions with both friendly and enemy robots.

The drill was one of several run last summer to test how artificial intelligence can help expand the use of automation in military systems, including in scenarios that are too complex and fast-moving for humans to make every critical decision. The demonstrations also reflect a subtle shift in the Pentagon’s thinking about autonomous weapons, as it becomes clearer that machines can outperform humans at parsing complex situations or reacting at high speed.

General John Murray of the US Army Futures Command told an audience at the US Military Academy last month that swarms of robots will force weapons manufacturers, policymakers, and society to consider whether a person should make every decision about using lethal force in new autonomous systems. Murray asked: “Is it within a human’s ability to pick out which ones have to be engaged” and then make 100 individual decisions? “Is it even necessary to have a human in the loop?” he added.

Other comments from military commanders suggest interest in giving autonomous weapons more agency. At a conference on AI in the Air Force last week, Michael Kanaan, director of the Air Force Artificial Intelligence Accelerator at MIT and a leading voice on AI within the US military, said that thinking is evolving. He says AI should handle more of the work of finding and distinguishing potential targets while humans make the high-level decisions. “I think that’s where we’re going,” Kanaan says.

At the same event, Lieutenant General Clinton Hinote, a deputy chief of staff at the Pentagon, said that whether a person can be removed from the loop of a lethal autonomous system is “one of the most interesting debates that is coming, [and] has not been settled yet.”

A report this month from the National Security Commission on Artificial Intelligence (NSCAI), an advisory body created by Congress, recommended, among other things, that the US resist calls for an international ban on the development of autonomous weapons.

Timothy Chung, the Darpa program manager in charge of the swarming project, says last summer’s exercises were designed to explore when a drone operator should, and should not, make decisions for the autonomous systems. For example, when facing attacks on several fronts, human control can sometimes get in the way of a mission, because people cannot react quickly enough. “Actually, the systems can do better from not having someone intervene,” Chung says.

The drones and wheeled robots, each about the size of a large backpack, were given an overall objective, then tapped AI algorithms to devise a plan to achieve it. Some of them surrounded buildings while others carried out surveillance sweeps. A few were destroyed by simulated explosives; some identified beacons representing enemy combatants and chose to attack.

The US and other nations have used autonomy in weapons systems for decades. Some missiles can, for instance, autonomously identify and attack enemies within a given area. But rapid advances in AI algorithms are changing how militaries use such systems. Off-the-shelf AI code capable of controlling robots and recognizing landmarks and targets, often with high reliability, will make it possible to deploy more systems in a wider range of situations.

But as the drone demonstrations show, more widespread use of AI will sometimes make it harder to keep a human in the loop. That could prove problematic, because AI technology can harbor biases or behave unpredictably. A vision algorithm trained to recognize a particular uniform might mistakenly target someone wearing similar clothing. Chung says the swarm project assumes that AI algorithms will improve to the point where they can identify enemies with enough reliability to be trusted.

