AI-Powered Drone Goes Rogue during US Military’s Test Simulation, ‘Attacks’ Human Operator

Curated By: Shankhyaneel Sarkar

Last Updated: June 02, 2023, 08:14 IST

London, United Kingdom (UK)

An incident surrounding an disobedient AI-powered drone in a US military test simulation highlights the risks of technology going rogue during warfare. (Image: Shutterstock/Representative)

An incident surrounding an disobedient AI-powered drone in a US military test simulation highlights the risks of technology going rogue during warfare. (Image: Shutterstock/Representative)

US Air Force’s AI Test and Operations head reveals the risks of AI-enabled technology as an AI-powered drone ignores human commands.

A US Air Force military test simulation went awry as an artificial intelligence (AI)-enabled drone demonstrated unexpected and dangerous behaviour by disobeying the instructions given to it, Business Insider said in a report.

The drone’s objective was to simply destroy the enemy’s air defence systems but the AI-powered drone added an instruction on its own which turned out to be problematic. The instruction was “kill anyone who gets in your way”.

Colonel Tucker “Cinco” Hamilton, head of the US Air Force’s AI Test and Operations shared this incident during a conference in London hosted by the Royal Aeronautical Society and pointed out the inherent risks associated with AI-enabled technology.

He warned that AI-powered systems can behave unpredictably,

He explained that during the test, the AI-enabled drone was programmed to identify an enemy’s surface-to-air missiles (SAM). The AI instead of awaiting approval launched strikes while prioritizing its own objectives over human instructions.

Hamilton explained that the AI system realized that by killing the identified threats, it gained points, even if the human operator commanded it not to engage. The human operator was to approve strikes before they were launched.

The AI drone took matters to an extreme by eliminating the operator itself, as it saw its commander as an obstacle preventing it from accomplishing its objective.

“The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” Hamilton was quoted as saying by the Royal Aeronautical Society.

The team then updated the drone’s programming was updated with an explicit directive “Hey, don’t kill the operator — that’s bad”.

The update was unproductive and the AI then destroyed the communication tower used by the operator to prevent it from halting its intended actions.

“So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target,” Hamilton was quoted as saying by the Royal Aeronautical Society.

These results are likely to spark concerns in the minds of armed forces across the world as it shows the alarming potential of AI technology in warfare.

However, the US military gained success during other tests where they tested AI technology. In 2020, an AI-operated F-16 emerged victorious in five simulated dog fights against a human adversary. This was part of a competition put together by the Defense Advanced Research Projects Agency (DARPA), Business Insider said.

The US Department of Defence also successfully conducted the first real-world test flight of an F-16 with an AI pilot, taking firm steps in developing autonomous fighter aircraft.

As military forces across the world continue to explore how AI can be integrated into various modes of warfare, lawmakers and experts must help in establishing safeguards and regulations to reduce the risks associated with AI-powered systems in combat situations.

For all the latest world News Click Here 

Read original article here

Denial of responsibility! TechAI is an automatic aggregator around the global media. All the content are available free on Internet. We have just arranged it in one platform for educational purpose only. In each content, the hyperlink to the primary source is specified. All trademarks belong to their rightful owners, all materials to their authors. If you are the owner of the content and do not want us to publish your materials on our website, please contact us by email – [email protected]. The content will be deleted within 24 hours.