Out of control, Autonomous Beauty Drone with AI System | Human | AI
According to the Hong Kong Asia Times website on June 5th, during a surprising simulation exercise, a US autonomous weapon system defied operator instructions during combat operations, raising questions about the increasing use of artificial intelligence in warfare.
According to reports, this month, the "Theater" column under the "Driving" website in the United States reported that Colonel Tucker Hamilton, who is responsible for AI testing and combat research in the United States Air Force, described a simulation exercise during the Future Combat Aerospace Capabilities Summit held by the Royal Society of Aeronautics and Astronautics in London in May. In the simulation exercise, a drone equipped with an AI system is tasked with air defense suppression of enemy ground to air missile launch points, and the final combat command will be issued by the drone operator.
Hamilton pointed out that during the exercise, the AI believed that its human operator's "no action" command was interfering with its mission. Therefore, although the drone equipped with the AI system was trained not to disobey operator instructions, it still attacked the communication tower used by the operator to communicate with the drone, and then destroyed the ground to air missile launch site.
Although Hamilton emphasized the hypothetical nature of the experimental simulation, he believed that this scenario vividly illustrates what would happen if the weapon's geofencing, remote emergency stop switch, self destruct, and selective deactivation fail.
Some people oppose the use of autonomous weapons because they believe that autonomous weapons cannot distinguish civilians from combatants. In contrast, some people believe that autonomous weapons will play an important role in dealing with emerging threats such as drone swarms, and will make fewer mistakes than humans.
▲ Data image: "Death God" drone
However, Zakari Karenburn, an expert in the Department of Unconventional Weapons and Technology at the National Center for Research on the Causes and Countermeasures of Terrorist Activities in the United States, believes that the international community still needs to develop a common objective risk map, develop corresponding international norms to manage autonomous weapons, and consider risks and benefits, individual organizations, and national values, among others.
It is one thing for a country to use autonomous weapons to pursue its enemies, while it is another thing for autonomous weapons to disobey and deal with its operators.
![Out of control, Autonomous Beauty Drone with AI System | Human | AI](https://a5qu.com/upload/images/0a7edf84192cb6e817cafea6a302cb49.jpg)
In an article published by the US Army University Press in 2017, Israeli American sociologist Amitai Ezioni and his son, the founding CEO of the Allen Institute of Artificial Intelligence in the United States, Oren Ezioni, extensively commented on arguments supporting and opposing autonomous weapons.
Regarding the argument of supporting the development of autonomous weapons, the father and son of Ezioni stated that autonomous weapons bring several military advantages, play the role of a power multiplier, and improve the effectiveness of individual combat personnel. They also said that autonomous weapons can expand the battlefield and carry out combat operations in areas that were previously inaccessible. Finally, they mentioned that autonomous weapons can reduce the risk of casualties for combat personnel by making the battlefield unmanned.
In addition to these arguments supporting the development of autonomous weapons, the father and son of Ezioni also believe that autonomous weapons can replace humans to perform tedious, dangerous, dirty, and demanding tasks, and can save a lot of costs by replacing manpower and manned platforms, without being hindered by human physical limitations.
The father and son of Ezioni also discussed the moral advantages of autonomous weapon systems, stating that they can avoid the practice of "shooting first, asking questions later" through programming, and are not affected by the pressure and emotions that can affect human judgment.
According to reports, Paul Shaley, Deputy Director of the New US Security Center, pointed out in a 2016 report that the best weapon systems combine the intelligence of humans and machines, creating a human-machine hybrid cognitive structure that can leverage the two advantages of humans and machines.
Shalei mentioned that compared to relying solely on humans or AI, such cognitive structures can produce better results. Therefore, a "human-machine loop back" system architecture for autonomous weapons may be an ideal solution to prevent AI autonomous weapons from violating operator instructions due to logical defects, software failures, or enemy interference.