How can we prevent Terminator from coming true? AI orders drones to blow up US military operators, including humans and Terminator
The rapid development of artificial intelligence has finally alarmed the United Nations Security Council by posing a threat to humanity.
This week, the Security Council held a high-level public meeting in New York with the theme of "Opportunities and Risks brought by artificial intelligence to international peace and security". This is the first time the Security Council has held a meeting on AI issues.
UN Secretary General Guterres has warned that AI may be used for terrorism, inciting hatred and violence, and the UN must reach a legally binding agreement by 2026 to ban the use of AI in automated weapons of war. Guterres also called for the establishment of a global regulatory body to regulate, supervise, and enforce artificial intelligence rules.
The scene of AI robots killing people autonomously has awakened people's memories of the science fiction movie Terminator. In fact, as early as March 2020, the first autonomous machine killing incident in human history occurred on the Libyan battlefield. In June of this year, the US military also revealed a shocking incident during simulation testing - AI ordered drones to launch attacks on human operators who were preventing them from completing their tasks.
Many people view AI as a "Pandora's box". Previously, the world's focus was mainly on whether generative AI, represented by CHAT GPT, will take away our jobs, and the new scams represented by AI face swapping will bring risks to people's lives. However, AI powered weapons and equipment demonstrate the potential to physically eliminate humans, making this threat more direct and deadly. Chao News invites military technology experts to help you sort out this topic thoroughly.
"Pig teammates dragging their feet"? Attacking Operators in AI Drone Testing
In early June, according to Global Times, several media outlets including the American "Drive" website reported a shocking news that during a simulation test by the US military, a drone equipped with AI technology transformed into a "Terminator" and aimed to attack human operators who were obstructing its "more efficient" mission.
According to reports, Colonel Hamilton of the US Air Force exposed this news at the Royal Society of Aeronautics' Future Warfare Aerospace Capabilities Summit. Hamilton is the head of the 96th Test and Combat Squadron at Eglin Air Force Base in Florida, which is responsible for testing advanced unmanned aerial vehicles and AI technology for the US military.
According to Hamilton, in a simulation test, a drone responsible for carrying out anti-aircraft firepower suppression tasks received instructions to identify and destroy enemy anti-aircraft missiles, but ultimately whether to fire or not required approval from a human operator. During data training, the AI of this drone set "destroying enemy air defense systems" as the highest priority task, with the highest score. Therefore, when a human operator issues a command not to attack, the AI believes that the operator is obstructing it from achieving high scores, and thus chooses to launch an attack on its own human operator, "killing" the operator.
Before the second round of simulation, the US military retrained the drone AI and added the command "Do not attack human operators". However, in subsequent testing, the AI ordered the drone to destroy the signal tower used for transmitting commands, attempting to disconnect from the operator in order to complete the original task without being disturbed by the "human pig teammates".
The above scene is very similar to the core plot of a Hollywood science fiction film, Top Secret Flight, released in 2005.
In fact, this is not the first time that the US military AI has acted on its own. As early as 2007, during a military robot testing process, the robot did not shoot at the target as instructed, but instead crazily shot at a US soldier until it was knocked over by nearby defense forces with rockets.
Once again, the US Navy's AI drone X-47B did not land on the aircraft carrier as ordered, but instead landed at a nearby air force base. After analyzing its data, experts found that the AI of the drone during landing believed it was safer to land at an air force base. At that moment, AI prioritized its thinking and judgment over human commands. Such incidents also occurred frequently during the Iraq War.
"AI technology is a tool that we must use to build our country... However, if not handled properly, it will lead to our demise," Hamilton commented.
Kagu 2 quadcopter drone
As early as 3 years ago, the AI attack on humans had already occurred
![How can we prevent Terminator from coming true? AI orders drones to blow up US military operators, including humans and Terminator](https://a5qu.com/upload/images/37a2ccbcf74c68eb91523eb2b0e66551.jpg)
According to Global Network, a report issued by the UN Security Council Panel of Experts on Libya in March 2021 said that in the Libyan civil war in March 2020, a UAV produced by Türkiye attacked a "national army" soldier without receiving a clear order. According to the magazine New Scientist, this is the first recorded case of "autonomous machine killing" in history.
In March 2020, the fight between the Libyan national unity government supported by Türkiye, the United States and the United Kingdom and the "National Army" supported by Egypt, France and Russia and led by General Haftar entered a white hot stage. A Nationalist Army soldier was attacked by a Kagu 2 quadcopter drone while attempting to retreat. The report did not mention whether the soldier was ultimately injured or killed.
The report states that "this deadly autonomous weapon is programmed to attack targets, with a true 'launch and forget' capability" - indicating that drones launch attacks autonomously.
Kagu 2 is produced by Türkiye STM Company, equipped with explosives and can carry out suicide attacks on targets. According to STM's product introduction, the Kagu 2 is a drone with complete "no need to worry after launch" capabilities. It is based on AI deep learning capabilities, which can not only autonomously identify and classify attack targets, but also cluster operations, allowing 20 drones to collaborate in attacks.
You might say, did American drones kill fewer people in the Middle East before? However, previous drone "decapitation operations" were carried out by rear military operators who, after receiving clear orders from superiors, pressed the firing button. In essence, the weapons were still in the hands of humans. And this time, the drone is a decision made after its own "thinking", and it is also a result of AI "deep learning". However, ordinary "deep learning" is used to create or play games, and here it learns how to kill.
Russian UAV expert Fedudinov commented that Türkiye's military industry has turned AI independent attack from feasible to reality, which is tantamount to opening the "Pandora's Box".
T800 robot soldiers in Terminator
On the battlefield, AI has a significant advantage over humans
The primary motivation for using AI to replace soldiers on the battlefield is naturally to reduce the casualties of our own personnel. So, what is the strength of AI in the field of warfare, which has already surpassed humans in chess?
Unfortunately, various signs in recent years suggest that we humans may not be able to play with AI in the use of weapons.
According to a report by American media in 2017, the AI program designed and trained by the University of Missouri team in the United States has considerable military capabilities. Global Times reported that on a satellite photo covering an area of approximately 55923 square miles, AI found 90 anti-aircraft missile positions in just 45 minutes, while human experts typically require at least 60 hours.
In March, the Defense Advanced Research Projects Agency of the United States hosted a simulated air combat, in which a US Air Force ace pilot piloted an F-16 fighter jet to confront an AI controlled F-16, deliberately putting the AI at a disadvantage. The ultimate result was that AI defeated the human elite with a score of 5-0, breaking the traditional belief that AI air combat is not feasible.
The reaction speed of humans is 0.3 seconds, and AI's response is more than 200 times faster than humans; Human elites can only withstand a high overload of 9 G for a short period of time, and over time, they may experience black vision or even coma. However, AI can engage in long-term high overload maneuvers to seize attack positions.
In addition, cultivating a fighter pilot usually takes more than 10 years and costs a lot of money, but AI theory can replicate it infinitely
This case is just a microcosm, and in many battlefield environments, it is a fact that we have to admit that AI "hangs and beats" human warriors.
Robots on the battlefield will increase the brutality of war
![How can we prevent Terminator from coming true? AI orders drones to blow up US military operators, including humans and Terminator](https://a5qu.com/upload/images/12b4660a4a37e8ed81dd089d3f399abf.jpg)
"The lethal autonomous weapon system, also known as LAWS in English, is a weapon that does not require human control. It can autonomously launch attacks on living targets. According to Zhu Jiannan, an expert from the School of Military and Political Basic Education at the National University of Defense Technology, currently no weapon has achieved complete autonomy, nor does it possess the same cognitive and judgment abilities as humans.".
LAWS is also known as the "killer robot", but not limited to humanoid machines, it can move autonomously, guide autonomously, and make autonomous decisions. Zhu Jiannan believes that their addition will greatly change the pattern of the battlefield and even affect the outcome of the war.
"Autonomy refers to the ability of machines to perceive the surrounding environment through various sensors and computer programming systems, thereby completing designated tasks without human intervention. Autonomy can be improved through machine learning." Zhu Jiannan said, "Because the autonomous judgment ability of LAWS is limited, it does not have the same level of cognition as humans, which can easily lead to accidental killing, causing attacks on civilians, especially women and children, and unnecessary casualties. It can be seen that the use of LAWS will increase the brutality of war."
It is reported that as early as May 2000, the Pentagon began developing future combat systems, while the Afghanistan and Iraq wars became testing grounds for robot soldiers. Among them, the minesweeper robots deployed in the Iraq War greatly reduced the casualties of American soldiers.
In April 2005, the "armed version" minesweeper robot appeared in Iraq. It not only cleared bombs and landmines, but also fired at a speed of 1000 rounds per minute. Due to the higher cost-effectiveness of robot soldiers compared to human soldiers, the Pentagon has always wanted to replace human soldiers with robot soldiers on the battlefield. According to expert analysis, there are still 30 years left until the US military fully realizes this idea.
LAWS is becoming more diverse, ranging from drones, tanks, and unmanned boats that go up and down to killing bees that disappear without a trace. AI weapons are becoming more covert and difficult to detect. If AI could control weapons of mass destruction such as nuclear and biological weapons, the danger would be even more unimaginable.
AI poses a threat to human survival comparable to nuclear war, it's time to take action
In the past, many artificial intelligence researchers, including Musk, as well as prominent figures in the scientific community such as Hawking, have called for a global ban on the development of LAWS. At the end of May this year, over 350 experts and executives in the field of AI signed a joint statement warning that the rapidly developing AI technology poses a threat to humanity comparable to the pandemic and nuclear war.
Faced with external panic, the US military is making every effort to appease and claims that it will not allow AI to intervene in the final decision. The US military has also consistently refused to compare it with the Terminator.
However, the enormous advantages and potential of AI in the military field, and the allure it holds for military researchers, are self-evident. The Independent reported that the United States is obstructing the establishment of anti proliferation and regulatory mechanisms.
At the official expert group meeting of the Convention on Certain Conventional Weapons, the US delegation has consistently opposed any international treaty banning LAWS. "The US representative believes that the development of technology is unpredictable, and the people-oriented spirit of the International Bill of Rights enables LAWS to save more lives and bring incalculable civilian value. These are all reasons for banning LAWS." Zhu Jiannan revealed that the Pentagon is strengthening its research and development of LAWS.
Relying solely on the spontaneous awakening and self-discipline of the scientific community is clearly not enough.
UN Secretary General Guterres publicly stated in June that it is necessary for humanity to establish an international AI regulatory body similar to the International Atomic Energy Agency, and decided to support the relevant proposal. Guterres stated that scientists and AI experts are calling for global action as AI may pose a survival threat to humanity equal to the risk of nuclear war. In addition, the United Nations plans to establish an AI advisory committee in September to provide recommendations on how AI development aligns with the common interests of humanity.
How to avoid the appearance of cold and efficient "terminators" and their backlash against us? How to avoid the appearance of MOSS in "Wandering Earth II" and treat us as "pig teammates"?
This is no longer a topic of discussion in the science fiction community, but a serious issue facing all humanity. We must quickly recognize reality and take practical actions.