Report Unpacks Dangers of Emerging Military Tech from AI Nukes to Killer Robots
ARTIFICIAL INTELLIGENCE-AI, 15 May 2023
Brett Wilkins | Common Dreams - TRANSCEND Media Service
“While the media and the U.S. Congress have devoted much attention to the purported benefits of exploiting cutting-edge technologies for military use, far less has been said about the risks involved.”
Emerging technologies including artificial intelligence, lethal autonomous weapons systems, and hypersonic missiles pose a potentially existential threat that underscores the imperative of arms control measures to slow the pace of weaponization, according to a new report published Tuesday.
The Arms Control Association report—entitled Assessing the Dangers: Emerging Military Technologies and Nuclear (In)Stability—“unpacks the concept of ‘emerging technologies’ and summarizes the debate over their utilization for military purposes and their impact on strategic stability.”
The publication notes that the world’s military powers “have sought to exploit advanced technologies—artificial intelligence, autonomy, cyber, and hypersonics, among others—to gain battlefield advantages” but warns too little has been said about the dangers these weapons represent.
“Some officials and analysts posit that such emerging technologies will revolutionize warfare, making obsolete the weapons and strategies of the past,” the report states. “Yet, before the major powers move quickly ahead with the weaponization of these technologies, there is a great need for policymakers, defense officials, diplomats, journalists, educators, and members of the public to better understand the unintended and hazardous outcomes of these technologies.”
A new @ArmsControlNow report assesses the extent to which military use of emerging tech could result in an accidental use of nuclear weapons in a crisis, and provides a framework for curtailing the indiscriminate weaponization of such tech.
Available at https://t.co/gPyDbcaOcd
— Arms Control Assoc (@ArmsControlNow) February 7, 2023
Lethal autonomous weapons systems—defined by the Campaign to Stop Killer Robots as armaments that operate independently of “meaningful human control”—are being developed by nations including China, Israel, Russia, South Korea, the United Kingdom, and the United States. The U.S. Air Force’s sci-fi-sounding Skyborg Autonomous Control System, currently under development, is, according to the report, “intended to control multiple drone aircraft simultaneously and allow them to operate in ‘swarms,’ coordinating their actions with one another with minimum oversight by human pilots.”
“Although the rapid deployment of such systems appears highly desirable to many military officials, their development has generated considerable alarm among diplomats, human rights campaigners, arms control advocates, and others who fear that deploying fully autonomous weapons in battle would severely reduce human oversight of combat operations, possibly resulting in violations of international law, and could weaken barriers that restrain escalation from conventional to nuclear war,” the report notes.
The latter half of the 20th century witnessed numerous nuclear close calls, many stemming from misinterpretations, limitations, or outright failures of technology. While technologies such as artificial intelligence (AI) are often touted as immune to human fallibility, the report suggests that such claims reflect a hubris that could have deadly and unforeseen consequences.
“The major powers are rushing ahead with the weaponization of advanced technologies before they have fully considered—let alone attempted to mitigate—the consequences of doing so.”
“An increased reliance on AI could lead to new types of catastrophic mistakes,” a 2018 report by the RAND Corporation warned. “There may be pressure to use it before it is technologically mature; it may be susceptible to adversarial subversion; or adversaries may believe that the AI is more capable than it is, leading them to make catastrophic mistakes.”
While the Pentagon in 2020 adopted five principles for what it calls the “ethical” use of AI, many ethicists argue the only safe course of action is a total ban on lethal autonomous weapons systems.
Hypersonic missiles, which can travel at speeds of Mach 5—five times the speed of sound—or faster, are now part of the arsenals of at least the United States, China, and Russia. Last year, Russian officials acknowledged deploying Kinzhal hypersonic missiles three times during the country’s invasion of Ukraine, in what is believed to be the first-ever use of such weapons in combat. In recent years, China has tested multiple hypersonic missile variants using specially designed high-altitude balloons. Countries including Australia, France, India, Japan, Germany, Iran, and North Korea are also developing hypersonic weapons.
DARPA’s HAWC program is a wrap…concluding with a successful @LockheedMartin #hypersonic missile flying more than 300 nautical miles and lots of data for the @usairforce. More: https://t.co/Yqq2Xl50jn
— DARPA (@DARPA) January 30, 2023
The report also warns of the escalatory potential of cyberwarfare and automated battlefield decision-making.
“As was the case during World Wars I and II, the major powers are rushing ahead with the weaponization of advanced technologies before they have fully considered—let alone attempted to mitigate—the consequences of doing so, including the risk of significant civilian casualties and the accidental or inadvertent escalation of conflict,” Michael Klare, a board member at the Arms Control Association and the report’s lead author, said in a statement.
“While the media and the U.S. Congress have devoted much attention to the purported benefits of exploiting cutting-edge technologies for military use, far less has been said about the risks involved,” he added.
The report asserts that bilateral and multilateral agreements between countries that “appreciate the escalatory risks posed by the weaponization of emerging technologies” are critical to minimizing those dangers.
“As an example of a useful first step, the leaders of the major nuclear powers could jointly pledge to eschew cyberattacks” against each other’s command, control, communications, and information (C3I) systems, the report states. A code of conduct governing the military use of artificial intelligence based on the Pentagon’s AI ethics principles is also recommended.
“If the major powers are prepared to discuss binding restrictions on the military use of destabilizing technologies, certain priorities take precedence,” the paper argues. “The first would be an agreement or agreements prohibiting attacks on the nuclear C3I systems of another state by cyberspace means or via missile strikes, especially hypersonic strikes.”
“Another top priority would be measures aimed at preventing swarm attacks by autonomous weapons on another state’s missile submarines, mobile ICBMs, and other second-strike retaliatory systems,” the report continues, referring to intercontinental ballistic missiles. “Strict limitations should be imposed on the use of automated decision-support systems with the capacity to inform or initiate major battlefield decisions, including a requirement that humans exercise ultimate control over such devices.”
“Without the adoption of measures such as these, cutting-edge technologies will be converted into military systems at an ever-increasing tempo, and the dangers to world security will grow apace,” the publication concluded. “A more thorough understanding of the distinctive threats to strategic stability posed by these technologies and the imposition of restraints on their military use would go a long way toward reducing the risks of Armageddon.”
__________________________________________
Brett Wilkins is a staff writer for Common Dreams.
Go to Original – commondreams.org