‘Robocop,’ the AI race, and ethics of creating killer robots
Any discussion about robots and Artificial Intelligence (AI) always seems to turn on the notion that one day the Earth will be overrun by a modern-day “Terminator.” That was 1984, when there was little to no serious discussion about ethics and AI. And it’s the wrong comparison. The movie that actually nailed the ethics part of AI was “Robocop” in 1987.
The film’s confrontation between the human-infused Robocop and the fully automated Enforcement Droid, Series 209 (ED-209) made the most salient point about robotics and AI: you can’t automate morality or ethics. There still needs to be a human in the loop.
But there are some countries that want to change the face of future warfare by automating it.
The Russian space agency Roscosmos has approved a preliminary plan to send two Final Experimental Demonstration Object Research (FEDOR) robots to the International Space Station by this August aboard an unmanned Soyuz spacecraft, not as cargo but as pilots. There are numerous benefits to using robots in space, chief among them reducing the risk to human life.
But then a YouTube video appeared.
In what can only be described as an extremely eerie imitation of Robocop, FEDOR wields weapons in each hand and displays lethal accuracy during target practice. It didn’t take long for the international community of suppliers to react. Critical components were cut off, and the engineering team was forced to develop its own parts. The team learned by watching, and even visiting, the United States. The Defense Advanced Research Projects Agency (DARPA) Robotics Challenge provided additional insights into solving the supplier problem.
All of this raises the question: why does a robot in space need to be armed? Because it’s not about space. It’s about Earth and the next generation of conflict. Lethal Autonomous Weapons Systems (LAWS) will form the core capability for Russia and China. That means a weapons system could identify, target and kill without human input or intervention.
China has made it abundantly clear it intends to dominate quantum computing, AI and robotics by 2030. Vladimir Putin put a finer point on it when he said “the one who becomes the leader in [artificial intelligence] will be the ruler of the world.”
Countries around the globe are struggling to address the use of LAWS. The United Nations, with its typical bureaucratic inefficiency, has convened numerous meetings under the Convention on Certain Conventional Weapons (CCW). In 2013, the CCW Meeting of High Contracting Parties passed a new mandate on LAWS. The mandate focused on “questions related to emerging technologies in the area of lethal autonomous weapons systems, in the context of the objectives and purposes of the Convention.” Six years later, the participating nation states can’t even agree on a shared definition of what a lethal autonomous weapons system is.
If you can’t define the problem, you certainly can’t solve it.
In the meantime, China and Russia continue building their systems while we watch… and help.
Google threw our Department of Defense under the bus on a critical AI effort, Project Maven, which was designed to ease the tremendous burden of having humans sit in front of computer screens and review massive amounts of drone video for actionable insights. Yet Google apparently isn’t against working with the Chinese government on its development of AI.
President Xi has left no doubt as to China’s intentions. In June of 2017, China released an aggressive plan that seeks to grow its AI industry to $59 billion by 2025. Its aim is clear, as are its targets: the United States, Google and Microsoft. While Russia and China have few moral qualms about introducing AI into weapons systems, the United States is struggling with the ethical dilemma that underpins the discussion.
The U.S. Army recently announced a Request for Proposal (RFP) under the Advanced Targeting and Lethality Automated System, or ATLAS, program. The Army Combat Capabilities Development Command (CCDC) is pursuing the ability to “develop autonomous target acquisition technology, that will be integrated with fire control technology, aimed at providing ground combat vehicles with the capability to acquire, identify, and engage targets at least 3X faster than the current manual process. The ATLAS will integrate advanced sensors, processing, and fire control capabilities into a weapon system to demonstrate these desired capabilities.”
The minute this RFP hit the street, a news article from Quartz titled “The US Army wants to turn tanks into AI-powered killing machines” caused an about-face. The RFP was updated, and new language was added to “clarify” the intent:
“All development and use of autonomous and semi-autonomous functions in weapon systems, including manned and unmanned platforms, remain subject to the guidelines in the Department of Defense (DOD) Directive 3000.09, which was updated in 2017. Nothing in this notice should be understood to represent a change in DOD policy towards autonomy in weapon systems. All uses of machine learning and artificial intelligence in this program will be evaluated to ensure that they are consistent with DOD legal and ethical standards.”
DOD Directive 3000.09 clearly states: “It is DoD policy that: a. Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”
Back to keeping a human in the loop. But the real question is whether anyone else will.
What happens when China deploys a complete army of fully autonomous killer tanks? What do we do when Russia has an army of 1,000 pistol-packing FEDOR robots? Will we default back to DOD Directive 3000.09 and claim the moral high ground?
To paraphrase General George Patton: no one ever won a war by dying for his country. Wars are won by making the other guy die for his.
In the future, if our soldiers, sailors, airmen, marines and coast guardsmen are the only ones dying, will the moral high ground still be worth it?
Morgan Wright is an expert on cybersecurity strategy, cyberterrorism, identity theft and privacy. He previously worked as a senior advisor in the U.S. State Department Antiterrorism Assistance Program and as senior law enforcement advisor for the 2012 Republican National Convention. Follow him on Twitter @morganwright_us.