
CU engineering faculty respond to lethal autonomous weapons pledge

Over two thousand researchers in artificial intelligence and robotics, from both academia and industry, recently signed a pledge to “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.” The pledge defines these weapons as those “selecting and engaging targets without human intervention,” and one of its key tenets is that “the decision to take a human life should never be delegated to a machine.” We are glad to see this pledge eliciting a broad conversation about the future of AI and its potential to transform life as we know it with powerful tools, both violent and nonviolent. Yet we would like to share two observations that illustrate the nuances of this issue.

First, even if researchers agree not to develop AI for autonomous weaponry directly, their efforts may still advance the capability, and the danger, of such weapons. Second, how to regulate AI technology is far from clear, and, as a community, we must augment the pledge’s call for regulation with a concrete plan for doing so sooner rather than later. To illustrate both observations with a single example, consider a company that releases an autonomous photography suite: an off-the-shelf quadcopter drone equipped with a camera and advanced software to take photos at the desired distances and angles. You simply open an app on your phone, snap a quick photo of the subject, and launch the drone, which uses facial or object recognition to identify the subject and state-of-the-art stabilization to take magazine-quality photos. This may seem like a benign application of modern AI, and yet it walks a fine line.

Modifying the drone to trigger other hardware in place of the camera, which is not trivial but could be done without altering the software, could turn this exciting technology into a dangerous weapon. So, was the autonomous photography company unwittingly developing a lethal autonomous weapon? Did they delegate to an AI the decision to take a human life, or only the decision to take a photograph? Should regulation of AI software prohibit this type of application altogether? Can the researchers or developers of such tools, or even the regulators, anticipate the possible downstream uses of this technology? These are questions we must address even as the pledge gains an impressive following. Let us dive a bit deeper into each of our points.

On the first point, while targeting capability is a defining property of lethal autonomous weapons, it is only a single component, and one that has likely already been developed to some degree by many parties. Creating an automatic weapon targeting system using publicly available examples and hardware is possible with minimal effort or training, even if it doesn’t work perfectly. Meanwhile, researchers in industry, government, and academia are rapidly developing technology to grant full autonomy to platforms like cars, planes, or human-like robots. The ingredients needed for this autonomy range from core theory in machine learning (a subset of AI) to accurate decision-making algorithms. Hundreds of thousands of researchers and developers are perfecting these techniques in order to achieve autonomy on these various platforms, likely with no desire to develop weapons of any kind. Yet many of these techniques could become crucial components of a lethal autonomous weapon. So, if proliferation of these weapons is a chief concern, ceasing work on a narrow slice of such weapons, like targeting, may not have much impact.

As for the second point, the regulation of AI is a technically and socially challenging proposition for which, to our knowledge, a reasonable and enforceable procedure has not yet been proposed. The weaponization of AI is fundamentally different from, for example, the weaponization of nuclear physics. The raw material is code, the refinement process is a computer program, and weaponization could be as simple as buying some parts online and working at a home tool bench. Many recognize that a checkpoint-based approach to regulating AI is simply not feasible, because there is no obvious bottleneck in this process where inspections or auditing could reasonably be implemented. As a result, suggested solutions often fondly invoke Isaac Asimov’s Three Laws of Robotics, with many asking, “Why can’t we enforce behaviors in AI?” The reality of how easily source code could be modified to remove such safeguards, and how prevalent AI already is in everyday life, calls the effectiveness of this approach into question. If AI is to be regulated, we need a reasonable and enforceable framework for doing so.

The AI community is coming to terms with the reality that our research could be used by those with malicious intent. One natural reaction to this reality is a statement of intention and a call for regulation, as in the lethal autonomous weapons pledge. To be truly effective, however, the pledge must be met with significant effort toward developing a regulatory framework. Such a framework will likely require input from experts across AI, computer security, policy, and more. And the clock is ticking: as we are currently seeing with the debate over 3D-printed guns, this framework should be in place before, not after, these threats become a reality.

Christoffer Heckman is an assistant professor and the Jacque Pankove Faculty Fellow in the Department of Computer Science at the University of Colorado College of Engineering and Applied Science in Boulder. He is also a member of the Autonomous Systems Interdisciplinary Research Theme at CU. His research focuses on autonomous perception and control for experimental robotics.

Rafael Frongillo is an assistant professor in the Department of Computer Science at the University of Colorado College of Engineering and Applied Science in Boulder. His research focuses on theoretical machine learning and economics.

[Image: A drone in the woods]