Human-Machine Teaming: SPL Research Asks How Law and Ethics Can Best Regulate AI 

By Matthew Mittelsteadt G’20, AI Research Fellow, SPL

INSTITUTE FOR SECURITY POLICY AND LAW 

We are in the midst of an artificial intelligence (AI) revolution. If the last decade was the dawn of the “Age of AI,” this decade has seen the technology mature and enter wide deployment. Its growth and use in the coming years will be exponential. The use of AI, however, opens a Pandora’s box of legal and security challenges, and the law has yet to catch up. 

Led by the Hon. James E. Baker and Professor Laurie Hobart, Institute for Security Policy and Law (SPL) researchers are currently exploring these challenges—and trying to bridge the gap between AI reality and AI regulation—funded by a research grant from the Center for Security and Emerging Technology (CSET). 

Our focus: Ethical decision-making, bias, and data regulation so that the national security community can maximize the benefits of AI and minimize and mitigate the risks.

The central question of our research is posed in Baker’s landmark book, The Centaur’s Dilemma: National Security Law for the Coming AI Revolution: What is the appropriate mix of human and AI decision-making?

This is the puzzle known as the “Centaur’s Dilemma.” Just as a centaur is part man and part horse, with each AI application we must ask what part should be machine-driven and what part reserved for human decision-making. The dilemma lies in reaping the benefits of operating at machine speed with machine capabilities while maintaining appropriate legal and ethical human control. 

SPL Publications: Breaking New Ground

As nearly every AI legal and policy question involves a variant of the Centaur’s Dilemma—and recognizing that policymakers have so far done little to address AI—SPL research sets out to determine how law and policy can make AI more accurate and effective while maintaining necessary human control.  

“Twenty-first-century lawyers will need to understand the constellation of technologies known as AI, or they will be left behind.”

We recognized that the answer must start with Socratic inquiry, asking questions such as: What is the purpose? Where is the data from? Is there bias? What laws, if any, can we use to guide AI regulation? And where do gaps exist? 

In his policy paper, “A Defense Production Act (DPA) for the 21st Century,” Baker addresses these questions by turning to the US Code, noting that there are few statutes that explicitly map federal AI authority. To fill this void, policy—and therefore law—must be flexible. The DPA, for instance, can be extended to AI to promote robust research and development and to adapt to AI’s rapid evolution.

Turning to the courtroom, in the forthcoming guide “AI for Judges,” Baker, Hobart, and I seek to give judges a legal reference, outlining appropriate processes to guide their jurisprudence and flagging the questions they will confront when AI issues arise in court. This first-of-its-kind work will offer a primer for judges as they attempt to define AI’s legal scaffolding and answer the Centaur’s Dilemma.  

My issue brief, “AI Verification: Mechanisms to Ensure AI Arms Control Compliance,” recognizes that many have called for AI controls, but few have explained how such controls could actually be achieved. How, for instance, can we verify that a state or an application is complying with the law or ethical principles? Without verification, law and ethics are hard to apply. The brief attempts to do just that, proposing first-of-their-kind technical mechanisms for inspecting AI “arms” and giving regulatory authorities and the international community confidence that AI regulations are being respected. 

A National Symposium 

In each of these publications, our guiding philosophy has been an emphasis on explaining technology in “plain language.” We believe anyone can understand AI if given the proper guidance, and we aim to make the field accessible to non-technologists, including lawyers. 

This philosophy guided an AI symposium for national security lawyers that SPL hosted in October 2020. A live AI security policy discussion, the symposium opened with a primer for the audience on how AI works. Three live panels followed: AI and the Law of Armed Conflict; AI and National Security Ethics: Bias, Data, and Principles; and AI and National Security Decision-Making. 

Top experts and policymakers fielded audience questions, debated the core policy issues, and introduced the audience to the many challenges and benefits AI will create. The Symposium concluded with a conversation between Baker and CSET Founding Director Jason Matheny (now Deputy Assistant to the President of the United States for National Security and Technology, and Deputy Director of the Office of Science and Technology Policy) about the way AI will transform—or should transform—how and where national security lawyers practice law.  

The bottom line? Twenty-first-century lawyers will need to understand the constellation of technologies known as AI, or they will be left behind. The symposium provided attendees with an overview of the emerging field and broadcasted the importance of AI policy in light of the Centaur’s Dilemma. 

Ultimately, the Centaur’s Dilemma is a “wicked problem” answerable only by a slate of ethically grey solutions. SPL’s research accordingly accepts that there is no single, definitive answer. In the past year, however, the SPL-CSET collaboration has made strides toward clarifying the legal landscape, crystallizing the process, and deepening understanding. 

AI is here to stay, and it requires serious policy and legal attention. Our hope is that our work will inspire the vigorous thought needed to maximize the benefits of human-machine teaming while mitigating the risks. Visit securitypolicylaw.syr.edu for updates and further reading on AI.


New Frontiers in AI: Policy Briefs and Reports

Read and download at: securitypolicylaw.syr.edu/AI-research.

“A DPA for the 21st Century,” by the Hon. James E. Baker

The Defense Production Act can be an effective tool to bring US industrial might to bear on national security challenges, including those in technology. If updated and used to its full effect, the DPA can encourage the development and governance of AI. 

“Ethics and Artificial Intelligence: A Policymaker’s Introduction,” by the Hon. James E. Baker 

A primer on the limits and promise of three mechanisms to help shape a regulatory regime that maximizes the benefits of AI and minimizes its potential harms.

“AI Verification: Mechanisms to Ensure AI Arms Control Compliance,” by Matthew Mittelsteadt G’20 

A starting point to explore “AI arms control,” defining the goals of “AI verification” and proposing several mechanisms to support arms inspections and continuous verification.

“National Security Law and the Coming AI Revolution,” by the Hon. James E. Baker, Laurie Hobart G’16, Matthew Mittelsteadt G’20, and John Cherry

Observations from the October 2020 AI law and policy symposium hosted by SPL and the Georgetown Center for Security and Emerging Technology.