The views expressed by contributors are their own and not the view of The Hill

What to do about artificially intelligent government


The White House’s recent efforts to chart a national artificial intelligence (AI) policy are welcome and, frankly, overdue. Funding for AI research and updating agency IT systems is a good start. So is guidance for agencies as they begin to regulate industry use of AI. But there’s a glaring gap: The White House has been silent about the rules that apply when agencies use AI to perform critical governance tasks.

This matters because, of all the ways AI is transforming our world, some of the most worrying come at the intersection of AI and the awesome power of the state. AI drives the facial recognition police use to surveil citizens. It enables the autonomous weapons changing warfare. And it powers the tools judges use to make life-changing bail, sentencing and parole decisions. Concerns about each have fueled debate and, as to facial recognition in particular, new laws banning use.

Sitting just beyond the headlines, however, is a little-known fact: AI use already is pervasive in government. Prohibition for most uses is not an option, or at least not a wise one. Needed instead is a frank conversation about how to give the government the resources it needs to develop high-quality and fairly deployed AI tools and build sensible accountability mechanisms around their use.

We know because we led a team of lawyers and computer scientists at Stanford and New York universities to advise federal agencies on how to develop and oversee their new algorithmic toolkit.

Our research shows that AI use spans government. By our estimates, half of major federal agencies have experimented with AI. Among the 160 AI uses we found, some — such as facial recognition — are fueling public outcries. But many others fly under the radar. The Securities and Exchange Commission (SEC) uses AI to flag insider trading; the Centers for Medicare and Medicaid Services uses it to ferret out health care fraud. The Social Security Administration is piloting AI tools to help decide who gets disability benefits, and the Patent and Trademark Office is doing the same to help decide who gets patent protection.

Still other agencies are developing AI tools to communicate with the public, by sifting millions of consumer complaints or using chatbots to field questions from welfare beneficiaries, asylum seekers and taxpayers. 

Our research also highlights AI’s potential to make government work better and at lower cost. AI tools that help administrative judges spot errors in draft decisions can shrink backlogs that leave some veterans waiting years (sometimes, close to a decade) for benefits. AI can help ensure that the decision to launch a potentially ruinous enforcement action does not reflect the mistakes, biases, or whims of human prosecutors. And AI can help make more precise judgments about which drugs threaten public health.

But the picture is not all rosy.

First, the government has a long way to go. Our team’s computer scientists found that few agency AI uses rival the sophistication found in the private sector, making it harder to realize accuracy and efficiency gains. Some may wish to keep agencies low-tech to limit surveillance or otherwise hamstring government. It’s not that simple: Government use of makeshift and insecure AI systems puts everyone at risk. Disabled persons, veterans and all of us deserve better. 

Second, AI poses deep accountability challenges. When public officials make decisions affecting rights, the law generally requires an explanation. This reason-giving requirement is deeply embedded in law — and even enshrined in the Constitution. Yet sophisticated AI tools are opaque; they do not serve up explanations with their outputs. A crucial challenge is how to subject these tools to meaningful accountability and ensure fidelity to longstanding commitments to transparency, reason-giving and non-discrimination.

To address these concerns, agencies could be required to politically ventilate AI tools the way they must with new regulations. Or they could be made to “benchmark” AI tools, reserving a pool of cases for human decision and comparing the results to AI-assisted ones. However, there are no one-size-fits-all solutions. Open-sourcing computer code might make sense when agencies distribute welfare benefits. But disclosing details when tax enforcers use AI to identify cheaters will just aid evasion.

Third, if we want agencies to make responsible use of AI, their capacity must come from within. Our research shows that many of the best-designed AI tools were created by innovative, public-spirited agency technologists — not profit-driven private contractors. The AI tools that help adjudicate disability benefits at the Social Security Administration came from agency insiders with intimate knowledge of governing law and how administrative judges work.

This makes sense. Government work is often complex. Recruiting skilled technologists and updating outmoded computing systems is crucial to building high-quality AI tools and administering them fairly. But it won’t be cheap. 

Last, AI can fuel political anxieties. Government AI use creates a risk of gaming by better-heeled groups with resources and knowhow. The SEC’s algorithmic predictions may fall more heavily on smaller companies that, unlike big Wall Street players, lack a stable of quants who can reverse-engineer the model and keep out of the agency’s cross-hairs. If citizens come to believe AI systems are rigged, political support for a more effective, tech-savvy government will evaporate.

In short, this is a pivotal moment for government. Managed well, agency AI use can make the government more efficient, accurate and fair. Managed poorly, AI can widen the public-private technology gap, make agencies more vulnerable and less transparent, and heighten concerns about government arbitrariness and biases that are coursing through American politics.

Wherever the nation lands on facial recognition, government AI use is here to stay. The question now is which of these two visions becomes reality.

David F. Engstrom and Daniel E. Ho are professors of law at Stanford University. Catherine M. Sharkey is a professor of law at New York University. Mariano-Florentino Cuéllar is a justice on the California Supreme Court and professor of law at Stanford University.


Copyright 2024 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
