How red-light camera laws could help drive federal AI regulation
Law enforcement agencies from the CIA to the IRS to local police departments are beginning to use AI technology in different forms, and it’s imperative that Congress develop a framework that protects the privacy and civil rights of Americans everywhere.
Some forward-thinking legislators have already begun this positive process by introducing legislation like the ASSESS AI Act, by Sen. Michael Bennet (D-Colo.), and the TAG Act, by Sens. Gary Peters (D-Mich.), James Lankford (R-Okla.) and Mike Braun (R-Ind.). Other lawmakers have unfortunately introduced legislation that would jump the gun by overregulating private-sector AI. Congress needs to focus on limiting the government’s ability to use these tools to infringe on constitutional rights.
Red-light camera case law provides valuable insights into how courts have handled due process concerns in the context of automated enforcement systems. The use of red-light cameras has been, and remains, controversial. They are particularly prevalent inside the D.C. Beltway, where one wrong turn can result in a costly ding on a driving record and wallet. Proponents argue that the cameras improve road safety through deterrence and that enforcement is civil, not criminal, while opponents claim that they infringe upon civil liberties and violate due process rights.
It is a cornerstone of U.S. constitutional rights that individuals be given notice and an opportunity to be heard before being deprived of life, liberty or property. Red-light camera case law has grappled with this issue, with courts considering whether the use of camera photos as evidence violates an individual’s Sixth Amendment right to confront their accusers.
Confrontation traditionally means cross-examining witnesses in court. Critics argue that using automated camera photos as evidence, without allowing individuals to cross-examine those who maintain the records and systems behind the cameras, violates this constitutional right.
Similarly, the federal government must carefully consider how AI-generated evidence is collected, stored and presented, to ensure that individuals have a meaningful opportunity to challenge it.
Another fundamental principle of criminal law is the presumption of innocence, which the Supreme Court has described as “the undoubted law, axiomatic and elementary.” It requires that individuals be presumed innocent until proven guilty beyond a reasonable doubt. Red-light camera case law intersects with this principle by raising questions about who bears the burden of proof when automatically generated complaints are used in enforcement proceedings.
Is the existence of the photo itself the accusation the defendant must rebut, or is innocence presumed unless a person can testify to something they observed? This parallels questions about how complaints generated entirely by AI, or prompted by initial AI analysis, should be treated within the framework of the presumption of innocence.
If camera photos are used as evidence, individuals should have an opportunity to challenge the accuracy and reliability of the automated systems behind them by cross-examining those responsible for maintaining the records and equipment. This becomes even more important given the likelihood that machine learning algorithms will be developed under a “black box” methodology that prevents meaningful examination of the underlying processes that produce their conclusions.
Take, for instance, a hypothetical: the Federal Bureau of Investigation builds a black-box AI model that examines the cellphone metadata, online browsing history and other public information of a group of protesters. It then uses that model to identify similar individuals, applying it to a system that trawls internet traffic to flag other potential domestic risks. Without the ability to examine how the model flags individuals for increased surveillance, how can the accused mount an effective defense against a law enforcement complaint?
Absent congressional intervention, this could easily become reality for Americans. As AI plays an increasingly prominent role in generating law enforcement complaints, it is crucial to address how this intersects with an individual’s right to confront their accuser. Balancing technological advancements with fundamental constitutional rights requires careful oversight and thoughtful legislation.
These principles, derived from red-light camera case law, should inform analysis of the several federal agencies that already employ AI technologies in various capacities.
The Internal Revenue Service uses AI algorithms to detect tax fraud and streamline tax return processing. The Department of Labor employs AI-powered tools for 18 different use cases, including claims analysis and document validation; this could eventually expand to flagging potential risks for OSHA or Wage and Hour Division complaints. The CIA leverages AI for data analysis and intelligence gathering, and it’s easy to imagine how this could be integrated with PRISM-style programs for unprecedented levels of surveillance.
It’s clear that the federal government’s use of AI is only accelerating and expanding into a wide variety of use cases, without guardrails in place.
Instead of introducing bills that would curb AI innovation, lawmakers should exercise oversight and press for more transparency regarding current and future uses of AI by federal agencies. With that transparency, policymakers can identify best practices and potential pitfalls when implementing restrictions on federal AI use or developing other relevant legislation.
Congress and the White House should work together to develop a clear framework and guardrails for the development, deployment and use of AI to inform enforcement activities throughout the federal government. This problem will only grow if left unchecked, given the incentive to increase enforcement levels without increasing staffing to match.
Nick Johns is a senior policy and government affairs manager with the National Taxpayers Union, a nonprofit dedicated to advocating for taxpayer interests at all levels of government.