
Three urgent AI red flags for Congress to address in 2024

At a moment when both the promise and peril of artificial intelligence are greater than ever, elected federal leaders are exploring much-needed steps to regulate the industry. 

Senate Majority Leader Charles Schumer’s (D-N.Y.) ongoing series of AI forums is providing an important public arena for open discussion. President Biden’s recent executive order on AI offers valuable guidance on transparency for private industry to enhance safety and security. 

These actions are positive steps. But they don’t go far enough.

AI systems are high-impact. To be sure, those impacts can be wonderful: AI can save lives by improving diagnostic accuracy in radiology, for example. But unaccountable use of AI can also lead to insidious harms. 

When used in hiring, AI algorithms can systematically deny jobs to older people or those with disabilities, virtually without detection. AI-generated misinformation can destabilize markets or elections. And AI’s carbon footprint from manufacturing, data storage and model training remains largely untracked, obscuring its potentially substantial negative environmental effects.

These and other risks make the need for the United States to enact comprehensive regulation and oversight more urgent than ever. In the coming year, Congress can and should play a central role in ensuring that we realize the benefits of AI, while controlling the risks — to individuals, organizations, society and the environment. 

In the short term, however, three areas are worth prioritizing, even if comprehensive legislation requires more time:

  1. Congressional legislation should mandate disclosure about the use of AI, starting with federal government entities. People have a right to know when an algorithm is influencing opportunities related to hiring and employment, education, housing, credit, lending and healthcare. Notification builds public trust and allows for collective risk monitoring. New York City’s recently enacted law requiring employers to disclose their use of AI to screen job candidates is a laudable example, but an incomplete one. Beyond notifying users, AI vendors should explain a system’s goals and document its efficacy. Rigorous validation protects individuals from biased and incorrect assessments and prevents organizations from purchasing ineffective tools.

     Transparency standards must also be meaningful, not obfuscating. When New York City moved the matching system it uses to determine middle school placements online, it disclosed students’ lottery numbers as lengthy hexadecimal strings of digits and letters rather than as comprehensible percentiles (the first sketch after this list shows how simple that translation is). Disclosures like these do little to inform the public. Instead, Congress should set understandable disclosure requirements for state and local governments receiving federal funds.
  2. Congress should mandate a comprehensive inquiry into the environmental impact of AI. By some estimates, the computing power behind AI already produces carbon emissions rivaling those of the airline industry, yet we lack sufficient information, and even reliable ways to gather it, to measure that footprint. With the right data, Congress could quantify AI’s end-to-end environmental footprint, factor it into the taxation system and thereby financially disincentivize unnecessary or unproductive uses of AI (the second sketch after this list shows the basic arithmetic).
  3. Congress should substantially increase federal investment in responsible AI research, education and training. Such funding would allow researchers across disciplines (computer science, law, social science and ethics) to evaluate bias in AI systems, develop techniques to enhance fairness and establish best practices that withstand the rigors of objective scientific review. Relying solely on self-regulation by tech giants with trillion-dollar valuations and considerable self-interest has proven dangerously naïve. Instead, federal grants through agencies like the National Science Foundation should fund independent academic work. Supporting AI literacy programs would complement such research, helping build an informed citizenry that can knowledgeably engage in civic discourse and make choices at the ballot box that shape how AI is deployed in their communities.

     Where would the money come from? Congress should mandate that commercial entities that capitalize on AI allocate a substantial portion of their profits to responsible AI research and education. The imbalance between the financial benefits reaped by commercial entities and the societal burden of controlling AI tools is glaring. And while some companies make voluntary commitments to responsible AI and AI safety, the scale of their contributions pales in comparison to their valuations. One promising option is to impose a high-impact AI tax, similar to a carbon tax, and to use the collected funds to support responsible AI research, education and regulation efforts.
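To make item 1’s point about comprehensible disclosure concrete, here is a minimal sketch, in Python, of the translation step New York City skipped: treating a lottery number that is really a long hexadecimal string as a draw from a known range and restating it as a percentile. The function, the 32-character format and the sample value are illustrative assumptions, not the city’s actual system.

```python
# Minimal sketch: restate an opaque hexadecimal lottery number as a
# percentile of its possible range. The 32-character format and the
# sample value below are assumptions for illustration.

def lottery_percentile(hex_lottery_number: str) -> float:
    """Map a hex lottery string onto a 0-100 scale.

    Treats the string as an integer drawn uniformly from
    [0, 16**n - 1], where n is the number of hex digits.
    """
    n_digits = len(hex_lottery_number)
    value = int(hex_lottery_number, 16)
    max_value = 16 ** n_digits - 1
    return 100 * value / max_value

# The same number, disclosed two ways: one opaque, one readable.
raw = "0c8e2f3a91b44de7a6f013c55e9d7b28"  # hypothetical lottery number
print(f"Raw disclosure:      {raw}")
print(f"Readable disclosure: {lottery_percentile(raw):.0f}th percentile of the possible range")
```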
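The quantification item 2 calls for begins with simple, auditable arithmetic. The sketch below estimates the operational emissions of a single model-training run from energy use and grid carbon intensity; every parameter is an illustrative assumption, and a real mandate would also demand measured data on manufacturing and data storage, which this back-of-envelope calculation omits.

```python
# Back-of-envelope sketch: operational CO2 for one training run.
# All inputs are illustrative assumptions, not measured values.

def training_emissions_kg(
    num_gpus: int,
    avg_power_watts: float,        # average draw per accelerator
    hours: float,                  # wall-clock training time
    pue: float = 1.2,              # datacenter power usage effectiveness
    grid_kg_per_kwh: float = 0.4,  # carbon intensity of the local grid
) -> float:
    """Estimate operational CO2 in kilograms: energy x PUE x grid intensity."""
    energy_kwh = num_gpus * avg_power_watts * hours / 1000.0
    return energy_kwh * pue * grid_kg_per_kwh

# Hypothetical run: 1,000 GPUs drawing 300 W each for 30 days.
kg = training_emissions_kg(num_gpus=1000, avg_power_watts=300, hours=24 * 30)
print(f"~{kg / 1000:.0f} metric tons of CO2")  # ~104 t under these assumptions
```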

In summary, the AI era requires visionary leadership to harness the technology’s benefits while mitigating its potential harms. Congress should play a pivotal role in shaping AI’s impact on future generations. If responsible legislation is enacted, we will be able to use AI to make better-informed decisions across critical areas of our daily lives without compromising our ethical foundations.

Until then, these systems could cause catastrophic or irreversible harm to many people.

Julia Stoyanovich is an associate professor of Computer Science & Engineering and of Data Science, and the director of the Center for Responsible AI at the New York University Tandon School of Engineering. 
