The false choice holding up congressional action on AI

Earlier this year, the Future of Life Institute released an open letter calling for artificial intelligence (AI) labs to pause the development of “human-competitive” AI more powerful than GPT-4 (the system that powers AI chatbot ChatGPT). The institute claimed that such advanced systems posed “profound risks to society and humanity” and that temporarily halting development now to institute guardrails could help prevent a doomsday.

Among the letter’s 33,000 signatories were Elon Musk, Steve Wozniak, Yuval Noah Harari, Andrew Yang and pioneers of modern AI such as Stuart Russell and John Hopfield. The idea that AI could pose existential dangers rivaling those of climate change or nuclear warfare was no longer a fringe viewpoint but a movement championed by technology’s greatest Olympians.

This brand of AI safety is rooted in a philosophy called longtermism, which emphasizes ensuring the well-being of countless future generations, even at the expense of the currently living population.

Many experts disagree with the longtermist thesis, believing that our regulatory priorities should focus on AI’s current social implications. Eritrean computer scientist Timnit Gebru and others in the AI ethics camp have traced the ideology’s origins to Swedish philosopher Nick Bostrom, who was implicated in a racist email that stated “Blacks are more stupid than whites” and included the n-word slur.

Others believe longtermism to be eugenicist in that it dehumanizes underserved groups in the quest for a perceived greater good. Regardless of the intentions or history of this AI safety camp, it is no longer politically or socially voiceless. The AI apocalypse has consumed Silicon Valley culture and become a major topic of interest at elite colleges. Groups like Open Philanthropy are funneling millions of dollars into AI existential risk research, and even siphoning funding opportunities from AI ethics-focused organizations.

As AI continues to exhibit new levels of sophistication, the idea of an algorithmically facilitated catastrophe becomes more plausible. But it is also needlessly eclipsing the push, led by civil society groups like the Algorithmic Justice League, to redress the technology’s existing dangers.

AI has been shown to undermine online privacy, plagiarize human-created work, frequently malfunction, discriminate against minority groups, produce convincing misinformation and disproportionately harm youth. Furthermore, it is often fueled by exploitative labor practices, with overseas workers forced to spend hours glued to computers, manually completing many of the integral yet severely underpaid behind-the-scenes tasks that keep large-scale AI applications online.

Irresponsible AI is eroding democracy and civilization today, and Congress must act now. The European Union is in the final stages of passing comprehensive AI legislation, but the United States has hardly held a hearing on AI legislation. Bills such as the Algorithmic Accountability Act, which would direct the Federal Trade Commission to require impact assessments of automated decision systems, have yet to be considered.

Sens. Richard Blumenthal (D-Conn.) and Josh Hawley (R-Mo.) have put forward a Bipartisan Framework for a U.S. AI Act, a comprehensive strategy for AI regulation. Senate Majority Leader Chuck Schumer (D-N.Y.) has held many closed-door sessions with lawmakers, but it remains unclear whether any legislation will pass before the end of this session. The reluctance to move forward is disappointing given the remarkable number of bipartisan AI bills that have been introduced in Congress.

The ongoing power struggle between the two camps in AI has also drowned out other pressing concerns that sit in the gray area between ethics and long-term safety, such as AI’s potential weaponization. As Sen. Ed Markey (D-Mass.) pointed out in an op-ed for Scientific American, AI might democratize access to perilous technology. This could enable rogue actors to wreak unprecedented havoc, from designing chemical weapons to potentially igniting pandemics.

Furthermore, the militarization of AI looms as a global concern. A world where nuclear systems are controlled by AI, eliminating the “human in the loop,” raises the specter of unintended escalation and of conflicts that spiral out of human control. Markey has wisely proposed legislation to prevent AI from launching nuclear weapons; that bill also deserves action in the Senate.

The priorities of the AI ethics and AI risk camps, the tangible challenges of the present age of automation and the future dangers of uncontrolled AI, are far from diametrically opposed. Yet the two factions have created a rift that has made progress difficult, and the unproductive dialogue among those with the credentials to guide government leaders on AI strategy has led to inaction.

Congress is being presented with a false choice over which AI timeline to use as the basis for regulation. Leading AI experts, including Yoshua Bengio and Stuart Russell, have increasingly recommended AI strategies that address both present-day harms and long-term risks.

To truly safeguard our future, lawmakers must “future-proof” our institutions, developing governance frameworks that address both immediate and long-term challenges. Acknowledging both immediate fairness concerns and long-term existential threats is critical to a successful AI strategy.

President Biden’s recent executive order on AI is a step in the right direction. The order, combined with the Office of Management and Budget’s (OMB) guidance, makes clear the need to establish safety standards for advanced AI systems and minimum practices for “rights-impacting” and “safety-impacting” systems in the federal government. The guidance would require federal agencies to stop using these systems if those minimum practices are not in place.

We encourage interested parties to submit comments on the OMB guidance and to push for the strongest implementation possible. But neither the executive order nor the OMB guidance will be enough to rein in the growing power of the advanced AI systems now being developed by Big Tech companies. To establish safeguards for those systems, we need Congress to act.

If we create and promote standards to prevent discrimination, enable auditing, institute value systems, enforce appropriate use and ensure transparency now, AI will be poised to evolve into a transformative digital asset, not a civilization-destroying liability.

Ultimately, the challenge is clear: to navigate this intricate web of AI’s promises and perils. Only by synthesizing the voices of the AI ethics and risk communities, as well as citizen stakeholders in AI issues, can a comprehensive strategy emerge — one that ensures the responsible and humane evolution of artificial intelligence.

Okezue Bell is a Stanford student and artificial intelligence (AI) activist. He is executive director of Fidutam, one of the world’s leading civil society groups mobilizing for responsible technologies. He will be speaking at the Senate AI Insight Forum on AI risk and transparency. Marc Rotenberg is founder and executive director of the Center for AI and Digital Policy.
