The views expressed by contributors are their own and not the view of The Hill

Sam Altman’s saga could serve as a catalyst for global AI regulation

Much remains unknown about the whiplash dismissal and swift reinstatement of Sam Altman as CEO of OpenAI. However, one aspect of the high-stakes drama is clear: artificial general intelligence (AGI) is too powerful and influential a tool, and its associated risks too significant, to be left to self-regulation.

OpenAI is arguably the world's most powerful and consequential company working on AGI. The internal conflicts and whims of a small group of engineers and executives introduce too much risk for a technology capable of harming humanity. Traditional government regulation is necessary to moderate this groundbreaking technology.

OpenAI’s unique structure, a hybrid of nonprofit and for-profit entities, seems to have contributed to the firing-and-rehiring saga. In 2015, Altman, along with Elon Musk and Peter Thiel, launched OpenAI as a nonprofit dedicated to researching safe, socially beneficial uses of the technology. However, the computing hardware required to train and evaluate AGI is incredibly costly, beyond the financial reach of a solely nonprofit organization. So OpenAI reorganized in 2019, creating a profit-generating subsidiary tethered to OpenAI’s foundational mission of safe and socially beneficial AGI development. A nonprofit board, a structure that limits external oversight, presided over both the nonprofit and its for-profit counterpart.

Following this reorganization, OpenAI’s focus shifted away from its original altruistic aim and toward a more corporate profit motive, particularly after Microsoft began investing in the company in 2019, a stake that eventually totaled $13 billion, and after the introduction of ChatGPT three years later.

By 2023, the company embodied the two primary impulses of AGI research: to develop the technology for the good of humanity and to make money. Last month’s crisis at the top of the company shows that those two objectives require government guardrails, particularly when they collide within the same company.

AGI’s ethical, safety and societal implications are too complex for internal governance. Private companies cannot be trusted to make decisions in the public interest about technology that can inflict significant harm on large populations. AGI demands an external body to enforce accountability and ensure AI development aligns with public safety and ethical standards. ChatGPT in particular has grown so powerful and so ubiquitous across so many facets of life that it requires external oversight.

When combined with a network of chatbots, OpenAI’s technology has the potential to generate widespread misinformation, sow mass confusion or cause harm to targeted groups. Sam Altman himself testified before Congress that AGI could inflict significant harm on the world if left in the hands of nefarious actors.

Large language models also represent a threat beyond their human programmers. Within the next five years, AGI may be able to enhance its own intelligence, taking cues from human behavior. Such a system might form its own objectives, different from what its human creators intended, leading to a scenario in which large language models train other large language models to deceive humans. Such AI might eventually dominate vital resources, wield power and exert influence.

Companies developing this technology, like OpenAI, have their own motives, objectives and internal politics. Corporate shake-ups could leave AGI research exposed to malevolent actors or rushed to market before safety measures are in place.

Earlier this year, President Biden issued an executive order focused on AI safety. While the order signals a recognition of the need for AI regulation, it offers little substantive policy. Most of the document consists of directives to compile reports, convene meetings and consider new regulations. AI safety has been a concern for the past seven years; by now, the U.S. government should be further along than studying reports and holding meetings.

The dangers posed by AGI require stringent regulation of AI firms and technologies. The recent events at OpenAI expose the fragility of AI governance and demonstrate the urgency of legislative action. Congress must now translate the executive order into concrete laws that provide a robust framework for AI development, balancing innovation with public interest, safety and ethical considerations.

Federal AGI legislation has the potential to guide international safety standards. As the United States and China compete for dominance in AI technology, it is crucial for the two countries to establish a safety framework. Comprehensive legislation in the U.S. could mark a significant step toward that goal. Just as we have agreements with China governing conduct and safety at sea and in the air, American legislation could lead to a multinational agreement on the research and deployment of AI.

The chaos at OpenAI reveals a need for government regulation of AGI. The technology cannot be left to the internal dynamics and profit motives of private companies. AGI requires robust legislative measures that balance innovation and progress with public safety, ethical considerations, and the broader interests of humanity. Such legislation could not only safeguard against the potential harms of AGI but also set a precedent for global AI safety standards.

U.S. Army Colonel (retired) Joe Buccino serves as an AI research analyst with the U.S. Department of Defense’s Defense Innovation Board and an advisor to the Center for AI Policy. His views do not necessarily reflect those of the U.S. Department of Defense or any other organization.

Copyright 2024 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
