
To control AI, its creators need to make some big changes

The Washington, D.C., charm offensive of Sam Altman, the head of artificial intelligence developer OpenAI, sure has been impressive. In a matter of months, Altman has met with over 100 members of Congress, treating them to private demos of his generative AI, calmly answering their questions in hearings and briefings, and earning the kind of warm reception the tech industry has not seen in a long time.

All this, despite the fact that Altman has led with the message that his technology will cost people their jobs, yield democracy-straining disinformation, and threaten the very future of humanity.

Regulate us, Altman pleaded. These are problems the government should solve, he testified. Then he got on a plane and returned to California, where nothing had changed in the rapid deployment of tools that even OpenAI admits are vulnerable to adversarial attack and prone to produce plausible-sounding but totally unreliable content.

Surely Altman knows that a fractious Congress is highly unlikely to regulate AI. Even less likely is the world’s governments agreeing on the international regulatory agency he has called for.

After all, Congress has failed for decades to adopt federal privacy legislation, despite industry support for regulation there too. Congress has likewise failed to adopt cybersecurity legislation, or laws regulating facial recognition technology or data brokers or disinformation or any of the other threats posed by the internet and digital technology. There’s no reason why AI should be any different.

Clearly, Altman understands that calling for regulation is an easy way to appease lawmakers and buy time while OpenAI and Microsoft continue to spread their deeply flawed technology.

Microsoft, which had already incorporated artificial intelligence developed by OpenAI into its Bing search engine, its Edge internet browser and its 365 suite of products, including Outlook and Teams, announced recently that it was incorporating the work of OpenAI into what CEO Satya Nadella called “the biggest canvas of all, Windows.” This despite the fact that OpenAI products continue to generate plausible but false results and create cybersecurity vulnerabilities in user systems.

And even if Congress could get its act together, what would meaningful AI regulation look like? No one knows. All of the proposals that have been put forth are strikingly vague — and would pose no hurdle for OpenAI, Microsoft and others rushing to deploy unreliable products.

For example, Microsoft has argued in its blueprint for AI regulation that a risk management framework recently issued by the National Institute of Standards and Technology “provides a strong foundation that companies and governments alike can immediately put into action to ensure the safer use of artificial intelligence.” Yet what does the NIST framework actually say? Things like: “Responses to the AI risks deemed high priority … are developed, planned, and documented.” Not much guidance there. Moreover, the very next sentence says that “risk response options” can include transferring the risks to others or simply accepting them. Hardly the basis for accountability.

Likewise, The Washington Post’s editorial board argued in May that the first principle of AI regulation should be that “AI systems should be safe and effective.” But OpenAI and Microsoft already claim to adhere to that principle. Writing it into law would change nothing in their practices.

The European Union, a regulatory juggernaut if there ever was one, hasn’t come up with anything more concrete. Its draft AI Act, likely to be adopted later this year, is almost as vague on security as the NIST framework. Its primary mandate is that developers of high-risk AI must implement a risk management plan. But a developer’s regulatory obligations begin with a risk assessment done by the developer itself, followed by the adoption of “suitable” risk management measures. The EU draft offers no further guidance on what those risk management measures might be. On the question of cybersecurity, the draft simply says that high-risk AI systems shall be designed and developed to achieve an “appropriate” level of cybersecurity.

Far more meaningful than calling on a gridlocked Congress would be for OpenAI and Microsoft to set standards for AI and then actually follow them — standards that move beyond high-level principles to concrete reliability and security design elements.

To begin with, AI developers need to implement security practices that have become standard in other areas of software development, including supply chain transparency (through what is known as a “software bill of materials”) and the establishment of rigorous, objective testing and validation procedures.
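
To make the first of those concrete: a software bill of materials is simply a machine-readable inventory of everything that went into a product, and for an AI system that inventory should cover training data and model weights as well as code libraries. The sketch below, written in Python and loosely modeled on the CycloneDX format, shows what such a record might contain; every component name and version is hypothetical, included only for illustration.

```python
import json

# Illustrative software bill of materials (SBOM) for a hypothetical
# AI system, loosely modeled on the CycloneDX format. All names and
# versions below are invented for illustration.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "metadata": {
        "component": {
            "type": "machine-learning-model",
            "name": "example-chat-model",
            "version": "0.1",
        }
    },
    "components": [
        # Training data and model weights are supply-chain components,
        # just like conventional library dependencies.
        {"type": "data", "name": "example-training-corpus", "version": "2023-06-snapshot"},
        {"type": "library", "name": "example-inference-runtime", "version": "2.0.1"},
        {"type": "library", "name": "example-tokenizer", "version": "0.13.3"},
    ],
}

print(json.dumps(sbom, indent=2))
```

The point is not the particular format but the discipline: a developer that cannot enumerate what went into its model cannot credibly vouch for that model’s security.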

Second, the industry needs to develop a transparent way to measure reliability and security. The threshold of acceptable risk will vary from context to context, but until some objective metric can be placed on the riskiness of AI, everything will be in the unenforceable realm of “appropriate” and “reasonable.” We have objective standards in other fields, from building codes to airplane autopilots. The AI field needs to catch up.
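
What might such a metric look like? Here is a minimal sketch, assuming a published evaluation set of prompts with known-correct answers: report the share of prompts the system answers incorrectly. The evaluation data and the model hook are placeholders, not a real benchmark or vendor API.

```python
# Minimal sketch of an objective reliability metric: the fraction of
# evaluation prompts a system under test answers incorrectly. The
# evaluation set and model_answer() are placeholders for illustration.

eval_set = [
    {"prompt": "In what year was NIST founded?", "expected": "1901"},
    {"prompt": "What does SBOM stand for?", "expected": "software bill of materials"},
]

def model_answer(prompt: str) -> str:
    """Stand-in for a call to the AI system being evaluated."""
    raise NotImplementedError("wire this to the system under test")

def error_rate(answer_fn, evals) -> float:
    """Share of prompts answered incorrectly; lower is better."""
    wrong = sum(
        1 for e in evals
        if answer_fn(e["prompt"]).strip().lower() != e["expected"].lower()
    )
    return wrong / len(evals)
```

A standards body could then attach a concrete threshold to a number like this, the way building codes attach load limits to beams, instead of leaving compliance in the realm of “appropriate” and “reasonable.”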

Third, developers need to stop publicly releasing AI models that pose serious, unmitigated risks. In this regard, the rapid roll-out of insecure AI is no different from the software industry’s long habit of shipping insecure products and patching them later. The Biden administration’s cybersecurity strategy calls out this ship-and-patch mentality as a major reason our computer systems are so insecure, and the same holds for AI. AI products are, after all, software.

Some portion of the value promised for generative AI is surely hype. But the security and reliability risks are not hypothetical. With Congress unlikely to act, industry needs to prioritize values over the bottom line.

James Dempsey is senior policy adviser in the Geopolitics, Technology and Governance Program at Stanford’s Cyber Policy Center.
