The views expressed by contributors are their own and not the view of The Hill

Why it’s time to safeguard against AI liability

There’s been a lot of discussion lately about the creation of an AI bill of rights in anticipation of machines becoming sentient. The idea is well-intentioned, but it imposes no requirements on the creators of artificial intelligence (AI) products that could do serious harm, a concern voiced in the recent open letter calling for a pause in the building of systems such as GPT-4.

For policymakers to truly safeguard against the unique threats of AI, they must enact an AI “Bill of Liabilities” that puts some onus on its creators without stifling innovation.

To start, it is important to recognize the unique aspects of AI that create two new kinds of risk.

First, modern AI systems function more like agents than software tools and can interact directly with the world. Yet no one fully understands complex AI systems such as the large language model behind ChatGPT, which has been trained on vast amounts of freely available data from the internet. Systems like ChatGPT work well now because of careful reinforcement feedback from their designers that is sensitive to issues such as politics, race, gender and anything else their creators deem relevant.

This feedback suppresses the dark side of such systems that is picked up from the training data. That dark side was evident in Bing’s recent conversation with the journalist Kevin Roose, in which the chatbot tried to persuade him to leave his wife despite his telling it repeatedly that he was quite happy in his marriage. It somehow adopted a persona similar to Glenn Close’s character in “Fatal Attraction.”

Second, AI has become available to society in the form of “pre-trained” models such as ChatGPT, which can be configured for applications they were not explicitly designed for. In this process, AI has progressed from being an application to what economists would call a “general purpose technology.”

Consider an analogy to electricity or information technology (IT), both of which are general purpose technologies used for all kinds of things that were not envisioned when they came into existence. The availability of pre-trained models has similarly turned intelligence into a commodity, not unlike electricity or IT.

Together, the nature of AI as an active agent, coupled with the ease of configuring downstream applications from standard building blocks, makes AI tremendously powerful in terms of capability, but it also creates significant risks. The darker uses of the technology that cause concern include fake or deceptive pitches to vulnerable people, but it isn’t difficult to imagine society-destroying outcomes. Absent consequences, unethical operators of AI platforms will exploit it.

The reality is that the genie is out of the AI bottle, and there’s no putting it back. The big question is: Will the market sort it out on its own, or is some regulation necessary?

Unfortunately, history has shown us that markets will not sort this out on their own if operators of AI platforms face no consequences for causing harm. The best option is to create liabilities for AI operators whose systems cause demonstrable harm.

There’s a simple rule from the physical world that seems useful in controlling the risks of AI, namely, a credible threat. If you knowingly release a product that is risky for society, like a toxic effluent into the environment, you are liable for damages. The same must apply to AI. Let the market function, but with the potential for repercussions if a harmful AI product is released without sufficient analysis and oversight. Harmful activity would cover areas such as terrorism, money laundering, damage to mental health, and the manipulation of markets or populations. A regulatory body would need to be created to oversee these risk areas.

Additionally, there must be rules around the use of training data for AI. Language models have arisen because there has been no law against using the copious amounts of freely available data on the internet to train them. A notable property of AI systems is that their intelligence scales with data: the more data they ingest, the more they are able to understand.

The previous generation of tech companies got rich using data, much of which was collected through questionable means. The damage was an invasion of privacy, but the benefits were significant. This time around, considering the sensitivity of the data we share with agents such as ChatGPT, the risks are much higher, including an intrusion into our lives by agents capable of unforeseen harm.

If lawmakers do not act now, society could forever be altered.

Vasant Dhar is a professor at the Stern School of Business and the Center for Data Science at New York University. An artificial intelligence researcher and data scientist, he hosts the podcast “Brave New World,” which explores how technology and virtualization in the post-COVID-19 era are transforming humanity. He brought machine learning to Wall Street in the 1990s and subsequently founded the machine-learning-based hedge fund SCT Capital Management.



 
