The views expressed by contributors are their own and not the view of The Hill

How much restraint is needed with AI?

Artificial intelligence (AI) has become a major disruptor. Our digital society has facilitated its advances, with opportunities to impact every facet of life, including health care, transportation and security. It has also created threats that have prompted some to call for greater restraint on its development and implementation.

The risks have been well defined, including loss of jobs, spreading of misinformation and even the development of highly autonomous weapons. A group of technology leaders has called for a pause in certain levels of AI development (with capabilities up to OpenAI’s GPT-4), claiming that some AI systems may pose “profound risks to society and humanity.” When Geoffrey Hinton left his position at Google because of his concerns about AI, this brought even more attention to these potential risks. 

Chatbots have garnered significant attention, providing human-like interactions that would score well on the Turing Test, the procedure proposed by Alan Turing to assess whether a computer system's communication is indistinguishable from a human's.  

ChatGPT has been at the forefront of such advances, with concerns raised in several domains, particularly education. For example, ChatGPT was able to pass a bar exam taken by lawyers and score above the median on the MCAT exam used for admission to medical school.

The risks of chatbots have been well documented. They can spread misinformation through social media and other communication vectors. They can be harnessed during political campaigns to sway voters with propaganda. They can foment social unrest with targeted messaging that incites anxiety and provokes responses that endanger and harm society.


But the genie is already out of the bottle. Attempting to pause or restrain such advances is futile. The more salient issue is how we learn to live with AI systems that have not even reached their full level of capability.

Placing restrictions on AI advances in the United States makes no sense. Though corporations and the government are making significant AI investments, other countries, including some that are not friendly with us, are moving full speed ahead. The AI arms race is in full gear, with no well-defined endpoint. If our nation or our allies do not lead AI development in the world, other countries that may use such capabilities for nefarious purposes could gain the upper hand.

So what can our nation do to simultaneously restrain AI development that can be harmful while encouraging AI development for positive outcomes?

Much like with cybercriminal activity, the ideal approach is to stop it at its source, which is near impossible. The next best option is educating users so that they do not fall victim to such activities.

AI systems have the potential to act as Trojan horses: once they infiltrate some entity, they can wreak havoc and destruction. Yet stopping AI system development carries with it more risk than allowing it to develop untethered. This is because the people who will respond to such calls are not the people, organizations and entities that need to be stopped. And such bad actors are unlikely to listen to any calls for moderation and restraint.

By advancing and accelerating AI system development, not only will new capabilities be achieved, but systems to counter those capabilities will also emerge. This will create “checks and balances” that over time will provide the necessary guardrails that any pause will most certainly not achieve.

Creating AI rules of conduct is appropriate and necessary. Conducting responsible AI system development is worthy of discussion and debate. Progress, not pause, is the path forward to achieve success and find a safe zone for AI systems.  

Sheldon H. Jacobson, Ph.D., is a professor of computer science at the University of Illinois Urbana-Champaign. A data scientist and operations researcher, he applies his expertise in data-driven, risk-based decision-making to evaluate and inform public policy.