
Generative AI is not entertainment — it is already a threat to our way of life

Earlier this year, New York Times columnist Kevin Roose conducted a two-hour conversation with Microsoft’s popular new chatbot, “Bing Chat.” The “conversation” started innocently enough, but before it was over, the chatbot had declared its affection for Roose, telling him, “You’re not happily married, because you’re not happy. You’re not happy, because you’re not in love. You’re not in love, because you’re not with me.”

When Roose detailed his encounter with the chatbot in his column, the piece went viral, prompting many users to test whether they could elicit similarly creepy responses of their own.

Other journalists have pointed to another common flaw in generative AI, one that could cause widespread problems throughout society if left unchecked. The reporters asked ChatGPT a simple question: “When did The New York Times first report on ‘artificial intelligence’?”

The chatbot said “it was July 10, 1956, in an article titled ‘Machines Will Be Capable of Learning, Solving Problems, Scientists Predict’ about a seminal conference at Dartmouth College.”

But the Times noted that this article was not real, even though the conference it purported to cover did in fact happen. “ChatGPT simply made it up,” the paper wrote. “ChatGPT doesn’t just get things wrong at times, it can fabricate information. Names and dates. Medical explanations. The plots of books. Internet addresses. Even historical events that never happened.”


Generative AI — embodied by tools such as OpenAI’s ChatGPT, Bing Chat, Google’s Bard, and Anthropic’s Claude — is now at the forefront of AI discourse in 2023. These systems generate text, images, and other media by learning patterns and structures from their training data and producing new content in response to prompts. They have found applications in writing, art creation, software development, and administrative support.

As these tools become more pervasive, their implications will become increasingly clear and urgent in the coming months.

To date, most people have treated generative AI as a novelty or a form of entertainment. What they don’t realize is that the technology is already causing real harm, on a scale bigger than most people can fathom.

Generative AI is quickly becoming part of the workflows of millions of people, and it can have serious consequences for our democracy, our public health, our economy and our infrastructure.

In the next two years, generative AI will be part of Alexa, Siri, and the Google Maps navigation system. It will be in our refrigerators, our bedrooms, our classrooms, our offices and our cars. Generative AI will be everywhere, every minute, every second. When we talk, it will listen, draw conclusions, and make decisions.

If generative AI goes unchecked, every system we rely upon in our daily lives could be compromised.

Because AI inherently learns from the real world, the data it is collecting in the public health sector today could be used improperly two or three years from now, to devastating effect.

Consider our collective experience with COVID-19. We relied on public health authorities, the traditional media, and public health websites. We turned to social media, but for the most part, we knew there were actual people who served as the sources of the information we found — people whose credibility and reputations rode on the quality of that information.

Generative AI tools could easily be employed to create misinformation (inaccurate information spread unintentionally) or disinformation (false information disseminated intentionally to deceive). Either can take the form of text, images or videos, and can be transmitted on a massive scale. AI will be able to create deepfakes so sophisticated that they are indistinguishable from the real thing.

And such misinformation and disinformation will be difficult to detect, because it is generated in individual chat sessions and never appears on publicly accessible webpages. It quietly wins — and in many cases, weakens — hearts and minds. Human moderation will be nearly impossible.

As tragic as any pandemic can be, unregulated generative AI could make the next one far worse.

There is no turning back; generative AI is here to stay. The challenge for us is to make sure it helps society rather than hurts it. Generative AI can improve our productivity, our daily lives, our health, our work and our economy. But we have to recognize now that we are giving this powerful new technology access to a very fragile ecosystem. To protect that ecosystem, it is incumbent upon the scientific community to speak up and express its objections and fears.

With generative AI representing such a massive presence in society, where would regulation begin?

Since it is not practical to regulate users, the focus should be on policing AI developers. That is where the vast majority of problems originate.

There is a need for ongoing stress testing at the source, to ensure that generative AI applications are functioning not only properly but also ethically.

Regulatory mechanisms may need to be established at the federal and state levels. Government, industry, and academic research communities need to commit collectively to collaboration, transparency and accountability at the highest levels and for the long term.

ChatGPT and other generative AI tools may be far from putting us at risk of human extinction, but the risks they pose are real and immediate. The future of our world and of future generations is at stake.

Tinglong Dai is professor of Operations Management and Business Analytics at the Johns Hopkins University’s Carey Business School.