Regulate generative AI now or suffer catastrophic consequences later
To say Sam Altman took a bold step forward would be an understatement.
In his recent, pointed testimony to Congress, the CEO of OpenAI, the company that created ChatGPT, lobbied the federal government for regulations and tighter restrictions around artificial intelligence systems.
“What we need at this pivotal moment is clear, reasonable policy and sound guardrails,” Altman said. “These guardrails should be matched with meaningful steps by the business community to do their part and achieve the best outcomes for their customers. This should be an issue where Congress and the business community work together to get this right for the American people.”
It is rare for a tech CEO to proactively solicit government regulation of technology, and rarer still for the leader of a company riding meteoric growth like ChatGPT's. You would expect someone in Altman's position to fight limitations and guardrails.
To be clear, this is a watershed moment. Privacy violations, inaccuracy, bias and abuse could all lead to catastrophic results, given the current state of generative AI. Altman's plea recognizes the technology's tremendous promise while acknowledging flaws that create serious problems for consumers and businesses.
For example, consider the damaging fallout for a law professor who was wrongly accused of sexual harassment after ChatGPT generated false information about him. The falsely accused professor had no recourse to have that information removed.
Or consider the group of students at Texas A&M University-Commerce who were temporarily denied their diplomas after a professor ran their assignments through ChatGPT and erroneously concluded that they had been written by AI.
AI bias is also an issue. The algorithm used to determine creditworthiness for Apple's credit card, the Apple Card, gave one male applicant 20 times the credit limit his wife received, despite her higher credit score. Apple co-founder Steve Wozniak acknowledged the issue, noting that he, too, received 10 times more credit than his wife did.
There are many other examples of potentially life-altering outcomes in cases where generative AI gets it wrong. So what should be done, and who should do it?
Altman has the right idea. It should be a group effort involving legislative guardrails and more responsible use of AI by businesses.
For Congress to address harmful flaws in generative AI platforms, it must gather insight from a diverse set of constituents who understand issues such as bias and abuse prevention, data accuracy, data privacy and data quality transparency, just to name a few.
It is also essential that the federal government formulate new regulations with empathy and consider how damaging the fallout of an AI error can be. Lawmakers should hear real stories from individuals and businesses that have fallen victim to AI inaccuracies and legislate safeguards to ensure data accuracy, even-handedness, privacy and quality.
There are also important steps businesses using generative AI must take to help make information safer and more accurate. First, consider the “garbage in, garbage out” principle: companies must take extensive measures to ensure that any information they feed into AI platforms is as accurate as possible, and they must collect data only for a specific, well-defined purpose.
To that end, businesses must also take every precaution to make data privacy a top priority whenever they train or use AI systems. Writer, a popular generative AI platform for enterprises, surveyed business leaders about their use of AI, and an astounding 46 percent of respondents believed that employees had accidentally shared private information with ChatGPT. So businesses must apply data protection methods, such as masking or anonymizing sensitive records, before company data ever reaches an AI tool.
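As a minimal sketch of what such a precaution might look like in practice, the Python snippet below strips a few common identifiers from text before it is sent to an external generative AI service. The patterns and placeholder labels are illustrative assumptions, not a description of any specific product's safeguards; real deployments would rely on purpose-built PII-detection and data-masking tools.

```python
import re

# Illustrative patterns for a few common identifiers. These are assumptions
# for the sake of example and are far from an exhaustive safeguard.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognizable identifiers with labeled placeholders."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = "Summarize this complaint from jane.doe@example.com, SSN 123-45-6789."
    # Prints: Summarize this complaint from [EMAIL REDACTED], SSN [SSN REDACTED].
    print(redact(prompt))
```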
Finally, the business community must collaborate with regulators to enact effective legislation. Ultimately, these laws could directly affect the latitude with which businesses use generative AI, so being part of the process is in their best interest. From a more altruistic perspective, they can offer insight that could have a meaningful impact on creating more responsible generative AI for generations to come.
What Altman did took courage. He could have downplayed concerns about ChatGPT and headed back to San Francisco. Instead, he acknowledged the magnitude of impact that generative AI will have on our future — good or bad.
The time for Congress and the business community to work together is now. It is up to us to leverage the momentum that Altman created and help secure the potential of this powerful and helpful technology.
Ameesh Divatia is CEO and co-founder of Baffle.