The views expressed by contributors are their own and not the view of The Hill

A campaign plan to put AI regulation in the political zeitgeist

It was just one sentence — “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war” — but it packed a punch.

The Center for AI Safety released the statement this week, signed by 350 artificial intelligence scientists and tech executives. As far as one-sentence statements go, it said a lot. But unless the center’s goal is solely to be able to say “I told you so” one day, a one-line statement won’t accomplish much. 

Making politicians aware of a societal problem or risk — even an existential one — doesn’t mean they’ll actually do anything about it. For example, we’ve known about the harms posed by climate change for decades but have only recently begun to take meaningful steps to mitigate it. The existential risk posed by nuclear weapons isn’t likely to be solved by the government — they’re the ones with the nukes in the first place. Same with the risk of bio-engineered pandemics. The problem isn’t that our elected leaders don’t know these risks exist. It’s that until they’re properly incentivized, they won’t actually deal with them.

Artificial intelligence is the next evolution of the internet and of technology, but here in the U.S., we haven’t even regulated internet 2.0 yet. There are still no national laws protecting consumer data privacy. Section 230 of the Communications Decency Act, which allows and incentivizes platforms like Facebook, Instagram, YouTube and Twitter to promote toxic content no matter how harmful, is alive and kicking, despite President Biden and former President Trump each calling for its repeal or reform.

Politicians rarely do anything unless it’s in their clear political interest to do so. And if the leaders who signed the Center for AI Safety’s letter want action, they need more than a letter. They need a relentless campaign.
Every member of the House, a third of the Senate and the president are all up for reelection next year. They’ll argue over the usual issues — guns, immigration, health care, taxes — but the media gets bored quickly and is always looking for something new and shiny to focus on. Artificial intelligence is the perfect answer. Leaders in the AI space who want to see regulations proposed, supported and enacted need to make that the only politically salient position to take in the 2024 campaigns, on both sides of the aisle, in primaries and general elections.

The good news is, it shouldn’t be that hard. If I were running a campaign like this, I’d start with polling showing that the public overwhelmingly fears the potential risks and ramifications of AI running rampant. The data will almost assuredly show that the public is extremely concerned and wants something done about it. That sets the baseline for the discussion.

Then it needs to become competitive. The Center for AI Safety or some other group should create a political action committee and start running an aggressive campaign. Start with a reasonable, concise, clear policy agenda that candidates running for office at all levels of government can sign onto. Turn the policy agenda into a candidate questionnaire that forces each campaign to actually think about their candidate’s views on AI (or, more likely, forces them to develop views on AI). Endorse candidates who take the right position. Issue statements harshly urging voters to reject candidates who don’t. You could even create an AI seal of safety or some other tangible marker that candidates can feature on their websites and social media profiles to show they’re on the right side.

Then make it even more public. Pester moderators for every debate for just about every office to include an AI question. Stock candidate town halls with people in the audience ready to ask about AI. Plaster every digital platform that will take political ads with petitions asking people to both show their support and demand the same of their leaders. Flood every newspaper and blog with op-eds demanding action on AI safety. Create a podcast to put pressure on candidates to endorse the policy agenda and use some of the big names who signed the letter, like OpenAI CEO Sam Altman, to drive listenership. YouTube channels, grassroots rallies, protests, summits, opposition research — virtually every campaign tactic can be used to pressure candidates to take a stand.

And once they do, and once the election is over, you need to hold them to it. Draft legislation during the campaign and find lead sponsors now. Get legislation introduced quickly in every relevant chamber as soon as the session begins and vocally demand that everyone who endorsed the AI campaign agenda co-sponsor the bill. Call out those who break their promises — the media loves to beat up on hypocrites, and most politicians would rather cut off a finger than get bad press on an issue their base cares about. Apply pressure at every turn and don’t let the insiders dissuade you from going negative simply because they want to protect their own. Assuming the legislation can remain straightforward, simple, easy to implement and clearly nonpartisan and non-ideological, it should pass in cities and states across the country and maybe even make some inroads in D.C.

The letter was a perfectly nice first step. And it certainly provides cover for its signatories years down the road. But absent a comprehensive, aggressive, persistent effort to force action by elected officials, it also amounts to little more than a well-intentioned, ineffective start. We can do better. 

Bradley Tusk is a venture capitalist, philanthropist, and political strategist who previously served as campaign manager for former New York mayor and Democratic presidential candidate Mike Bloomberg. His first novel, “Obvious In Hindsight,” comes out this fall.