California AI bill divides Silicon Valley, draws in national policymakers

A California bill seeking to create new safety rules for artificial intelligence (AI) has opened a divide in Silicon Valley and drawn national lawmakers into the debate, a rare foray by federal policymakers into Golden State politics.

California Senate Bill 1047, known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, would require powerful AI models to undergo safety testing before they can be released to the public and would hold developers liable for severe harms caused by their models. 

The bill must pass the state Legislature by the end of the week to reach Gov. Gavin Newsom’s (D) desk.

Major technology firms, AI startups and researchers are split over whether the legislation would stifle innovation in the rapidly developing field or put in place much-needed guardrails.

Billionaire tech mogul Elon Musk, who owns the AI company xAI, came out in support of S.B. 1047 on Monday, saying it was a “tough call” and acknowledging that his decision might “make some people upset.” 

“All things considered, I think California should probably pass the SB 1047 AI safety bill,” the Tesla and SpaceX CEO wrote on his social platform, X. “For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk to the public.”

But former Speaker Nancy Pelosi (D-Calif.) and several other California lawmakers have come out against S.B. 1047. In a statement earlier this month, Pelosi said “many” in Congress view the legislation as “well-intentioned but ill informed.” 

“AI springs from California,” the former Speaker said. “We must have legislation that is a model for the nation and the world. We have the opportunity and responsibility to enable small entrepreneurs and academia – not big tech – to dominate.” 

Eight California Democrats — Reps. Zoe Lofgren, Anna Eshoo, Ro Khanna, Scott Peters, Tony Cárdenas, Ami Bera, Nanette Díaz Barragán and Lou Correa — also sent a letter to Newsom urging him to veto the bill if it reaches his desk. 

“It is somewhat unusual for us, as sitting Members of Congress, to provide views on state legislation,” they wrote. “However, we have serious concerns about SB 1047 … and we felt compelled to make those concerns known to California state policymakers.” 

The lawmakers argued that the methods for understanding and mitigating AI risks are “still in their infancy,” noting that the National Institute of Standards and Technology has yet to issue the guidance companies would be required to follow, and that the bill’s compute thresholds “will almost certainly be obsolete” by the time it goes into effect. 

They also criticized the legislation for focusing too much on “extreme misuse scenarios and hypothetical existential risks” instead of real-world risks and harms, such as disinformation or deepfakes.

“In short, we are very concerned about the effect this legislation could have on the innovation economy of California without any clear benefit for the public or a sound evidentiary basis,” they wrote. “High tech innovation is the economic engine that drives California’s prosperity.” 

California state Sen. Scott Wiener (D), who introduced the legislation, has argued it would affect only the largest AI developers, not small startups, and would require them to perform safety tests they have already promised to complete.

“While I have enormous respect for the Speaker Emerita, I respectfully and strongly disagree with her statement,” Wiener said in a statement in response to Pelosi’s opposition. 

“When technology companies promise to perform safety testing and then balk at oversight of that safety testing, it makes one think hard about how well self-regulation will work out for humanity,” he added. 

Wiener also emphasized that legislators have amended the bill in response to industry concerns, limiting the California attorney general’s ability to sue developers before actual harm has occurred and creating a threshold for fine-tuned open-source models. 

Under the amendments, open-source models that are fine-tuned at a cost of less than $10 million would not be covered by the legislation — a welcome addition for the open-source community. 

“If you’re a model developer, and you’re working on an open-source project way downstream, you don’t necessarily know how much compute was used to initially train a model, because you might be the third or the fourth person changing that original model,” Joel Burke, a policy analyst at Mozilla, explained to The Hill. 

“That’s part of what open source is, so it’s great that they’re including a floor,” he continued. 

Mozilla, EleutherAI and Hugging Face sent a letter to Wiener in early August, voicing concerns about S.B. 1047’s potential impact on the open-source community. Not all of their concerns with the legislation were addressed in the amendments, Burke noted. 

Anthropic, whose feedback prompted several of the amendments, similarly said Thursday that the latest version of the legislation was “halfway between our suggested version and the original bill.” 

“In our assessment the new SB 1047 is substantially improved, to the point where we believe its benefits likely outweigh its costs,” Anthropic CEO Dario Amodei wrote in a letter to Newsom.  

“However, we are not certain of this, and there are still some aspects of the bill which seem concerning or ambiguous to us,” he added. 

OpenAI, on the other hand, came out against S.B. 1047 last week. The ChatGPT maker joins tech giants Google and Meta in opposing it. 

Jason Kwon, OpenAI’s chief strategy officer, argued in a letter to Wiener that regulation of frontier AI models should come from the federal government and warned the bill could “stifle innovation and harm the U.S. AI ecosystem.” 

S.B. 1047 has also split key figures in the field. Fei-Fei Li, who is known as the “godmother of AI,” argued in a Fortune op-ed earlier this month that the “well-meaning” legislation would have unintended consequences for the entire country. 

“If passed into law, SB-1047 will harm our budding AI ecosystem, especially the parts of it that are already at a disadvantage to today’s tech giants: the public sector, academia, and ‘little tech,’” Li wrote.

“The Golden State—as a pioneering entity, and home to our country’s most robust AI ecosystem—is the beating heart of the AI movement; as California goes, so goes the rest of the country,” she later added. 

Yoshua Bengio, who is often referred to as the “godfather of AI,” pushed back in a Fortune op-ed written in response.

“We simply cannot hope for companies to prioritize safety when the incentives to prioritize profits are so immense,” Bengio said. “Like Dr. Li, I would also prefer to see robust AI safety regulations at the federal level.”  

“But Congress is gridlocked and federal agencies constrained, which makes state action indispensable,” he continued. “In the past, California has led the way on green energy and consumer privacy, and it has a tremendous opportunity to lead again on AI.” 

Fellow “AI godfather” Geoffrey Hinton has also endorsed the legislation, calling it a “very sensible approach” and describing California as “a natural place” to start addressing AI’s risks.

“Forty years ago when I was training the first version of the AI algorithms behind tools like ChatGPT, no one – including myself – would have predicted how far AI would progress,” he said in a statement, according to Wiener’s office.  

“Powerful AI systems bring incredible promise, but the risks are also very real and should be taken extremely seriously,” Hinton added.