California Relaxes AI Bill Before Final Vote Following Anthropic’s Advice

Key Takeaways:
- California has significantly softened SB 1047 after pushback from Anthropic and other Silicon Valley players.
- The amendments curb the Attorney General’s enforcement powers, scrap the proposed Frontier Model Division, and replace perjury-backed safety attestations with public statements.
- The bill now heads to the Assembly floor for a final vote; if it passes, it returns to the Senate before reaching Governor Newsom’s desk.

California’s effort to regulate advanced artificial intelligence, encapsulated in SB 1047, has become a focal point of debate between state legislators and Silicon Valley. Initially drafted with stringent measures to prevent AI-related disasters, the bill has been significantly altered following substantial pushback from the tech industry, particularly from AI firm Anthropic and other major players in Silicon Valley.

“Are they watering down the bill to avoid taking responsibility for AI gone wrong?” — Dan TheMoneyMan (@TipsByDan), August 15, 2024

The original version of SB 1047 was intended to hold developers of large AI systems accountable, especially in scenarios where those systems might cause widespread harm or cybersecurity incidents with damages exceeding $500 million. The bill proposed that California’s Attorney General should be able to sue AI companies for negligent safety practices even before any catastrophic event occurred. This provision, however, has been removed following advice from Anthropic and other stakeholders in the artificial intelligence industry.

“I really struggle to see how industry could justify opposing the bill now, especially given point 2️⃣ below. If the risks of their systems are as small as they claim, their chance of getting a penalty is proportionally small!” — Michael Cohen (@Michael05156007), August 16, 2024

The amended bill now limits the Attorney General’s powers to seeking injunctive relief (essentially asking a company to stop a potentially dangerous operation) or suing only after a catastrophic event has occurred.

Another major change is that the bill no longer includes the creation of a new government agency, the Frontier Model Division (FMD), which was initially planned to oversee the implementation of AI safety measures. Instead, the bill now proposes the establishment of a larger advisory body, the Board of Frontier Models, within the existing Government Operations Agency. This board, expanded from the originally planned five members to nine, will be responsible for setting thresholds for AI models, issuing safety guidelines, and regulating auditors. The shift reflects a compromise aimed at balancing regulatory oversight with industry flexibility.

“here’s an improvement: pull the bill. completely unnecessary.” — Korben (@korbencopy), August 16, 2024

Further softening of the bill includes the removal of the requirement for AI labs to submit safety test results under penalty of perjury, a measure originally intended to ensure that companies adhere strictly to safety protocols. The revised bill only requires labs to submit public statements about their safety practices, eliminating the threat of criminal liability. Moreover, the language surrounding AI model safety has been relaxed: developers are now required to exercise “reasonable care” to prevent their AI models from posing considerable risks, as opposed to the stricter “reasonable assurance” previously mandated.

“Just drop it dude. You want to do something productive? Legalize nuclear power in CA. Leave our kids and our AI alone!” — John Vignocchi (@john_vignocchi), August 16, 2024

The bill also introduces protections for open-source developers. Under the new amendments, developers who spend less than $10 million fine-tuning an AI model are not considered liable under SB 1047; instead, responsibility remains with the original, larger developer of the model. This change is seen as a move to protect smaller players in the tech ecosystem from the potential burden of regulatory compliance.

These amendments come in the wake of massive opposition from industry leaders and several U.S. Congress members representing California. Critics argue that, even in its amended form, the bill could stifle innovation and disproportionately impact startups and small businesses.

“State is falling apart. Wiener wants more regulations 🤣” — Regular Ziggy (@RegularZiggy), August 16, 2024

Representatives Ro Khanna and Zoe Lofgren, both Democrats from Silicon Valley, have been vocal about their concerns, warning that the bill may be more focused on hypothetical risks than on real, immediate issues like misinformation, discrimination, and workforce displacement. Just after the bill passed the Appropriations Committee, eight California Congress members sent a letter to Governor Gavin Newsom urging him to veto it. They argued that the legislation, even with its amendments, “would not be good for our state, for the start-up community, for scientific development, or even for protection against possible harm associated with AI development.”

Despite these concerns, SB 1047 has made its way through California’s legislature relatively smoothly, thanks to the Democratic majority. Its future, however, remains uncertain.

“Still flawed. Pull the bill. Don’t kill AI way before it starts to fly.” — Patrick Breitenbach 🇺🇸 (@pbreit), August 16, 2024

As SB 1047 moves to the California Assembly floor for a final vote, its fate will depend on the delicate balance between advancing AI safety and maintaining the state’s position as a global leader in technology innovation. If the Assembly passes the bill, it will return to the Senate for approval of the latest amendments before landing on Governor Newsom’s desk, where he will have the final say on whether it becomes law.

“you show your ugly head again, no AI developer is backing SB 1047 and you’re lying to your teeth.. again” — Soheil Yasrebi (@soheil), August 16, 2024

The outcome will be closely watched in California and across the United States, as it could set a precedent for how AI is regulated nationwide.