Throughout the last week of August, California lawmakers passed several much-praised yet controversial bills to restrict the risks associated with AI, which now await the Governor's signature.

Two key bills attracting focus are SB 1047 and AB 2839.
Senate Bill (SB) 1047, aka the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, would mandate that AI models trained with more than $100 million worth of computing resources follow and disclose a Safety and Security Plan (SSP) to prevent those models from causing "critical harm", defined as mass casualties or material damages exceeding $500 million. From 2026, it would also require a third-party auditor to perform an independent check of safety and compliance, and it would strengthen protections for whistleblowers.
Assembly Bill (AB) 2839 "would prohibit a person, committee, or other entity" from distributing a deliberately deceptive and malicious 'election communication'. It also requires deliberately fake content to be labelled as such.
Senate Bill 942 requires certain providers to make an AI detection tool available to users at no extra cost.
Opinions on these bills are split, particularly on SB 1047, which will directly impact the companies developing large AI models. Elon Musk supports it for limiting risk to the public; OpenAI does not, arguing it "unfairly burdens" innovation and businesses, especially startups.
Many argue that the regulation's key strength is its adaptability: because AI is an evolving field, the bill does not prescribe a specific way to regulate it. Turing Award winner Professor Yoshua Bengio called it "an excellent bill" from which the EU should "take inspiration".
The California Legislature has also been praised for bypassing a gridlocked national Congress to target Silicon Valley, the home of US tech giants, in a move that could inspire subsequent regulation and enforcement across the globe.
Natalia Fritzen, AI Policy and Compliance Specialist, comments: “The impact of these Bills, if signed, will be huge. They’re aimed at Silicon Valley, the home of global Big Tech, and are truly multi-faceted in their approach. They’ll target the distribution and regulation of deepfakes, both political and pornographic, and ensure platforms provide users the AI tools to help spot synthetically generated content. SB 1047 will also reduce the risk of large-scale, structural AI system failures - which are not an immediate risk, but certainly could be in the future.
“Together, these will aid current and future interoperability, helping ensure global compliance with the EU AI Act, the UN’s Information Integrity Principles, the UK’s long-promised AI Bill, and any future global legislation on AI safety and deepfakes. It may even inspire the US to take action federally.
“Questions remain, however. It’s not clear how cohesive this state legislation will be with current or forthcoming federal legislation. Furthermore, the Bills don’t tackle generative AI-led scams and fraud, which are increasingly impacting organisations and individuals - with $12.3 billion in damage done to businesses in 2023 alone.
“Though it’s important to remember that many of these laws are just a starting point, and right now we aren’t exactly sure of the best way to regulate something so novel. What we do know is that a rapidly developing technology will need well-equipped regulators able to adapt fast.
“Furthermore, we can be sure that social media platforms must do more to tackle the distribution of misinformation, whether political, fraudulent, or pornographic. They have an ethical and legal duty to protect their users from manipulation and misinformation. External guardrails which penalise individuals and companies who continue to ignore legal and social responsibility are urgently needed, alongside AI age estimation technology to better protect young, vulnerable users.”