California Legislature Passes Groundbreaking AI Regulation Bill: Impact, Provisions, and Controversies

The California legislature just passed the first AI bill of its kind; it's waiting on one more signature from the governor before it's enacted. So I'm gonna give you all the facts of what it is and who it impacts, and then I'll give you my opinion on it after.

First, who does this apply to? It applies to any company or any developer, in the bill's terms, that is spending $100 million or more to train an AI model that's going to use 10^26 floating-point operations, or FLOPs, during training. That is a lot of compute, but according to some reports, that's roughly what GPT-4 used during its training runs, so it's not unreasonable to think that future models are going to use as much, if not more. This is likely going to impact most of the big companies you could think of, so Google, Meta, OpenAI, and Anthropic, but also smaller companies that are spending a lot of money on compute, like Mistral. (I'll sketch the rough math on that threshold in a second.)

The purpose of the bill is to prevent what it calls, quote unquote, critical harms to humanity. The common example is somebody using an AI model to orchestrate a large-scale cyberattack that could shut down critical infrastructure like electricity grids, power plants, and so on. The bill requires that developers test these models against safety and security protocols, and also bring in an independent third-party auditor to audit the models against those same protocols. It also requires developers to create, effectively, an emergency stop button to shut down the model at any point in time in case something goes wrong. (I'll show a toy sketch of that idea at the end.)

And who's gonna govern it? There's gonna be a new independent board called the Board of Frontier Models, which is gonna oversee this entire process and make sure these developers are in compliance with all of these standards and procedures. And if a model is found to be used for something harmful, those developers can be sued: the first offense is up to $10 million, and subsequent offenses are $30 million or more.

The bill was created by Senator Scott Wiener, whose reasoning was: given past policy failures around social media and data privacy, let's get ahead of AI, which is a powerful piece of technology, and regulate it from the beginning. I think a lot of people assume this has a lot of opponents in Silicon Valley, and it definitely does, but there are a surprising number of proponents as well, Elon Musk being one of them. There are also clauses and provisions for folks who are fine-tuning models: if you're creating a derivative product from the main model, you still have to abide by some safety and security protocols as well.
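Quick aside before I get to the opinion: here's what the back-of-the-envelope math on that compute threshold looks like. To be clear, this is just a sketch and not anything from the bill's text; the "6 × parameters × tokens" rule of thumb is a common approximation for dense-transformer training FLOPs, and the model numbers below are made up purely for illustration.

```python
# Back-of-the-envelope check against the bill's two thresholds.
# Uses the common "training FLOPs ~ 6 * params * tokens" rule of thumb
# for dense transformers; the specific run below is hypothetical.

FLOP_THRESHOLD = 1e26        # 10^26 floating-point operations
COST_THRESHOLD_USD = 100e6   # $100 million in training compute cost


def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough transformer training cost: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens


def is_covered_model(n_params: float, n_tokens: float, cost_usd: float) -> bool:
    """'Covered' here means crossing both the FLOP line and the dollar line."""
    flops = estimate_training_flops(n_params, n_tokens)
    return flops >= FLOP_THRESHOLD and cost_usd >= COST_THRESHOLD_USD


# Hypothetical frontier run: 1.8T parameters, 15T tokens, $150M of compute.
params, tokens, cost = 1.8e12, 15e12, 150e6
print(f"Estimated training FLOPs: {estimate_training_flops(params, tokens):.2e}")
print(f"Covered by the thresholds? {is_covered_model(params, tokens, cost)}")
# Estimated training FLOPs: 1.62e+26
# Covered by the thresholds? True
```

The point is just that a frontier-scale run plausibly lands right around that 10^26 line, which is why it's the biggest labs that end up in scope.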
Okay, so now that we've talked about the facts and what the bill is, let me give you my opinion. My problem with this bill is not necessarily that there's regulation going on; I don't think regulation is always a bad thing. My problem is that you can do a lot of the quote unquote harm today, and you don't need a large language model for it. For example, if you were a bad actor and you wanted to find the plans to build a chemical or biological weapon, all of that literally exists on the internet today; it's a Google search away. And if you think back to the CrowdStrike outage, which caused something like $5 billion worth of damage, that had nothing to do with AI. That was a faulty software update that crashed Windows machines and caused a lot of issues.

If you go further back and think about all the cyberattacks and data breaches that have happened, from WannaCry to MOVEit to Yahoo to Capital One to the Melissa virus back in 1999, none of that had anything to do with AI; those hackers didn't use AI to pull it off. They exploited fairly basic vulnerabilities in these systems, and the attacks were generally carried out by sophisticated teams that didn't need AI to understand what the security flaws were or how to exploit them. That's all to say: if somebody wants to cause damage, they can, because people have caused a lot of damage without any help from AI models. All that information is out there, whether it's honestly on Google or on the dark web. If you want to do something bad, unfortunately, you can, and you don't need AI to help you.

So it's not that I'm necessarily against the bill, but do I think it's actually gonna do anything useful? Probably not. At the end of the day, I have no problem with these massive companies being regulated to this extent for safety and security; I can understand the sentiment there. I just wish the bill did a better job of getting to the heart of what's actually causing these problems, and it's not some AI model that's about to come out. It's also not clear whether Gavin Newsom is gonna sign it. I assume he will, but in any case, it won't come into effect until 2026. So once it finally does, I'll keep you updated on how things are rolling out and eventually what the impact is.
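One last technical aside, on that "emergency stop button." The bill doesn't prescribe how the shutdown capability has to work; it just requires that developers be able to promptly shut a covered model down. Here's a toy sketch of the general idea; the flag-file mechanism and the path are entirely hypothetical, just to make the concept concrete.

```python
# Toy illustration of a "full shutdown" control for a model-serving loop.
# Nothing here comes from the bill itself; the flag-file kill switch is
# a made-up mechanism used only to make the idea concrete.

import os
import time

SHUTDOWN_FLAG = "/tmp/model_emergency_stop"  # hypothetical kill-switch path


def shutdown_requested() -> bool:
    """An operator (or automated monitor) creates this file to halt serving."""
    return os.path.exists(SHUTDOWN_FLAG)


def serve(handle_request, poll_seconds: float = 0.5) -> None:
    """Serving loop that refuses all further work once the switch is thrown."""
    while not shutdown_requested():
        handle_request()
        time.sleep(poll_seconds)
    # A real shutdown would also drain in-flight requests, revoke API keys,
    # and halt training or fine-tuning jobs, not just stop this loop.
    print("Emergency stop engaged; model is no longer serving requests.")


if __name__ == "__main__":
    # Run this, then `touch /tmp/model_emergency_stop` in another shell to stop it.
    serve(lambda: print("...handling a request..."))
```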