The Dual-Edged Sword of Artificial General Intelligence: Risks, Regulations, and Potential Solutions

It seems like right now the race, as you put it, is in the direction of more generality. People want to make this artificial general intelligence. That's the stated goal: they want to build this superintelligent machine, this god, this oracle. And it seems like there are a lot of financial incentives for doing that. I don't know that we can stop that other than stopping them. And yes, there are some arguments in the governance case that we should stop them, and that by regulatory or legislative decree we could, at least in the US, and under an international treaty throughout the world, just say you can't build things with certain levels of compute. We're not going to allow you to do that. And, oh, by the way, we're going to nationalize your industry. Much like not everyone is allowed to make nuclear energy, to use an example from that field, not just anyone would be allowed to make advanced AI models. We're just not going to allow you to do that, right? It's something we could do, and I think it needs to be very seriously considered.

In some ways, Chris, I hate to be the bearer of bad news, but the cat's out of the bag already, I'm afraid. The secret sauce really seems to be lots of data, lots of computation, and then better algorithms. You can't regulate algorithms in a meaningful way. It's math. You're not going to be able to regulate applied math. And we know how to make these really powerful chips now, and there are trillions of dollars in these chips. One of the things that Derek and our team, the AI governance crew, are looking at, and that the US government is talking about, is that you can put mechanisms on the chips themselves, possibly to shut them off if they start contributing to something harmful or if people misuse them. So you build a techno-fix right into the chip itself. Correct? Yep. And other people are trying to build something they're calling "safeguarded AI." This is a different architecture, where you give the AI a very detailed model of the world, and part of that model is a specification of what's good and what's bad, so the system refuses to do bad things.
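To make that "safeguarded AI" structure a bit more concrete, here is a minimal sketch in Python of the shape of the idea: proposed actions are checked against a world model's harm estimate before execution, and anything over a threshold is refused. Every name here (WorldModel, SafeguardedAgent, the harm threshold, the stubbed scores) is a hypothetical illustration assumed for this sketch, not anyone's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class Assessment:
    """World model's judgment of a proposed action."""
    description: str
    predicted_harm: float  # estimated harm, in [0, 1]


class WorldModel:
    """Stand-in for the 'very detailed model of the world'.

    A real system would run a learned model over consequences;
    here we stub in a single keyword rule purely for illustration.
    """

    def assess(self, description: str) -> Assessment:
        harm = 0.9 if "synthesize pathogen" in description else 0.05
        return Assessment(description, harm)


class SafeguardedAgent:
    """Agent that refuses actions its world model predicts are harmful."""

    def __init__(self, world_model: WorldModel, harm_threshold: float = 0.5):
        self.world_model = world_model
        self.harm_threshold = harm_threshold

    def act(self, proposed: str) -> str:
        assessment = self.world_model.assess(proposed)
        # The safeguard: the good/bad specification gates every action.
        if assessment.predicted_harm >= self.harm_threshold:
            return f"REFUSED: {proposed!r} (predicted harm {assessment.predicted_harm:.2f})"
        return f"EXECUTED: {proposed!r}"


if __name__ == "__main__":
    agent = SafeguardedAgent(WorldModel())
    print(agent.act("design a carbon-capture catalyst"))  # executed
    print(agent.act("synthesize pathogen X"))             # refused
```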

So these competitive, game-theoretic dynamics really leave one worried that, absent an international AI treaty and a stigmatization of creating these models, the way we've done with cloning and some other scientific developments, there's no way to stop them. The real problem, too, is that AI also could solve a lot of our problems. That's right.

Now, you know, not everyone's bullish on techno-solutions, and the tech world has overpromised things for a long time. But we face climate change as a potential global catastrophe, and in some meaningful ways it's possible that, using these models, we could come up with creative ways to remove carbon and find better alternative energy sources. And the AI can help us with the research and development of even more impressive tools. It's possible that we can use AI to improve R&D in synthetic biology such that we could improve longevity and eradicate disease. And it's possible that we can use narrow forms of AI to improve precision agriculture and develop nanomaterials that radically change the kind of world we live in.

Even as someone who's very concerned about these risks, I'm also very concerned about pausing or stopping development, because our world in the next 25 to 30 years faces really serious challenges, and there are really good reasons to believe that these technologies could help us overcome them. So they really are dual-use, and that's a real problem for doing the risk assessment.