Unpacking the Stalinist Perspective on AI Ethics: Embracing Vigilantism as a Solution

So someone asked me the other day at school to explain why there are so many Stalinists in the AI program that we're in, and what the core of what they believe is, because some of the claims they make are kind of confusing. What are they really arguing? The context of her asking this question was that I have chosen to embrace the Stalinist aesthetic of that side of the graduate program. So what is the core of what they're arguing for? Because they will often say things like: we should just stop. We should stop doing AI. AI is not something we should be doing. And this is an argument they're making in a class full of lawyers and engineers from companies like Adobe, who are doing AI in a big way, right? And AI was invented during the Stalin era of the Soviet Union, after all. So what is the argument that Stalinists are really making about the ethics of artificial intelligence? If it's not that we should stop, it's that the way we're doing it is bad.

And that, in general, all of these liberal attempts at solving problems around AI could never work. Even if liberals got everything they wanted, it would not solve the problem. So in terms of things like fraud, harmful deepfakes, and mass manipulation through things like tailored deepfakes built for the specific psychological profile of every individual, which every ad network already has, right? These things are scary and bad, and we should try to avoid them. And in many ways the more subtle biases and their effects are something we should try to avoid too. And especially, the prospect of something like Kali Linux or Metasploit combined with a large language model, which by the way already exists, is terrifying in terms of cybersecurity and the potential for harm.

Okay, so the most sweeping, broad progressive liberal proposal for these problems is that we need an international framework modeled after the international treaties around nuclear energy. The traditional regulatory pattern is that when a new industry forms, you have self-regulation by the industry until it doesn't work. Then you have regulation of the industry at the national level, and then you get international consensus around how the industry should be regulated everywhere in the world. That doesn't work for things that are an imminent existential threat to humanity, like nuclear weapons and nuclear energy, so regulation there takes the opposite direction: you start with international consensus, which is then implemented by nations and then implemented by industries. And the argument liberals are making is that that's how we should regulate AI. And the point I always make is that all of the things you're afraid of would still happen even if that came true. Today, for example, you have call centers all over the world where all they're doing all day is trying to defraud people. What are you going to do about that? You have countries that don't care, right? And just like cryptocurrency, just like all of these other use cases that get regulated away in liberal democracies, it just moves there and keeps happening, and it probably grows and gets worse as a result of less regulation and scrutiny in those countries, right? So all of these proposals that liberals are coming up with for solving AI problems could never work. If they got everything they wanted, it wouldn't work.

So what's the Stalinist argument? What the Stalinists are arguing is that vigilantism is a much better solution to those kinds of problems. And I don't think that I would want to personally participate in that or do that.

I am shocked that it has not already happened. Take, for example, the people who are creating artificial intelligence specifically intended to select targets for military strikes by certain countries, strikes with more than 90% civilian casualty rates. Who wrote that? Who's profiting from that? I bet someone knows where they are.

I'm shocked that there isn't vigilantism against people like that. And I think the Stalinists would tend to argue that social pressure through direct action, whether or not it's violent, is a much more effective way of deterring those kinds of behaviors than international frameworks and memos at the UN could ever be. You need real, direct, interpersonal pressure and the threat of consequences to disincentivize those kinds of use cases. That's the only effective strategy for preventing those kinds of outcomes.

Because you can already see the corollary of this on TikTok with, for example, fraud call centers: people reverse-hacking them and destroying their entire business on a livestream, for entertainment. That kind of vigilantism is effective in a way that the three-letter agencies ostensibly tasked with shutting down those kinds of fraud call centers fail to be. They just don't work. That's not really what they're there to do. They're there to justify next year's budget with big cases that get headlines. They're not there to solve problems for real people. They're there to protect capital, to serve the wealthiest in society. Real solutions to real problems come from social pressure and direct action, and that's the Stalinist argument.

And while I have absolutely no interest in participating in that, you know, I watch The Girl with the Dragon Tattoo and I'm like, that's awesome. I hope people are really doing stuff like that. I have absolutely no interest in doing it myself, but I 100% understand why, and I honestly think it's the most effective strategy, and that the strategies being implemented in liberal democracies could not possibly be effective. So I hope there are a lot of these girls with dragon tattoos out there. And I think the Stalinist argument is that that is a good thing, and a more effective way of solving problems like this. And so embracing the aesthetic is me saying: I think they're right, I agree with them. I don't wanna do that. But I will spend all day arguing with lawyers from Adobe about how they're the worst people in the world.