Artificial Intelligence: Separating Fact from Fiction in Military Applications

Are you concerned that the artificial intelligence the Air Force is adding to fighter jets like the F-16 could one day go rogue? If so, you’ve probably been misled by a combination of hyperbolic news headlines and clout-goblin creators on apps just like this who ignore reality in favor of the more sensational narrative that will buy them more engagement. So let’s talk about what AI actually is. Because the truth is, we have much less to fear from AI going rogue than we do from AI working exactly as advertised in the hands of those who mean us harm. Now, just a few weeks ago, Air Force Secretary Frank Kendall rode in the US Air Force’s X-62A VISTA, a heavily modified F-16D piloted by artificial intelligence. And according to Kendall himself, the AI in control of his aircraft was pretty evenly matched with an Air Force test pilot with more than a decade’s experience flying that specific jet. And that is certainly a big deal.

But when we’re talking about artificial intelligence like that, or even the artificial intelligence that powers Siri in my phone, it’s important to understand that what we’re actually talking about isn’t the AI you’ve seen in the movies. These days, artificial intelligence is sort of a catch-all phrase that we use to encapsulate a whole laundry list of machine learning and automation technologies. And we just tend to call it all AI, in part because, well, it’s just good marketing for the firms developing this tech.

But to help you better understand what I mean, I’m gonna quote Alison Gopnik, a professor of psychology at the University of California, Berkeley, and a member of the Berkeley AI Research group. Professor Gopnik writes, “We call it artificial intelligence, but a better name might be extracting statistical patterns from large datasets. The computational capacities of current AI, like large language models, don’t make it any more likely that they’re sentient than rocks or other machines are.” And this really hits the nail on the head. We’ve been culturally conditioned to believe that artificial intelligence means sentient thinking machines, thanks to decades of depictions of AI in movies and fictional writing. But comparing the AI we have today to the sentient AI we see in movies is a lot like comparing the radiation produced by your microwave to the radiation produced by a nuclear detonation. Sure, they do have some things in common, but one is clearly a bit more concerning than the other.
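And if “extracting statistical patterns from large datasets” sounds abstract, here’s roughly what that looks like in its simplest possible form. To be clear, this is a toy Python sketch I’ve made up for illustration, not how any real model is built: it “learns” purely by counting which word follows which in a tiny made-up corpus, then predicts from those counts.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: pure statistics, zero understanding.
# The "training corpus" below is made up purely for illustration.
corpus = "the jet banks left the jet banks right the pilot banks left".split()

follower_counts = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    follower_counts[word][next_word] += 1  # tally every observed word pair

def predict_next(word: str) -> str:
    """Return the word that most often followed `word` in the corpus."""
    followers = follower_counts.get(word)
    return followers.most_common(1)[0][0] if followers else "<unknown>"

print(predict_next("jet"))    # -> banks (the only word ever seen after "jet")
print(predict_next("pilot"))  # -> banks
```

A large language model is doing something vastly more sophisticated than that, with billions of parameters instead of a tally table, but the spirit is the same: patterns in, predictions out. There’s no ghost in the machine.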

AI engineers and researchers generally categorize AI by either type or functionality. The three types of AI are narrow or weak AI, general or strong AI, and super AI. Super AI is the AI we’ve all learned to fear in movies. It’s Skynet, a thinking machine so complex and intelligent it can conjure solutions to problems that human beings may not even be able to grasp. Of course, super AI is strictly theoretical at this point. Then we have general or strong AI, which describes systems that are capable of learning new skills and applying theoretical knowledge to novel practical applications. These are systems that can emulate the way human minds think, and they are also purely theoretical at this point. In fact, the Congressional Research Service believes we are still decades away from reaching general AI. And finally, we have narrow or weak AI, which is the form of AI that literally every artificial intelligence system in use today falls within. Now, IBM describes narrow or weak AI as systems that are trained to perform a single or narrow task, often far faster or better than a human mind can, but that still can’t perform outside of that defined task. Instead, narrow AI targets a single subset of cognitive abilities and advances within that spectrum. Every single AI system, app, algorithm, agent, model, whatever you wanna call it, in use or in development today falls within the confines of narrow AI. From Siri in my phone to the AI agents flying F-16s, these systems are designed and trained to perform single or very narrow sets of tasks within pre-established circumstances using only the data that they’re provided. They’re not capable of abstract thought. They’re not capable of sentience. They’re not capable of much other than taking input and providing output, just like an incredibly complex calculator.
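And if “an incredibly complex calculator” sounds dismissive, here’s a minimal sketch of a single artificial neuron, the kind of building block these systems scale up from. Again, everything here is made up for illustration: this one is trained on exactly one narrow task, deciding whether a point sits above or below a line, and it’s useless for anything else.

```python
# A minimal "narrow AI": one artificial neuron learning one tiny task.
# The data and numbers below are made up purely for illustration.

# Training data: grid points labeled 1 if above the line y = x, else 0.
data = [((x / 10, y / 10), 1 if y > x else 0)
        for x in range(11) for y in range(11) if x != y]

w1, w2, b = 0.0, 0.0, 0.0           # its entire "knowledge" is three numbers
for _ in range(20):                 # classic perceptron learning rule
    for (x, y), label in data:
        prediction = 1 if (w1 * x + w2 * y + b) > 0 else 0
        error = label - prediction  # 0 when right, +1 or -1 when wrong
        w1, w2, b = w1 + error * x, w2 + error * y, b + error

# Input in, output out. Ask it anything other than "above or below the
# line?" and it has no answer, because that's all it was ever trained on.
print(1 if (w1 * 0.2 + w2 * 0.9 + b) > 0 else 0)  # -> 1 (above the line)
```

The AI flying the X-62A is unimaginably more capable than that, but it lives under the same ceiling: it maps inputs to outputs within the task it was trained on, nothing more.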

And before you say, well, what about people like that Google engineer who thought Google’s LaMDA large language model had achieved sentience? Doesn’t he know more than you? To that, I would recommend looking into ELIZA, the world’s first-ever chatbot, devised all the way back in the mid-1960s by MIT’s Joseph Weizenbaum. Now, Weizenbaum coded ELIZA to be a very simple, automated psychotherapist, basically just asking how you feel and then repeating your statements back to you in the form of a question to get you to elaborate. That was really all ELIZA could do.
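Weizenbaum wrote the original ELIZA in a language called MAD-SLIP, but its central trick is simple enough to sketch in a few lines of Python. To be clear, this is my own toy imitation of the idea, not Weizenbaum’s actual program: reflect the pronouns, then mirror the statement back as a question.

```python
import re

# A toy imitation of ELIZA's central trick: swap the pronouns, then mirror
# the user's statement back as a question. That's the whole illusion.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def eliza_reply(statement: str) -> str:
    words = [REFLECTIONS.get(w, w)
             for w in re.findall(r"[a-z']+", statement.lower())]
    return "Why do you say " + " ".join(words) + "?"

print(eliza_reply("I am worried about my future"))
# -> Why do you say you are worried about your future?
```

Trivial as that is, it was enough to convince plenty of otherwise reasonable people that something on the other end was listening.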

But almost immediately, Weizenbaum recognized that people were very quick to personify ELIZA, suggesting that this very simple chatbot had actually achieved sentience. I’ll quote him from a book he published in 1976 entitled Computer Power and Human Reason: “I knew from long experience that the strong emotional ties many programmers have to their computers are often formed after only short experiences with machines. What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.” Put simply, people wanted to see sentience in ELIZA, so they found it.

And the same can be said today of far more complex systems like LaMDA. AI, as it exists today, isn’t capable of going rogue and won’t be for some time to come. But that doesn’t mean AI doesn’t warrant concern. The AI systems we do need to worry about aren’t the ones that go haywire, they’re the ones that work exactly as advertised and are powering adversary weapons of war. Because AI can be a very effective means of increasing both combat capability and capacity. And that is something that people like me do lose sleep over.