AI Chatbots in Policing: Will Automation Hold Up in Court?

Police officers are starting to use AI chatbots to write their crime reports, and most people are wondering: will it hold up in court? The image here, from the Associated Press, shows a police officer standing in front of a crime report. You might think the officer talks to the chatbot, or does a basic preliminary write-up and the chatbot handles the rest. No, no, no, no. This is an all-in-one, all-encompassing system from Axon, the company that makes body cams, Tasers, and a variety of other police and security equipment. It feeds the raw body-cam video to an AI, and the AI analyzes the video and writes the crime report based on it. The officer doesn't actually have to input anything; he just docks his little camera, and the system uses AI to write the crime report.

Now, the police officer said it was a better report than he would have ever written, and that it was 100% accurate. It flowed better. It even documented the fact that he didn't remember hearing another officer mention the color of the car the suspects ran from.

The problem with this is that you're adding AI to a system that will ultimately determine who gets prosecuted and imprisoned. And at the volume police departments deal with, this isn't getting a lot of oversight. People are just docking their devices and letting them write crime reports. There are going to be flaws and problems. For example, Microsoft's Copilot falsely accused a court reporter of crimes that he had covered. What it accused him of is stuff I probably shouldn't talk about too much here in the video, but basically, when he put his own name into Copilot (which is built on the same GPT models as ChatGPT), it pulled up his crime reporting. He had reported on a variety of horrible things that had happened.
And because his name was attached to those stories, the AI confused the reporter with the people being accused of the crimes. When it spit back information, it accused the reporter of a variety of crimes that he had covered, not committed. Those are the kinds of errors that are possible, and likely, with the new AI-based body cam reports. I personally don't like it. I think it's actually kind of reasonable to use as a tool to get started, with a human doing the report. But at the end of the day, a human being needs to be there and double-check all of this, because right now AI models still hallucinate.