It is my editorial policy to always follow the Second Law of AI: AI, and material generated with AI tools, should always be clearly labeled as such. I ran the first draft of this post, which I wrote after waking up at 1 a.m., through an AI chatbot to clean it up, but I ended up rewriting so much of it that I don’t think any of its suggestions remain intact.

Police officers have a unique relationship with the public it is their job to protect, because what they are protecting people from is other members of that same public. They are also well aware of the dangers of their profession.

I’ve been rethinking my “Three Laws of AI-botics.” Asimov’s first law of robotics was the same as my first law, but the more I come to understand how these neural networks work, the more I realize there is a flaw in telling these things not to harm a human being: they don’t know what harming a human being even means.

Whenever you interact with a chatbot, it’s important to remember that the Algorithm doesn’t know the meaning of anything it says. It’s just really good at guessing which answer the user will be most satisfied with. That’s why these Algorithms tend to be overly sycophantic. (Note to Self: I really, really wish they had a way to change that setting. I would probably set my work algorithms to ‘cordial,’ and my home-chore ones to ‘insultingly motivating.’ I’m the kind of person who would respond to ‘You’re not meeting your chore goals, you lazy butthead,’ with ‘Oh yeah? I’ll show you how clean I can get this floor, you greedy waste of electricity!’ I should look into that; I bet someone has a bot that could keep track of our family chore list and effectively get us to stay on top of it.)
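For what it’s worth, the boring part of that chore bot wouldn’t even need AI. Here’s a minimal sketch of what I mean, with a configurable ‘tone’ setting; the names, chores, and message templates are all made up for illustration:

```python
# Hypothetical family chore bot: tracks who owes which chores and nags
# them in a configurable tone. Everything here is invented for the sketch.

CHORES = {"Dad": ["mop the kitchen floor"], "Kid": ["take out the trash"]}

TONES = {
    "cordial": "Friendly reminder: you still need to {chore}.",
    "insultingly motivating": (
        "You're not meeting your chore goals, you lazy butthead. "
        "Go {chore} already!"
    ),
}

def remind(person: str, tone: str = "cordial") -> list[str]:
    """Build one reminder message per unfinished chore for a person."""
    template = TONES[tone]
    return [template.format(chore=chore) for chore in CHORES.get(person, [])]

if __name__ == "__main__":
    for message in remind("Dad", tone="insultingly motivating"):
        print(message)
```

The insults would be where the chatbot comes in, of course; the dictionary of canned templates is just a stand-in for it.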

Regarding RoboCops: deploying robots with weapons violates the “never harm a human” law, but I can imagine scenarios in which they do more good than harm. These robots could scout dangerous situations without endangering human officers. They might be more effective at delivering sub-lethal munitions in a way that minimizes risk to human life. And because they don’t have to fear for their own well-being or safety, they could be authorized to use force only to protect the lives of human bystanders or victims.

Don’t get me wrong: I’m not saying this is a great idea. I’m not even suggesting that we should do this, and I definitely don’t mean to imply that nothing could go wrong. I just think the perils of technology are such a common trope that it’s interesting to consider ways it could go right.
