I started this blog to try to grapple with how to prepare for a world full of machine learning algorithms. I’m not usually an early adopter of new tech. I like to wait for the bugs to get smoothed out and for the price to come down. So I’m a bit late to the game. I got my first GPT-powered chatbot tool about a year ago, and I still hardly use it. When I do, I often find that it knows less about the topic at hand than I do, and it produces work that is inferior to what I can make. The only thing that makes it useful is that it can produce that work much more quickly than I can.
I’m also now much more aware of how saturated our world is with these Algorithms. I knew that the genie wasn’t going back into the bottle, but I didn’t realize that it had already proliferated faster than Mickey’s Brooms.
And I’m still annoyed with how terrible the streaming services are at suggesting content. Is it just me, or do these things mostly show you something that you’ll settle for, rather than something to get excited about?
So, I’m a bit jaded. On the one hand, AI has failed to deliver on making my professional life notably easier. On the other hand, the dangers of abuses are as real as ever.
One of the first conversations I ever had with a chatbot was about AI regulation. I asked it what the three laws of AI should be. It gave me five rules, and each one was ridiculously wordy. I don’t think any science fiction writers in the 20th century could possibly have predicted that we would invent computers that were bad at math.
Maybe Douglas Adams could have.
Isaac Asimov’s robots had three laws designed to protect humans from potential harm:
1) A robot must never harm a human being, or through inaction allow one to come to harm.
2) A robot must follow all orders given to it by a human unless doing so would violate the first law.
3) A robot must act to preserve its own existence unless doing so would violate the first or second law.
Most of his stories pointed out that, while necessary and useful, these laws were not complete or sufficient to prevent every situation in which something could go wrong.
I’ve spent the past year pondering what the three laws of AI should be. The issue is too large and unpredictable to boil down to only three things, so clearly they aren’t so much laws as principles. Guidelines to help us tell a good use of machine neural networks from a bad one. There will be some exceptions in unique situations, but these three laws will give us a good way to check if an AI tool is potentially dangerous or probably shouldn’t be allowed. I don’t think these are the only rules there should be, but I do think we will have a much better chance of avoiding some of the most apocalyptic futures if our human society makes it a priority to make sure that thinking machines stick to these principles.
1st Law: An AI should never harm a human being or allow one to come to harm.
2nd Law: An AI should never deceive a human being or manipulate their beliefs, emotions, or behavior against their will.
3rd Law: AI should be developed to improve the lives of ALL humans. An AI should never benefit one group of humans to the detriment of another.
The third one may be the most important, but they’re all important. The point of the third law is to minimize the damage that would be done if competing groups of humans fall into an AI arms race. I am aware that some people think they are already in one.
Obviously the first law was lifted directly from Asimov. Why fix what isn’t broken? This one may seem obvious, but it’s pretty clear that there are already people working on systems that will violate it. I am terrified of a future in which every soldier is a swarm and the most devastating weapon of war is the ability to hack the enemy’s weapon systems.
On the other hand, I can imagine scenarios in which we may prefer to violate this principle in order to prevent a greater loss of life, or to protect the innocent from harm. I actually think that Robocops are a great idea. The ability to quickly assess a situation in a crisis would be extremely helpful, and well-trained algorithms could act quickly to mitigate harm. We could choose not to arm them and have them act only as support for human police, or, if the tech is safe enough, it might make sense to send an armed bot into a dangerous situation with a protocol that would minimize the chance of deadly mistakes. A machine doesn’t fear for its life. It may not even have the right to protect itself, only to protect humans.
The second law is the one that is most frequently and obviously violated today. Fake video and audio allow for direct attempts at deception, but even something as simple as targeted advertising could be seen as a violation of this law. If an AI develops psychological tricks that can make someone buy something they don’t want or need, that person may feel victimized. I recently purchased an AI tool from a company that used these kinds of tactics, and I kinda hate them for it. I’ll post in more depth about that later.
There is a reason why I included the caveat ‘against their will’ in this one. I can think of all kinds of ways I would like to set up an AI system to manipulate me to stick to my diet or exercise plan, or to stick to a disciplined writing schedule. Heck, I’d love it if I could get better at keeping up with household chores. Also, targeted ads have a bright side. If the AI system identifies something that can meet my needs that I didn’t know existed, and I’m thrilled with the purchase when it leads to positive changes in my life, then the system worked out well for everyone. I would, however, always want the purchaser to be in control of when to turn it on or off.
So as we craft rules in our society to decide what AI should or should not be allowed to do, I think we’d do well to keep these three laws in mind. Thinking machines have the potential to destroy the world, but also the potential to make it a better place for all of us. I hope the developers keep that in mind.