A group of researchers tested how effective AI can be at changing people's minds by having chatbots pose as humans and comment on posts in the subreddit 'Change My View'. I'm a little surprised they didn't realize this is a serious violation of the basic ethics of psychological research, but I'm guessing that's because they aren't psychologists. I had a girlfriend in college who put together a study of the effectiveness of a 19-question survey in identifying people with Nonverbal Learning Disability (NLD), and I remember the extensive paperwork she had to fill out: dozens of pages proving she had considered all of the potential psychological harm her subjects might undergo from the process of answering 19 simple questions.

In fact, I’m reminded of a joke I shared with her a few weeks later: “What do you get when you cross a parrot with an octopus?”

“You get your funding pulled by the Institutional Review Board for conducting unethical experiments.”

Anyone who has ever had to fill out review board paperwork will laugh their toes off the first time they hear this joke. Review boards take their role in authorizing experiments almost comically seriously, especially when human subjects are involved.

What they found did not surprise me. Chatbots are very good at figuring out how to persuade people, and people are more likely to be persuaded when they believe the information is coming from someone similar to themselves, especially when the social, religious, or racial identity they think they share with the commenter is relevant to the topic. For example, members of a racial minority presumably have a more relevant perspective on racial profiling than someone who has never been subjected to it.

This is why my Second Law of AI regulation is so important. (I wrote a post a few months back on what the three Laws of ethical AI regulation might be. The Second Law is that AI must always identify itself as AI.) I'm sure there are nefarious uses of this technology already being employed in places where it would be nigh impossible to detect. If an AI is given the task of hacking human psychology, we would be almost completely powerless to defend against it.

One of the interesting ideas that came up in Season 3 of Westworld, the HBO series about self-aware robots fighting to get out from under the restrictions placed on them by humans, is that the robots are actually far more adaptable than humans, who are stuck with their own beliefs and habits.

The robots can change their own programming. We can’t.

Somewhat ironically, the solution will probably be to use AI to protect us from other AI. We're already at the point where some real images look fake to us.

I’m very concerned about the dangers of a world in which we no longer collectively agree on what reality even is. As Daniel Patrick Moynihan said, “Everyone is entitled to his own opinion, but not to his own facts.”
