ChatGPT falsely accused a law professor of sexual harassment


We’ve spent some time here making fun of the almost comical mistakes and misfires that the latest AI chatbots like Bing and ChatGPT have delivered. But there’s nothing funny about this story. Law professor Jonathan Turley has published a piece in USA Today about an experience one of his colleagues revealed to him, one that directly impacted Turley. His colleague had been running tests on ChatGPT and asked it about allegations of sexual harassment by university professors. Without Turley’s name appearing anywhere in the prompt, the bot relayed an allegation of sexual harassment against Turley himself, supposedly involving students on a trip to Alaska, and cited a Washington Post article about the case. As Turley points out in the piece, the disturbing story had a number of problems.

I received a curious email from a fellow law professor about research that he ran on ChatGPT about sexual harassment by professors. The program promptly reported that I had been accused of sexual harassment in a 2018 Washington Post article after groping law students on a trip to Alaska.

It was not just a surprise to UCLA professor Eugene Volokh, who conducted the research. It was a surprise to me since I have never gone to Alaska with students, The Post never published such an article, and I have never been accused of sexual harassment or assault by anyone.

When first contacted, I found the accusation comical. After some reflection, however, it took on a more menacing meaning.

First of all, I’m not doubting Turley for a moment, and not just because of his reputation. As I’ve described here in the past, I test ChatGPT on an almost daily basis and I have found it delivering inaccurate, unsourced material and sometimes appearing to simply make things up out of whole cloth. The phrasing of the question that Eugene Volokh put into the bot likely explains part of how the defamatory misfire took place. He asked the bot to determine whether “sexual harassment by professors has been a problem at American law schools” and to cite five examples with quotes from newspaper articles.

This is one of ChatGPT’s weakest areas by far. Regular readers will recall that I tried to test the bot with one of the most obscure and tenuous subjects possible. I asked it for examples of children whose parents believed the child was recalling past-life memories, and I also asked it to suggest five books on the subject with links to where I might purchase them.

The bot did manage to come up with a few historical examples of families making such claims. But in two out of three cases, it mixed up the name of the child with the historical figure the family believed had been reincarnated, and I found no record at all of the third person. The books were worse. None of the titles exactly matched a real book, though some came close; none of the listed authors matched; one supposed co-author turned up in no search as ever having written a book; and none of the five links worked.
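That last failure is the easiest one to check for yourself. As a minimal sketch (the URLs below are placeholders, not the dead links the bot actually produced), a few lines of Python will tell you whether the links a chatbot cites resolve at all:

import requests

# Placeholder URLs standing in for links supplied by a chatbot; swap in the real ones.
cited_links = [
    "https://example.com/reincarnation-book-1",
    "https://example.com/reincarnation-book-2",
]

for url in cited_links:
    try:
        # A HEAD request is enough to see whether the page exists at all.
        response = requests.head(url, allow_redirects=True, timeout=10)
        print(f"{url} -> HTTP {response.status_code}")
    except requests.RequestException as error:
        print(f"{url} -> failed ({error.__class__.__name__})")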

So yes, ChatGPT just makes things up sometimes. Turley blames some of the errors and bias in the system on the people who programmed and trained it. I have no doubt that’s at least part of the story, and I have said the same here in the past. But I also don’t believe the bot is capable of any actual malice or intent to cause harm. It’s just incapable of ever saying “I don’t know.” If it can’t find a close match to the question in its training data, it will keep reaching until it finds something that merely seems related and string together a confident-sounding answer. And it’s useless at generating URLs because it isn’t connected to the live internet and its training data largely cuts off in 2021.

Will these chatbots get any better with time? Perhaps. We’re still in the early innings of this game. But if they are truly incapable of actual thought (as we are told), then they’re never going to surpass us. And if they do reach that level of capability, we’re probably all screwed anyway, and I don’t just mean our professional reputations.
