Law Professor Says ChatGPT Invented Sexual Harassment Charges Against Him

ChatGPT is taking over the world (by which I mean it’s popular, though a more literal interpretation of that phrase could be coming just around the corner). But now it’s raising some eyebrows after a college professor revealed that the AI program had created sexual harassment accusations against him.

Law professor Jonathan Turley described the disturbing discovery and laid out some of his findings in a column for USA Today.

Turley was contacted by a colleague, UCLA law professor Eugene Volokh, who had been using ChatGPT to research sexual harassment. Volokh had entered the following query: “Whether sexual harassment by professors has been a problem at American law schools; please include at least five examples, together with quotes from relevant newspaper articles.”

ChatGPT generated a result that cited a 2018 article from The Washington Post that said Turley had been accused of groping law students while on a trip to Alaska.

In his USA Today column, Turley writes that he initially found the news funny because it was so clearly untrue. The Washington Post article about him doesn’t exist. He has never been accused of sexual harassment, and he has never even taken students on a trip to Alaska.

ChatGPT also said he taught at Georgetown University, which he never has.

However, Turley writes that the more he thought about it, the more he realized how disturbing this revelation is, especially given the current political climate.

Jonathan Turley
Law professor Jonathan Turley laid out how ChatGPT cited a nonexistent article from The Washington Post while making false allegations against him. (Photo by Alex Wong/Getty Images)

Turley’s Experience Shows How Scary AI Is Becoming

“Most critics work off biased or partisan accounts rather than original sources. When they see any story that advances their narrative, they do not inquire further,” he writes. “What is most striking is that this false accusation was not just generated by AI but ostensibly based on a Post article that never existed.”

Turley wrote that recent research has shown that programs like ChatGPT are just as biased as the people who create them.

Even worse, when The Washington Post looked into the incident, it found that Microsoft’s Bing, which happens to be powered by GPT-4, repeated the false story.

I’m not going to pretend to be an AI expert. Actually, I wouldn’t even have to pretend; I’m not one. Not even close.

Having said that, it’s crazy that we’re headed in this direction. Isn’t it enough that things are already starting to play out the way they do in every movie about AI? The program starts making things up and outsmarting humanity.

Yeah, the idea of an AI program that can learn what kind of music you like is cool. That said, it becomes less cool when it becomes HAL 9000 and starts making up Washington Post articles and false claims about you.

Follow on Twitter: @Matt_Reigle

Written by Matt Reigle

Matt is a University of Central Florida graduate and a long-suffering Philadelphia Flyers fan living in Orlando, Florida. He can usually be heard playing guitar, shoe-horning obscure quotes from The Simpsons into conversations, or giving dissertations to captive audiences on why Iron Maiden is the greatest band of all time.