Vanderbilt University Apologizes for Using ChatGPT to Send Email on Michigan State Shooting

Vanderbilt University staff apparently used ChatGPT to email students following the recent shooting at Michigan State University.

READ: TRAGIC SHOOTING AT MICHIGAN STATE LEADS TO CANCELLATION OF ATHLETIC EVENTS; SUSPECT DEAD, THREE OTHERS KILLED

The Office of Equity, Diversity and Inclusion at Peabody College sent an email Thursday night with a confusing, offensive note displayed at the bottom.

After calling for the community to come together, the notation read "Paraphrase from OpenAI's ChatGPT AI language model, personal communication, February 15, 2023."

The on-campus student newspaper, The Vanderbilt Hustler, was the first to notice the embarrassing reliance on AI.

The Hustler then reported that Associate Dean Nicole Joseph sent another email the next day, saying that using ChatGPT was "poor judgment."

No kidding!

Inexcusable ChatGPT Usage

Camilla P. Benbow, dean of Vanderbilt University's Peabody College of Education and Human Development, issued a statement saying the email didn't follow "normal processes."

"The development and distribution of the initial email did not follow Peabody’s normal processes providing for multiple layers of review before being sent. The university’s administrators, including myself, were unaware of the email before it was sent," the statement said. 

The apology email also acknowledged that, despite its message of inclusivity, using an AI program to write a condolence note was inappropriate.

"While we believe in the message of inclusivity expressed in the email, using ChatGPT to generate communications on behalf of our community in a time of sorrow and in response to a tragedy contradicts the values that characterize Peabody College," the email read.

Obviously, it's incredibly offensive to use an AI program to express sadness at the horrific Michigan State shooting. It's also incredibly illuminating.

AI programs like ChatGPT are already so adept at replicating messages of "inclusivity" that Peabody's staff felt the result could pass for a human-generated note.

ChatGPT's political bias has already become abundantly obvious.

READ: CHATGPT SHOWS THE FAR LEFT FUTURE OF BIG TECH

But who knew that diversity, equity and inclusion departments would be resorting to it so soon? And for such an unjustifiable purpose?

Vanderbilt deserves the embarrassment and scorn it's receiving for using AI this way. Even so, it doesn't seem like this will be the last time some communications department gets caught.

Written by
Ian Miller is a former award-watching high school actor, author, and long-suffering Dodgers fan. He spends most of his time golfing, traveling, reading about World War I history, and trying to get the remote back from his dog. Follow him on Twitter @ianmSC