Student Livid After Catching Her Professor Using ChatGPT, Asks For Her Money Back

As artificial intelligence use permeates every area of society, a new consensus is emerging: everyone else hates it when you cut corners with the technology.

Many students aren't allowed to use artificial intelligence to do their assignments — and when they catch their teachers doing so, they're often peeved.

In an interview with the New York Times, one such student — Northeastern's Ella Stapleton — was shocked earlier this year when she began to suspect that her business professor had generated lecture notes with ChatGPT.

While combing through those notes, Stapleton noticed a ChatGPT search citation, obvious misspellings, and images with extraneous limbs and digits, all hallmarks of AI use.

"He’s telling us not to use it," Stapleton said, "and then he’s using it himself."

Alarmed, the senior raised the professor's AI use with Northeastern's administration and demanded her tuition back. After a series of meetings that ran all the way up to her graduation earlier this month, the school delivered its final verdict: she would not be getting her $8,000 in tuition back.

Most of the educators the NYT spoke to, who, like Stapleton's professor, had been caught using AI tools like ChatGPT by their students, didn't think it was that big of a deal.

To the mind of Paul Shovlin, an English teacher and AI fellow at Ohio University, there is no "one-size-fits-all" approach to using the burgeoning tech in the classroom. Students making their AI-using professors out to be "some kind of monster," as he put it, is "ridiculous."

That take, which inflates the student's concerns to make her sound histrionic, dismisses another emerging consensus: that people view the use of AI at work as lazy and look down on those who use it.

In a new study from Duke, business researchers found that people both anticipate and experience judgment from their colleagues for using AI at work.

The study involved more than 4,400 people across a series of four experiments, which together found ample "evidence of a social evaluation penalty for using AI."

"Our findings reveal a dilemma for people considering adopting AI tools," the researchers wrote. "Although AI can enhance productivity, its use carries social costs."

For Stapleton's professor, Rick Arrowood, the Northeastern lecture notes scandal really drove that point home.

Arrowood told the NYT that he used various AI tools — including ChatGPT, the Perplexity AI search engine, and an AI presentation generator called Gamma — to give his lectures a "fresh look." Though he claimed to have reviewed the outputs, he didn't catch the telltale AI signs that Stapleton saw.

"In hindsight," he told the newspaper, "I wish I would have looked at it more closely."

Arrowood said he's now convinced professors should think harder about using AI and disclose to their students when and how it's used — a new stance indicating that the debacle was, for him, a teachable moment.

"If my experience can be something people can learn from," he told the NYT, "then, OK, that’s my happy spot."

More on AI in school: Teachers Using AI to Grade Their Students' Work Sends a Clear Message: They Don't Matter, and Will Soon Be Obsolete
