
PhD Student Expelled for Alleged AI Misuse at University of Minnesota!

  • Editor
  • February 20, 2025 (Updated)

Key Takeaways:

  • Haishan Yang is believed to be the first University of Minnesota student expelled over alleged AI use in an exam.
  • The university used ChatGPT-generated comparisons and AI-detection software, despite concerns over their reliability.
  • Yang’s advisor, Professor Bryan Dowd, has defended him, while Yang claims the faculty altered evidence to strengthen their case.
  • Yang lost his student visa and now faces legal and professional setbacks, with his academic career in jeopardy.
  • Yang has filed lawsuits against the university and specific professors, alleging defamation, evidence tampering, and due process violations.

In August 2024, Haishan Yang, a third-year PhD student in health economics at the University of Minnesota, took an eight-hour preliminary exam remotely while traveling in Morocco.

The exam, which students must pass before beginning their dissertation, explicitly allowed books and notes but strictly prohibited artificial intelligence tools.

Yang submitted his answers, believing he had done well. However, weeks later, he was informed that he had failed after faculty reviewers raised concerns about his writing style and content.

They alleged that his responses resembled those generated by ChatGPT and contained concepts not covered in class.

Yang, however, maintains his innocence.

“I did not use ChatGPT on the test,” Yang said in an interview with KARE 11.


The University’s Evidence and AI Detection Methods

The university’s allegations against Yang were based on several key points:

  • ChatGPT-Generated Comparisons – Faculty members entered the exam questions into ChatGPT and found similarities in wording, structure, and content between the AI’s answers and Yang’s responses.

  • Use of an Uncommon Acronym – One point of contention was Yang’s use of the acronym “PCO” (Primary Care Organization), which the university claimed was uncommon in the field but appeared in ChatGPT’s generated response. Yang disputes this, citing numerous academic sources where the term is used.

  • Previous AI-Use Concerns – Faculty referenced a prior incident in which Yang allegedly used AI for a homework assignment. In that case, his submission contained a note that read: “re write it, make it more casual, like a foreign student write but no ai.” While that allegation was dropped at the time, it was raised again during his disciplinary hearing.

  • AI Detection Software – The university ran Yang’s responses through GPTZero, a widely used AI-detection tool. Such tools, however, have been criticized for inconsistencies and high error rates, and experts have questioned their reliability. OpenAI, the developer of ChatGPT, shut down its own detection tool, stating: “Our AI classifier is no longer available due to its low rate of accuracy.”

Yang argues that these detection tools unfairly target non-native English speakers and that his writing style changes depending on the context of the assignment.
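GPTZero has publicly described relying on signals such as perplexity and “burstiness” (variation in sentence structure). The toy sketch below, which is emphatically not GPTZero’s actual algorithm, uses variance of sentence lengths as a crude stand-in for burstiness, illustrating why uniform, formulaic prose, a style common among writers working in a second language, can look “machine-like” to this kind of heuristic:

```python
import statistics

def burstiness(text: str) -> float:
    """Toy 'burstiness' score: population variance of sentence lengths
    (in words). A simplified stand-in for the signals real detectors
    describe, not any tool's actual method. Low scores mean uniform
    sentences, which such heuristics tend to read as machine-like."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.pvariance(lengths)

# Uniform, textbook-like sentences score near zero; varied ones score high.
uniform = "The model is good. The data is big. The test is hard."
varied = "It failed. After weeks of appeals and hearings, the committee reversed nothing at all."
assert burstiness(uniform) < burstiness(varied)
```

The point of the sketch is that a careful human writer with an even, practiced sentence rhythm scores exactly like the “AI-generated” case, which is one reason experts consider such signals unreliable as disciplinary evidence.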


Yang’s Defense: Allegations of Faculty Bias and Evidence Tampering

Yang insists that he did not use AI for the exam and instead believes that some faculty members were biased against him.

His academic advisor, Professor Bryan Dowd, supported him during the university’s review process.

“In over four decades in our Division, I never have seen this level of animosity directed at a student. I have no explanation for that animosity,” Dowd wrote in a letter to the review committee.

Yang also claims that faculty altered ChatGPT’s responses to make them appear more similar to his answers.

He discovered multiple discrepancies between the ChatGPT responses initially shared among faculty members and those later presented as evidence in his disciplinary hearing.

“This is the ethical question: They can keep generating and generating and generating and in some version, ‘Wow, it’s more similar,’” Yang told KARE 11.


The University’s Ruling and Expulsion Decision

A five-member student conduct panel reviewed the evidence and found Yang responsible for academic dishonesty under a preponderance-of-the-evidence standard, meaning the panel concluded it was more likely than not that he had used AI.

The panel unanimously voted to expel him.

The ruling not only ended his PhD studies but also resulted in the loss of his student visa, effectively barring him from staying in the United States.

In response to inquiries, the University of Minnesota cited privacy laws, with spokesperson Jake Ricker stating:

“Federal and state privacy laws prevent the University of Minnesota from public comment on individual student disciplinary actions. As in all student discipline cases, the University carefully followed its policies and procedures, and actions taken in this matter were appropriate.”


Yang’s Legal Battle: Suing for Defamation and Due Process Violations

Following his expulsion, Yang filed two lawsuits:

  • A Federal Lawsuit Against the University – Yang argues that the disciplinary process violated his due process rights and relied on flawed evidence. He is seeking $575,000 in damages and a reversal of his expulsion.

  • A Defamation Lawsuit Against Professor Hannah Neprash – Yang alleges that Neprash manipulated ChatGPT-generated answers to make them appear more similar to his responses. He is seeking $760,000 in damages and a public apology.

Neprash and other faculty members named in the lawsuits have not responded publicly.

The university has stated that it will provide its official response through court filings.


The Bigger Picture: AI and Academic Integrity

Yang’s case is part of a broader trend in higher education as universities grapple with AI’s role in academic work.

The University of Minnesota reported that in the 2023-24 school year, AI-related violations accounted for nearly half of all confirmed academic dishonesty cases on its Twin Cities campus.

Experts have warned that AI detection methods are unreliable and can disproportionately impact certain groups of students.

Reports suggest that these tools have misclassified writing from non-native English speakers, neurodivergent students, and those who use tools like Grammarly or Microsoft Editor.

Stephen Kelly, a project manager for Minnesota State Colleges and Universities, highlighted the challenges universities face:

“It seems like one day when we think we have a good approach to grappling with artificial intelligence, you know, just a week later, something changes, and we’re having to reassess the things that we’re doing.”

Manjeet Rege, director of the Center for Applied Artificial Intelligence at the University of St. Thomas, emphasized the need for universities to rethink their approach to AI in education:

“Rather than relying on AI detection software, educators should focus on designing assignments that are difficult for AI to complete, such as personal reflections, project-based learning, and oral presentations.”

At the University of Minnesota, officials have acknowledged the limitations of AI detection tools. Rachel Croson, the university’s Executive Vice President and Provost, previously stated in a Board of Regents meeting that AI detection should be considered “an imperfect last resort.”

Yang remains outside the U.S., unable to return due to visa restrictions.

He is currently representing himself in court while seeking affordable legal assistance.

The University of Minnesota is expected to file its legal responses in the coming weeks.

Meanwhile, AI’s role in education continues to evolve, leaving institutions to navigate the fine line between upholding academic integrity and adapting to rapidly changing technology.

“The next student could be prosecuted by the same reason. ‘Oh, your answer is so similar to ChatGPT.’ And I think it’s a—we have a deteriorating impact on the learning environment at UMN,” Yang told MPR News.

As AI tools become more prevalent, this case may set a significant precedent for how universities handle AI-related academic misconduct in the future.

