Key Takeaways:
In August 2024, Haishan Yang, a third-year PhD student in health economics at the University of Minnesota, took an eight-hour preliminary exam remotely while traveling in Morocco.
The exam, which students must pass before beginning their dissertation, explicitly allowed books and notes but strictly prohibited artificial intelligence tools.
Yang submitted his answers, believing he had done well. However, weeks later, he was informed that he had failed after faculty reviewers raised concerns about his writing style and content.
They alleged that his responses resembled those generated by ChatGPT and contained concepts not covered in class.
Yang, however, maintains his innocence.
The University’s Evidence and AI Detection Methods
The university’s allegations against Yang were based on several key points:
ChatGPT-Generated Comparisons – Faculty members entered the exam questions into ChatGPT and found similarities in wording, structure, and content between the AI’s answers and Yang’s responses.
Use of an Uncommon Acronym – One point of contention was Yang’s use of the acronym “PCO” (Primary Care Organization), which the university claimed was uncommon in the field but appeared in ChatGPT’s generated response.
Yang disputes this, citing numerous academic sources where the term is used.
Previous AI-Use Concerns – Faculty referenced a prior incident in which Yang allegedly used AI for a homework assignment.
While that allegation was dropped at the time, it was brought up during his disciplinary hearing.
AI Detection Software – The university ran Yang’s responses through GPTZero, a widely used AI-detection tool.
However, experts have criticized AI-detection tools for their inconsistencies and high error rates, raising concerns about the reliability of such software as evidence.
Yang argues that these detection tools unfairly target non-native English speakers and that his writing style changes depending on the context of the assignment.
Yang’s Defense: Allegations of Faculty Bias and Evidence Tampering
Yang insists that he did not use AI for the exam and instead believes that some faculty members were biased against him.
His academic advisor, Professor Bryan Dowd, supported him during the university’s review process.
“In over four decades in our Division, I never have seen this level of animosity directed at a student. I have no explanation for that animosity,” Dowd wrote in a letter to the review committee.
Yang also claims that faculty altered ChatGPT’s responses to make them appear more similar to his answers.
He discovered multiple discrepancies between the ChatGPT responses initially shared among faculty members and those later presented as evidence in his disciplinary hearing.
“This is the ethical question: They can keep generating and generating and generating and in some version, ‘Wow, it’s more similar,’” Yang told KARE 11.
The University’s Ruling and Expulsion Decision
A five-member student conduct panel reviewed the evidence and found Yang responsible for academic dishonesty under the preponderance-of-the-evidence standard, meaning the panel believed it was more likely than not that he used AI.
The panel unanimously voted to expel him.
The ruling not only ended his PhD studies but also resulted in the loss of his student visa, effectively barring him from staying in the United States.
The university responded with a statement: “Federal and state privacy laws prevent the University of Minnesota from public comment on individual student disciplinary actions. As in all student discipline cases, the University carefully followed its policies and procedures, and actions taken in this matter were appropriate.”
Yang’s Legal Battle: Suing for Defamation and Due Process Violations
Following his expulsion, Yang filed two lawsuits:
A Federal Lawsuit Against the University – Yang argues that the disciplinary process violated his due process rights and relied on flawed evidence. He is seeking $575,000 in damages and a reversal of his expulsion.
A Defamation Lawsuit Against Professor Hannah Neprash – Yang alleges that Neprash manipulated ChatGPT-generated answers to make them appear more similar to his responses. He is seeking $760,000 in damages and a public apology.
Neprash and other faculty members named in the lawsuits have not responded publicly. The university has stated that it will provide its official response through court filings.
The Bigger Picture: AI and Academic Integrity
Yang’s case is part of a broader trend in higher education as universities grapple with AI’s role in academic work.
The University of Minnesota reported that in the 2023-24 school year, AI-related violations accounted for nearly half of all confirmed academic dishonesty cases on its Twin Cities campus.
Experts have warned that AI detection methods are unreliable and can disproportionately impact certain groups of students.
Reports suggest that these tools have misclassified writing from non-native English speakers, neurodivergent students, and those who use tools like Grammarly or Microsoft Editor.
“It seems like one day when we think we have a good approach to grappling with artificial intelligence, you know, just a week later, something changes, and we’re having to reassess the things that we’re doing.”
“Rather than relying on AI detection software, educators should focus on designing assignments that are difficult for AI to complete, such as personal reflections, project-based learning, and oral presentations.”
At the University of Minnesota, officials have acknowledged the limitations of AI detection tools. Rachel Croson, the university’s Executive Vice President and Provost, previously stated in a Board of Regents meeting that AI detection should be considered “an imperfect last resort.”
Yang remains outside the U.S., unable to return due to visa restrictions.
He is currently representing himself in court while seeking affordable legal assistance.
The University of Minnesota is expected to file its legal responses in the coming weeks.
Meanwhile, AI’s role in education continues to evolve, leaving institutions to navigate the fine line between upholding academic integrity and adapting to rapidly changing technology.
“The next student could be prosecuted by the same reason. ‘Oh, your answer is so similar to ChatGPT.’ And I think it’s a—we have a deteriorating impact on the learning environment at UMN,” Yang told MPR News.
As AI tools become more prevalent, this case may set a significant precedent for how universities handle AI-related academic misconduct in the future.