UK Universities Warned as AI Use Among Students Hits 92%!

Artificial intelligence has firmly embedded itself in the academic routines of UK university students, with a new report from the Higher Education Policy Institute (HEPI) and Kortext revealing that 92% of students now use AI. This is a 26-percentage-point increase from 2024, demonstrating the technology’s rapid normalization in higher education. More significantly, 88% of students report using AI tools like ChatGPT specifically for assessments, up from just 53% last year.

The report’s author, Josh Freeman, remarked on the unprecedented scale of this change: “Universities should take heed: generative AI is here to stay.”

These figures signal a profound shift in how students research, draft, and complete coursework, leaving universities in a race to adapt their policies and assessment methods.

Key Takeaways:
- Academic Integrity Under Scrutiny
- Students Face Contradictory AI Policies
- Digital Divide: AI is Creating Unequal Academic Opportunities
- How Are Universities Responding?
- What Needs to Change?

Academic Integrity Under Scrutiny

With AI’s ability to generate sophisticated responses, concerns over academic dishonesty have intensified. Universities have historically relied on plagiarism detection tools like Turnitin, but AI-generated content often evades traditional detection methods. The HEPI report explicitly warns institutions: “Every assessment must be reviewed in case it can be completed easily using AI.”

A critical finding is that nearly one in five students (18%) admitted to incorporating AI-generated text directly into their assignments. While some universities consider AI assistance akin to using a calculator in math, others treat AI-generated content as potential misconduct. Dr. Thomas Lancaster, a computer scientist at Imperial College London specializing in academic integrity, underscored AI’s irreversible role: “Students who aren’t using generative AI tools are now a tiny minority.”

This rapid normalization of AI in academia has left universities struggling to define clear boundaries between responsible AI use and academic dishonesty.

Students Face Contradictory AI Policies

Despite AI’s near-universal adoption, students report significant confusion over institutional policies. Some universities explicitly forbid AI-generated content, while others encourage responsible use but provide limited guidance. One student voiced frustration over the lack of clarity: “It’s not banned but not advised, it’s academic misconduct if you use it, but lecturers tell us they use it. Very mixed messages.”

This policy ambiguity leaves students uncertain about where acceptable AI use ends and misconduct begins. A Universities UK representative acknowledged these challenges, stating: “To effectively educate the workforce of tomorrow, universities must increasingly equip students to work in a world that will be shaped by AI, and it’s clear progress is being made.” However, some students worry that inconsistent AI policies could lead to unfair accusations of misconduct or give an unfair advantage to students who receive better guidance on ethical AI use.

Digital Divide: AI is Creating Unequal Academic Opportunities

While AI is becoming a standard tool, not all students have equal access to AI-related resources. The report revealed that: “Half of students from the most privileged backgrounds used genAI to summarise articles, compared with 44% from the least privileged backgrounds.”

This statistic underscores a growing digital divide: wealthier students are more likely to use generative AI in their studies, while students from lower-income backgrounds may be at a disadvantage due to limited exposure to AI tools, fewer digital resources, and less institutional support. If unaddressed, this gap could widen existing educational inequalities.

How Are Universities Responding?

While AI adoption skyrockets, the HEPI report found that many universities remain unprepared to handle the shift. A Universities UK spokesperson reiterated that universities are adapting: “All have codes of conduct that include severe penalties for students found to be submitting work that is not their own and they engage students from day-one on the implications of cheating.”

However, experts warn that banning AI outright is impractical, as AI literacy is increasingly becoming a workforce requirement. Dr. Lancaster noted that universities that fail to integrate AI into learning will put students at a disadvantage: “I know some students are resistant to AI, and I can understand the ethical concerns, but they’re really putting themselves at quite a competitive disadvantage, both in education, and in showing themselves as ready for future careers.” Despite institutional efforts, students overwhelmingly feel unprepared for AI’s role in academia and beyond.

What Needs to Change?

The HEPI report calls for universities to take immediate action in response to AI’s growing influence. Among its key recommendations, Josh Freeman, the report’s author, urged universities to collaborate on best practices rather than act in isolation: “Institutions will not solve any of these problems alone and should seek to share best practice with each other. Ultimately, AI tools should be harnessed to advance learning rather than inhibit it.”

With AI fundamentally altering the academic experience, workforce expectations, and the digital skills required for success, universities face a stark choice: adapt quickly or risk falling behind in an AI-driven future.