Artificial Intelligence and Assessment: Are Universities Ready to Rethink Integrity?

09/29/2025

by Chelle Oldham

Image credit: Chelle Oldham

As artificial intelligence continues to reshape higher education, one of the most pressing challenges facing universities is how to respond to student use of AI in assessment. This is particularly urgent in institutions where teaching, learning, and assessment take place entirely online, such as the Open University, where over 150,000 students study primarily remotely. In such environments, students may have fewer opportunities for in-person support or modelling of good academic practices, making it harder to spot – and even harder to prevent – unauthorised use of tools like ChatGPT or AI-enabled paraphrasing software.

But rather than rushing to tighten penalties and escalate misconduct policies, perhaps we need a more fundamental rethink of our sector-wide approach to academic integrity. What if the issue isn’t simply about catching students who cheat, but about failing to teach them well enough in the first place? (Löfström et al., 2015).

A Culture of Punishment or a Culture of Learning?

Many universities still rely on a punitive model of academic integrity: students are held to high standards of referencing and originality without always being given the educational tools to meet them. Academic integrity training is often delivered passively, through optional online resources or dense library guides. As a result, the students who need this support most are the least likely to access it (Sefcik et al., 2020).

If we know students rarely read the policy pages and are unlikely to browse library referencing guides without prompting, should we be surprised when honest mistakes lead to referrals? Or when generative AI becomes the tempting solution to a complex task that students haven’t been taught how to approach?

Educate, Enable, Expect: A New Model for Integrity

Instead of policing from a distance, we could reframe our institutional approach to academic integrity as a developmental journey—one where students are explicitly educated in the principles of ethical academic work; enabled to practise and refine these skills within safe, formative spaces; and then held to high expectations once the groundwork has been laid.

This 'Educate, Enable, Expect' model builds on existing pedagogic literature that promotes scaffolded learning (Vygotsky, 1978) and self-determined motivation (Ryan & Deci, 2000). By giving students structured opportunities to practise citation, paraphrasing, and critical use of sources—even experimenting with AI tools in ethical ways—we reduce the fear of “getting it wrong” and make space for authentic learning.

The Higher Education Policy Institute (HEPI) Student Generative AI Survey 2025 reveals a dramatic rise in AI use among UK undergraduates, with 92% now using AI tools in some form—up from 66% in 2024 (Freeman, 2025). While most students report that AI improves their learning, the great majority (88%) now use generative AI tools for assessments. Yet fewer than half feel supported by their institutions to use AI effectively, and many remain unclear about what counts as acceptable use. The report also notes a persistent digital divide, with wealthier, male, and STEM students more likely to use AI confidently. Although institutions have made progress on policy clarity and detection confidence, the report urges a shift from punitive responses towards supportive, skill-building approaches. Students want more education and access to tools, not restrictions.

The report concludes with five recommendations: teaching students how to use AI responsibly, keeping AI policies under regular review, reassessing all assessments for AI vulnerability, actively upskilling staff in AI literacy, and enhancing cross-institutional collaboration on AI strategy (Freeman, 2025).

Learning Through Failure

Failure is often our greatest teacher. If universities truly want students to succeed with integrity, they need to allow room for initial failure—especially in the grey areas where students are still learning what constitutes acceptable academic behaviour.    

One way to do this is by embedding draft submission opportunities through Turnitin, where students can receive feedback and explore their similarity reports before the stakes are high. Another is by offering live academic skills tuition, rather than expecting self-directed reading of static materials. As one student recently commented in a support session: “I didn’t know it was wrong because I’d never been shown how to do it right.”

Rethinking Responsibility

In a post-AI landscape, universities have a choice. They can respond to student misconduct with tighter rules, tougher penalties, and ever more surveillance—or they can lean into their role as educators. That means acknowledging that ethical learning is not instinctive: it is taught, modelled, practised, and expected.

Perhaps integrity isn’t just about catching rule-breakers. Perhaps it’s about designing educational environments where students want to do the right thing because they understand what that means—and because they were given the chance to learn how.

Call to Action

As artificial intelligence reshapes the academic landscape, higher education institutions must respond with urgency and purpose. Universities need to embed AI literacy—and its ethical dimensions—directly into the curriculum, starting from the very first module of the first year. Students deserve structured, inclusive opportunities to learn not just how AI works, but how to use it responsibly, with academic integrity. If we want students to make ethical choices, we must teach them what integrity looks like in a digital world—clearly, consistently, and from day one.

 

References

Freeman, J. (2025). Student Generative AI Survey 2025 (HEPI Policy Note 61). Higher Education Policy Institute.

Löfström, E., Trotman, T., Furnari, M., & Shephard, K. (2015). Who teaches academic integrity and how do they teach it? Higher Education, 69(3), 435–448. https://doi.org/10.1007/s10734-014-9784-3

Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1), 68–78.

Sefcik, L., Striepe, M., & Yorke, J. (2020). Mapping the landscape of academic integrity education programs: What approaches are effective? Assessment & Evaluation in Higher Education, 45(3), 466–483. https://doi.org/10.1080/02602938.2019.1604942

Vygotsky, L. S. (1978). Mind in Society: The Development of Higher Psychological Processes. Harvard University Press.

 


Dr Chelle Oldham is the University Academic Integrity Co-Lead at the Open University, UK, with responsibility for academic integrity across the institution for all staff and students. Chelle encourages colleagues to embed explicit education and training on academic integrity in their teaching to prepare students for current and future study.

 

The author's views are their own.
