03/30/2026
AI-assisted Grading and Feedback: Why “Approved Tools” Are Not the Whole Compliance Story
By Brooklin Schneider
(Image credit: Author using ChatGPT 5.3)
Lately, I’ve been chatting with colleagues about the ethics of AI-assisted grading and feedback. Questions about using generative AI (GenAI) tools to grade or generate feedback are often motivated by a desire to act ethically and remain compliant with policy. They also reveal uncertainty about copyright and other policy grey zones.
Why AI-assisted Marking Feels Like a Policy Grey Zone
Faculty questions such as “Can I paste a paragraph of student writing into an AI tool to get feedback?” or “Can I use an approved AI tool to grade assignments?” point to a larger issue: most generative AI policies articulate principles of responsible use but provide little contextual guidance for AI-mediated grading and feedback. Instructors are often left to interpret policy on their own.
Research on higher education AI policies helps explain why this occurs. Tsao (2025) notes that many policies foreground principles such as fairness, equity, privacy, and data security but offer limited direction for how those principles should be applied in everyday teaching and assessment practices. Instead, policies function less like rules and more like flexible frameworks that “…encourag[e] pedagogical experiments, but without prescriptive guidelines and clear demarcated or standardised institutional enforcement mechanisms” (p. 2). While this flexibility can support innovation, it also shifts the interpretive burden onto instructors, who must translate broad policy principles into day-to-day decisions about grading, feedback, and student work.
In assessment contexts, student work is not only evidence of learning but also a creative work protected by copyright law. When policy language does not clearly connect these dots for instructors, they are left to figure out compliance questions on their own: “If the AI tool is approved, and I remove student names, IDs, and other personal information, then it’s fine to upload student assignments for grading, isn’t it?” This reasoning is understandable, but it is incomplete.
What “Approved Tool” Compliance Usually Covers
“Approved” typically signals that an institution has completed some form of Privacy Impact Assessment (PIA). For example, the University of British Columbia’s (n.d.) Teaching with GenAI: A quickstart guide for faculty links the acceptability of using GenAI for summative assessment to institutional review and approval processes, and it explicitly flags privacy, data security, and intellectual property as key considerations. Similarly, the University of Regina’s (2025, September) guidelines on using GenAI to support assessment and feedback are framed as guidance intended to support academic integrity, transparency, and student privacy.
These guidelines offer a necessary foundation, but they have limits. Approval and a privacy assessment are not the same thing as permission to reproduce and share student work with a third party. Even if a tool is approved, and even if identifiers are removed, the work itself remains protected, and its use still needs a lawful basis and an ethically defensible rationale.
Student Copyright in Canada: Why Anonymising Is Not Enough
Under Canadian copyright law, the author of a work is generally the first owner of the copyright, subject to exceptions such as works created in the course of employment (Copyright Act, 1985). In the higher ed context, most student assignments will be “works” protected by copyright as soon as they are fixed in some form, including drafts (Copyright Act, 1985).
Copyright ownership matters for AI-mediated assessment because uploading (or pasting) student work into a tool is, at minimum, an act of reproduction within a technical system. Canadian copyright law is clear: the copyright owner controls whether and how a work is reproduced or copied (Copyright Act, 1985).
Canadian copyright also includes moral rights, including the right to remain anonymous or use a pseudonym and the right to protect the integrity of the work. Moral rights cannot be assigned, though they can be waived (Copyright Act, 1985). Stripping the student’s name and ID lessens privacy risk, but it doesn’t resolve authorship rights, moral rights, or whether the student has agreed to the specific downstream uses created by an AI-supported workflow.
The most defensible position for assessment contexts is therefore simple: if an instructor wants to upload student work into a GenAI tool, students should be told in advance and given a meaningful choice, including a non-punitive alternative pathway for grading and feedback.
The Turnitin Disputes: A Cautionary Precedent About Consent and Opt-out
Long-running debates about text-matching systems show how quickly assessment technology can collide with student rights and institutional risk. In 2003, McGill University student Jesse Rosenfeld refused to submit assignments to Turnitin, as required by course policy, and submitted paper copies of his work instead. He received a grade of zero in the course as a result (Canadian Association of University Teachers, 2003, November 1). Rosenfeld argued that because Turnitin stores copies of submitted work, the practice infringed copyright. In an analysis of the dispute, Strawczynski (2004) argues that submitting students’ work to third-party plagiarism-detection services without their informed consent risks infringing their copyright. The same logic now applies to GenAI tools that process student work.
What Current Guidance Recommends For AI-assisted Grading and Feedback
Guidance on AI-mediated assessment converges around a consistent set of principles: transparency with students, meaningful human oversight, strict limits on automation, and explicit permission before uploading student work. The University of Regina’s (2025, September) assessment guidelines place human judgement and accountability at the centre of grading and emphasise that students should be informed when GenAI tools are used in feedback or assessment. They distinguish between appropriate uses, such as drafting formative comments or identifying patterns across submissions, and inappropriate uses, such as assigning final grades or generating feedback without human review. Similarly, the University of British Columbia’s (n.d.) Quickstart guide warns faculty not to submit student work to GenAI tools without permission and highlights intellectual property alongside privacy and data security as key considerations.
A Consent-first Framework for AI-assisted Grading and Feedback
A workable approach must do two things at once: protect student rights and remain realistic for instructors. A consent-first framework minimises risk by reducing how often student work needs to be uploaded and ensuring students are clearly informed when it is.
- Define AI’s role. AI should not assign final grades or make unreviewed evaluative judgements.
- Choose the least intrusive method. Many uses do not require uploading student work, such as improving the clarity or accessibility of feedback you have written, drafting or refining rubrics aligned to learning outcomes, or summarising class-level patterns using your own anonymised notes rather than student text.
- Obtain consent before uploading student work. Removing names reduces privacy risk but does not resolve copyright or provide permission for third-party processing.
- Communicate transparently. Before the assessment is submitted, tell students when, how, and why GenAI may be used and what safeguards ensure human oversight.
- Provide a genuine opt-out. Students should be able to refuse AI-mediated grading or feedback without penalty. Institutions have long implemented similar alternatives for systems such as Turnitin.
- Maintain visible human oversight. Document your assessment workflow and review AI output before returning feedback to ensure accountability and pedagogical legitimacy.
One practical step is to include a short disclosure in your syllabus. Here’s an example, adapted from the University of Regina’s (2025, September) guidelines:
I may use an institutionally approved generative AI tool to help draft feedback on your work, for example to improve clarity and ensure feedback is aligned to the rubric. I remain responsible for all academic judgement, and I review and revise any AI-assisted feedback before returning it to you. Your grade is determined by human assessment aligned to the course criteria.
If I plan to upload any portion of student submissions into a generative AI tool to facilitate timely, clear feedback, I will tell you in advance what will be uploaded, the purpose, and what safeguards apply. You may choose not to consent. If you opt out, your work will be assessed and you will receive feedback through a fully human workflow, with no penalty or disadvantage.
One final caution: even when a tool states it does not train on prompts or inputs, instructors should still treat anything entered into the system as potentially retained under organisational retention practices. In practice, the safest approach remains simple: upload less, summarise more, and keep the human accountable.
References
Canadian Association of University Teachers. (2003, November 1). McGill student penalized for not using Internet plagiarism service. CAUT Bulletin. https://www.caut.ca/bulletin/mcgill-student-penalized-for-not-using-internet-plagiarism-service/
Copyright Act, R.S.C. 1985, c. C-42. https://laws-lois.justice.gc.ca/eng/acts/C-42/
OpenAI. (2026). ChatGPT 5.3 (March 14 version) [Large language model]. https://chatgpt.com/
Strawczynski, J. (2004). When students won't Turnitin: An examination of the use of plagiarism prevention services in Canada. Education & Law Journal, 14(2), 167-190.
Tsao, J. (2025). Trajectories of AI policy in higher education: Interpretations, discourses, and enactments of students and teachers. Computers and Education: Artificial Intelligence, 9, 100496. https://doi.org/10.1016/j.caeai.2025.100496
University of British Columbia. (n.d.). Teaching with GenAI: A quickstart guide for faculty. https://it.ubc.ca/sites/default/files/GenAi_Quickstart_Teaching_V2.pdf
University of Regina. (2025, September). Using generative AI to support assessment and feedback of student work: Guidelines for University of Regina faculty and instructors. https://ctl.uregina.ca/assets/generativeai-for-assessment-guidelines.pdf
Brooklin Schneider is a faculty member and educational developer at NorQuest College in Edmonton, Alberta, Canada, and a researcher and graduate student in the University of Calgary’s Generative AI and Educational Innovation program, currently examining the intersections of academic integrity and generative AI in college and polytechnic settings.

