February 2024

On February 23rd, the International Center for Academic Integrity and the National College Testing Association co-hosted a webinar I delivered on the critical roles that testing and academic integrity professionals will play in the future of assessment in the GenAI era. In the webinar, I suggested that higher education institutions leverage GenAI as an opportunity to rethink both what teaching, learning, and assessment look like, and the roles that faculty and professionals play in ensuring integrity in those processes.

There were a lot of questions raised during the webinar that I did not have time to address. So, I am grouping them here according to theme and then answering them in hopes of furthering the conversation beyond just those who attended the webinar live.

Ethics of GenAI Use
Q: When uploading work to AI for checking original work or helping to phrase things better, my understanding is that it then becomes part of the algorithm at that point. Has the person lost control of their original work at that point?
A: I guess it depends on your definition of “control”. I think control of original work is a concept that probably needs to be rethought. But, yes, my understanding is that in some tools you don’t have the option of keeping your work out of the training data set. However, it’s not like giving your work to a human who could then turn it in as their own. GenAI doesn’t work like that, in that it doesn’t reuse your work in its complete form. Think about it this way: it breaks your work down into “more examples of words that can be predicted to follow other words that I will learn from”.
Q: Thinking about your point regarding the moral obligation of instructors, do you have thoughts on faculty using AI generated assignments, etc.?
A: I think faculty should be using GenAI to help them generate assignment ideas, come up with lesson plans, generate assessment questions, etc. I don’t buy the argument that if we’re not allowing students to use GenAI, then faculty shouldn’t be allowed to use it either. Or that if faculty are using it, students should be allowed to. The purposes are different. The purpose of the instructor is to facilitate and assess learning, and they should use whatever tools they have available to help them do that. (NOTE: I think this is better than faculty just using assessment questions out of the textbook’s instructor’s manual, or reusing the same questions over and over again. At least the instructor is involved and, ideally, critiquing the suggestions by GenAI.) The student’s role is to learn and demonstrate their learning; if the student hands in something generated by GenAI, then they’re not doing that. And it’s not just their purposes or roles that are different; students are novices in the discipline and faculty are experts. Thus, faculty can critique chatbot output more robustly and critically than students can; we just can’t assume that novices and experts can interface with these tools in the same way. Regardless, though, both faculty and students can be transparent about their processes and their use of these tools.

The Future Role of Testing Centers
Q: What precedent is there for testing centers within California that broadly serve students in all three systems of higher education, and what are your thoughts on staffing centers with educational specialists who partner with instructors across the systems on the design of their assessments?
A: There is no precedent for that yet. That is something I’m working on. But yes, I would love testing centers to transform into assessment centers that have educational specialists and maybe even psychometricians on staff to help instructors create valid assessments of learning.
Q: What do you recommend for testing centers to upskill to become GenAI assessment specialists?
A: I think current testing center professionals can carve out time to learn how to use GenAI to do simple things like generate multiple exam questions from one set of questions; upskill questions from lower taxonomy levels to higher ones; and revamp assessment questions to make them clearer. Also, I think professionals can learn how to use existing mastery-based testing platforms like PrairieLearn to help faculty design their assessments.

Best Assessment & Integrity Practices
Q: I'm very curious about your advice to large institutions (in the U.S.), where we are teaching 150+ students, online. Ideas for navigating challenges with AI, outside of the typical best practices for assessment?
A: I think that online classes have a particular challenge that is not easy to overcome given the strenuous objections to remote proctoring services. However, generally the advice is the same for online and in-person classes: decide which assessments are for learning and which are of learning, and then secure the assessments that are OF learning. Security would either come in the form of in-person testing at a computer-based testing center or online proctoring. And, of course, instructors should do all of the other pedagogical, assessment, and class design techniques that should enhance intrinsic motivation, increase self-efficacy, and reduce barriers to learning and honest demonstration of that learning (this is what my book with David Rettinger will be about – hopefully released early 2025 by University of Oklahoma Press).
Q: What are your thoughts on balancing the monitoring testing centers do against student privacy concerns/student sensitive data protection? I’m thinking specifically about student fingerprints or facial ID as a means of identity confirmation before an assessment is given?
A: The privacy conversation is interesting to me. How many of us already give our facial ID over to our smart phones or the government, yet balk at doing it for school? Of course we should be sensitive to the data we have and protect that. However, we also should consider the tools we have at our disposal to make sure that we’re certifying the person who demonstrated their knowledge and abilities. We owe it to society to not give our degrees away without this certification. While it’s not always easy to resolve tensions between two values that are both “right” or “good” (e.g., privacy and integrity), I think it’s possible if we engage in thoughtful discussions rather than resort to heated and divided rhetoric.
Q: I have used UDL for years, but find that its flexibility works well if you are in the "light it up" world, but not so much in the "lock it down" one. Many of the best practices seem to be a step backwards (e.g., oral exams). I'm curious, therefore, what you meant when you were emphasizing UDL.
A: I’m curious about your statement that oral exams are a step backwards! I see it as a step forward to remembering that we should be graduating people who have the skills to write AND speak (in some format) about their knowledge. So, I see oral assessment as a critical feature that was eliminated because it didn’t scale with the industrialization of higher education, not because it wasn’t good for learning or evaluating. There are a lot of innovative things happening with oral assessments right now – see this article 'Oral exams improve engineering student performance, motivation' for one example. I am hoping that GenAI tools will help offload background or administrative tasks for faculty and teaching assistants so that they will have time to engage in this way with their students. By emphasizing UDL, I mean that testing centers should think beyond the traditional notion of testing, which gets tagged as not UDL-friendly. We know that frequent testing and mastery-based testing can improve learning, so how can testing centers evolve to facilitate that while being UDL-friendly? Computers and GenAI are key to this. For example, students can choose when to test. Students can test over and over again until they achieve mastery. With computers, we can offer more options for assessment (one student could write, while another could speak their answers). We can reduce logistical rigor while focusing on intellectual rigor. These are my musings at the moment (with the caveat that I am not a UDL expert).
Q: I currently work in a Writing Center, and in having conversations with our tutors, there is a level of resistance around engaging with GenAI tools. They want to have peer-to-peer conversations and feel that these tools will hinder the rapport building they thrive from. What would your response to this be? Should we be “forcing” engagement with these tools given that they are not going away? How do we strike a balance between engagement with these tools without losing what comes from human-human connection?
A: Such a great question! I think writing tutors should definitely be engaged with these tools and learning how to use them because their tutees are using them. They should learn how to identify when a writer might be over-relying on a tool so they can engage with them in a conversation about what is being lost in that process and hopefully convince them to do more of the writing themselves. I guess I would step back and ask “what is the purpose of our tutoring sessions?” or “what is the learning goal?” and then work from there. Once we’ve defined those, we can determine if engaging with these tools would undermine or amplify those goals/purposes, and how. Then, perhaps, we might even decide to structure our tutoring sessions differently in ways that could be better for all involved. But, I’m not a writing tutor expert so I could be way off-base here. 😊
Q: Would you include the ability to let students share their writing process (like revision history) with the likes of oral exams and presentations as a way to further invigilate mastery and summative assessments?
A: Yes and no. Anything "remotely" proctored, which would include Google Doc version history and virtual presentations, is insecure. A student could fake that version history (e.g., retype ChatGPT output instead of copying and pasting, or have another person write the assignment while logged in as them). And, of course, a student could fake a virtual presentation with the AI that now exists. Having said that, I would put those two examples in the bucket of attempting to assess process over product. They are still not in the same invigilated bucket as proctored assessments, but they do raise the barrier for cheating and therefore make it less likely than in assessments that are external and completely unmonitored (e.g., homework).

If you would like to see the slides from the webinar, follow the links on the ICAI webinar page. You can access the recording on YouTube.

Claudine Gay’s recent resignation as Harvard’s president has shed light on the fundamental problem with many institutions’ plagiarism policies and perceptions—they focus more on cheating than on learning. This underscores the need to revise how we view and respond to plagiarism by prioritizing learning, attending to the complexities of rhetorical expectations, and making room for the writing process.

The complaints about Gay’s writing identify instances of plagiarism—particularly passages that don’t meet Harvard’s expectations about quoting or paraphrasing. If Gay had turned in this work as a Harvard student, she would have been subjected to Harvard’s policy that students who submit work “without clear attribution to its sources will be subject to disciplinary action, up to and including requirement to withdraw from the College.” These consequences are consistent with other policies at R1 institutions across the country.

The independent reviewers tasked with investigating the first plagiarism allegations publicized in October didn’t do Gay any favors by identifying these as “a few instances of inadequate citation” instead of acknowledging them as plagiarism according to their institution’s definition of the term. This treatment led an undergraduate who sits on Harvard’s Honor Council to clarify, “There is one standard for me and my peers and another, much lower standard for our University’s president.”

The problem of this double standard lies not in the apparent leniency Harvard showed its president but in the comparative severity with which institutions respond to similar infractions by students. This points to a wider issue with the way plagiarism is framed, defined, and responded to across institutions of higher education. With its etymological connection to the Latin plagiarius—“kidnapper”—plagiarism has consistently been equated with criminality and is often presented as a “problem” to be fixed or punished. This allows campuses to control the writing process by scrutinizing the product. Yet, doing so obscures the goal of writing in the academic context: acquiring and dispersing knowledge. In classrooms, writing’s exigence lies in the learning process and the transferable knowledge and skills that are developed through writing.

Certainly, there are some forms of misconduct that go beyond limited rhetorical awareness (e.g., purchasing or selling an essay, claiming others’ ideas as one’s own). These instances unjustifiably curtail the intended learning process. However, the issue of cheating should be a separate conversation—one that raises investigable questions of motivation and purpose.

As writing center directors at a liberal arts college in central New York, our pedagogy is grounded in helping writers learn through the writing process and navigate the challenges of academic expectations. Higher education institutions need to reframe plagiarism to provide room to focus on rhetoric and learning. As writing scholars Linda Adler-Kassner, Chris Anson, and Rebecca Moore Howard have asserted, “All writers are always in a developmental trajectory.” They need to be provided with productive contexts where they can learn how to work through various rhetorical situations without fear of retribution.

Our call to separate the concept of cheating from plagiarism and to reframe the (mis)management of sources in terms of learnable, generic practices isn’t anything new. However, the circumstances surrounding Claudine Gay bring a new urgency to respond to these concerns. We need learning-centered plagiarism policies: guidelines that address the problem of cheating but separate it from the intertwining realities of originality, compositional labor, rhetorical flexibility, and disciplinary expectations. We need to move away from legalistic positions about plagiarism that seek to assume criminal intent or rationalize ignorance. As learners, writers need to be given opportunities to re-try and revise. Claudine Gay has acknowledged that this is what she has started doing by requesting that the publications allow her to make corrections in her articles. Instead of rejecting writers accused of source mismanagement, institutions should implement policies that invite them to learn, grow, and keep writing.


Attending international academic events nourishes us as students, especially in our disciplines, and helps us broaden our perspectives and networks. But when we get the chance to take an active part in such events and to be heard, it boosts not only our confidence but also our motivation. That was one of the reasons why so many of us were eager to take part in the International Day of Action for Academic Integrity (IDoA). Our IDoA Student Working Group came together to discuss and generate a mindmap and an infographic about the ethical use of Artificial Intelligence (AI).

Another reason for our enthusiasm was involvement in the International Center for Academic Integrity (ICAI) community, which brought us together. We are in different disciplines, degree programs, countries, and even different time zones. Nevertheless, academic integrity as an interdisciplinary issue allowed us to hear voices from other students all over the world. We realized that we have similar concerns, especially about the ethical use of AI, a rapidly changing technology that can appeal to students as a potential new academic tool. The blurred line between engaging with AI and upholding academic integrity was the main concern we focused on, and we shared our own experiences around it.

Normally we have structured frameworks in academia; however, the unpredictable nature of AI requires different institutions to take different approaches towards it. While some are working to integrate AI into their curriculum and teaching approach, others have already been working on AI policies and guidelines. We also realized that different disciplines require different points of view, as the flow of their classes differs. For those of us in the Health Sciences, AI may pose the risk of dangerous misinformation but can be used to help analyse data more effectively. On the other hand, students may not be allowed to engage with AI tools while writing essays in a Foreign Languages department, while in statistics classes students are sometimes encouraged to use AI-generated datasets for practice. That’s why it was so satisfying for us to come together and share our unique perspectives.

Finally, we all reflected that although it is difficult for institutions to draw lines and set rules for the use of AI right now, individuals already know and feel the right thing to do. Most students take pride in creating their own work, following proper citation guidelines, and acknowledging where their sources and inspiration came from. Issues regarding academic integrity usually arise from a lack of knowledge rather than indifference or poor intent; the use of AI only further confuses students who lack guidelines or understanding of when and where it can be used. In practice, what works for all of us is searching for guidelines or seeking assistance from our lecturers, instructors, and those providing academic support when we lack knowledge about the ethical use of AI.

Ultimately, we are grateful that the ICAI brought us together to learn from each other, encouraged us to create, and, by emphasizing the importance of stakeholder roles in academic integrity, helped us put this into practice.

Note: This blog post was authored by students. ICAI takes pride in highlighting student voices as students are a key stakeholder in higher education and the promotion of academic integrity. ICAI does not endorse or advocate for any position or statement made.

Where does Artificial Intelligence (AI) belong in student life? The International Center for Academic Integrity (ICAI) tasked our small group of students from around the globe with tackling this question. Although far from experts, we each had experiences with the challenge of ethically integrating AI into academic life that prompted our interest in joining this discussion.

The diversity of our group was our strength, with members from Canada, Nigeria, and Türkiye, to name a few. We held frequent meetings to share our thoughts and experiences with AI in our academic journeys, discovering several interesting points that united us despite our different geographic contexts.

Recognizing the value of these insights, as the International Day of Action Student Working Group, we produced an infographic highlighting our findings regarding AI and Academics. We hoped that universities across the globe could share our work to provide a better understanding of the place AI has in student life.

The first commonality we discovered was how our different universities incorporated AI into their policies. Although they differed to some extent, depending on the institution, faculty, or personal instructor, they all spoke to the same values upheld by the ICAI. Whether one person’s instructor pushed for honesty in academics or another university’s policies worked to establish a sense of responsibility for one’s learning, they all reflected those ICAI values and established a culture of integrity.

This culture of integrity, we realized, was a fundamental aspect that could help define where AI belonged in student life. Perhaps unknowingly, we each participated in different ways to promote this culture in our universities. Some of us had helped run activities on campus that related to AI and its ethical use in student life, while others had helped to review a written guide for students related to the use of AI in coursework within their faculty.

These experiences taught us the importance of establishing a culture of academic integrity, and we wanted our infographic to reflect the insights we had gained. We wanted to develop and publicize a clear-cut list of Dos and Don'ts about the use of AI that would allow students to understand AI’s place in their lives and how they might individually and ethically utilize it.

Over several weeks, as a working group, we developed an infographic around this list of Dos and Don'ts and the goal of providing students with a clear sense of how to promote a culture of academic integrity. In doing so, we were able to clarify where the increasingly accessible and ever-evolving technology of AI might belong in students’ lives. Moreover, we did it in a way we felt was truly for students, by students.

Looking back, this project was some students’ first chance to work with peers from other universities and internationally. It helped us explore our questions and curiosities about what constitutes appropriate use of AI. We drew on these experiences when speaking at a Student Panel event for the International Day of Action for Academic Integrity, and, dare we say, it shaped how we saw ourselves as student citizens too.

Note: This blog post was authored by students. ICAI takes pride in highlighting student voices as students are a key stakeholder in higher education and the promotion of academic integrity. ICAI does not endorse or advocate for any position or statement made.