On February 23rd, the International Center for Academic Integrity and the National College Testing Association co-hosted me in delivering a webinar on the critical roles that testing and academic integrity professionals will play in the future of assessment in the GenAI era. In the webinar, I suggested that higher education institutions leverage this opportunity of GenAI to rethink both what teaching, learning and assessment look like, and the role that faculty versus professionals play in ensuring integrity in those processes.

There were a lot of questions raised during the webinar that I did not have time to address. So, I am grouping them here according to theme and then answering them in hopes of furthering the conversation beyond just those who attended the webinar live.

Ethics of GenAI Use
Q: When uploading work to AI for checking original work or helping to phrase things better, my understanding is that it then becomes part of the algorithm at that point. Has the person lost control of their original work at that point?
A: I guess it depends on your definition of “control”. I think control of original work is a concept that probably needs to be rethought. But, yes, my understanding is that in some tools you don’t have the option of keeping your work out of the training data set. However, it’s not like giving your work to a human who could then turn your work in as their own. GenAI doesn’t work like that; it doesn’t reuse your work in its complete form. Think about it this way – it breaks your work down into “more examples of words that can be predicted to follow other words that I will learn from”.
Q: Thinking about your point regarding the moral obligation of instructors, do you have thoughts on faculty using AI generated assignments, etc.?
A: I think faculty should be using GenAI to help them generate assignment ideas, come up with lesson plans, generate assessment questions, etc. I don’t buy the argument that if we’re not allowing students to use GenAI, then faculty shouldn’t be allowed to use it either. Or that if faculty are using it, students should be allowed to. The purposes are different. The purpose of the instructor is to facilitate and assess learning, and they should use whatever tools they have available to help them do that. (NOTE: I think this is better than faculty just using assessment questions out of the instructor’s manual of the textbook, or reusing the same questions over and over again. At least the instructor is involved and, ideally, critiquing the suggestions made by GenAI.) The student’s role is to learn and demonstrate their learning; if the student hands in something generated by GenAI, then they’re not doing that. And it’s not just their purposes or roles that are different; students are novices in the discipline and faculty are experts. Thus, faculty can critique the chatbot output more robustly and critically than students can; we just can’t assume that novices and experts can interface with these tools in the same way. Regardless, though, both faculty and students can be transparent about their processes and their use of these tools.

The Future Role of Testing Centers
Q: What precedent is there for testing centers within California that broadly serve students in all three systems of higher education, and what are your thoughts on staffing centers with educational specialists who partner with instructors across the systems on the design of their assessments?
A: There is no precedent for that yet. That is something I’m working on. But yes, I would love testing centers to transform into assessment centers that have educational specialists and maybe even psychometricians on staff to help instructors create valid assessments of learning.
Q: What do you recommend for testing centers to upskill to become GenAI assessment specialists?
A: I think current testing center professionals can carve out time to learn how to use GenAI to do simple things: generate multiple exam questions from one set of questions; upskill questions from low taxonomy levels to high levels; and revamp assessment questions to make them clearer. Also, I think professionals can learn how to use existing mastery-based testing platforms like PrairieLearn to help faculty design their assessments.

Best Assessment & Integrity Practices
Q: I'm very curious about your advice to large institutions (in the U.S.), where we are teaching 150+ students, online. Ideas for navigating challenges with AI, outside of the typical best practices for assessment?
A: I think that online classes have a particular challenge that is not easy to overcome given the strenuous objections to remote proctoring services. However, generally the advice is the same for online and in-person classes: decide which assessments are for learning and which are of learning, and then secure the assessments that are OF learning. Security would either come in the form of in-person testing at a computer-based testing center or online proctoring. And, of course, instructors should do all of the other pedagogical, assessment, and class design techniques that should enhance intrinsic motivation, increase self-efficacy, and reduce barriers to learning and honest demonstration of that learning (this is what my book with David Rettinger will be about – hopefully released early 2025 by University of Oklahoma Press).
Q: What are your thoughts on balancing the monitoring testing centers do against student privacy concerns/student sensitive data protection? I’m thinking specifically about student fingerprints or facial ID as a means of identity confirmation before an assessment is given?
A: The privacy conversation is interesting to me. How many of us already give our facial ID over to our smart phones or the government, yet balk at doing it for school? Of course we should be sensitive to the data we have and protect that. However, we also should consider the tools we have at our disposal to make sure that we’re certifying the person who demonstrated their knowledge and abilities. We owe it to society to not give our degrees away without this certification. While it’s not always easy to resolve tensions between two values that are both “right” or “good” (e.g., privacy and integrity), I think it’s possible if we engage in thoughtful discussions rather than resort to heated and divided rhetoric.
Q: I have used UDL for years, but find that its flexibility works well if you are in the "light it up" world, but not so much in the "lock it down" world. Many of the best practices seem to be a step backwards (e.g., oral exams). Curious therefore what you meant when you were emphasizing UDL.
A: I’m curious about your statement that oral exams are a step backwards! I see it as a step forward to remembering that we should be graduating people who have the skills to write AND speak (in some format) about their knowledge. So, I see oral assessment as a critical feature that was eliminated because it didn’t scale with the industrialization of higher education, not because it wasn’t good for learning or evaluating. There are a lot of innovative things happening with oral assessments right now – see this article 'Oral exams improve engineering student performance, motivation' for one example. I am hoping that GenAI tools will help offload background or administrative tasks for faculty and teaching assistants so that they will have time to engage in this way with their students. By emphasizing UDL, I mean that testing centers should think beyond the traditional notion of testing, which gets tagged as not UDL-friendly. We know that frequent testing and mastery-based testing can improve learning, so how can testing centers evolve to facilitate that while being UDL-friendly? Computers and GenAI are key to this. For example, students can choose when to test. Students can test over and over again until they reach mastery. With computers, we can offer more options for assessment (one student could write, while another could speak their answers). We can reduce logistical rigor while focusing on intellectual rigor. These are my musings at the moment (with the caveat that I am not a UDL expert).
Q: I currently work in a Writing Center, and in having conversations with our tutors, there is a level of resistance around engaging with GenAI tools. They want to have peer-to-peer conversations and feel that these tools will hinder the rapport building they thrive from. What would your response to this be? Should we be “forcing” engagement with these tools given that they are not going away? How do we strike a balance between engagement with these tools without losing what comes from human-human connection?
A: Such a great question! I think writing tutors should definitely be engaged with these tools and learning how to use them because their tutees are using them. They should learn how to identify when a writer might be over-relying on a tool so they can engage with them in a conversation about what is being lost in that process and hopefully convince them to do more of the writing themselves. I guess I would step back and ask “what is the purpose of our tutoring sessions” or “what is the learning goal?” and then work from there. Once we’ve defined those, we can determine if engaging with these tools would undermine or amplify those goals/purposes, and how. Then, perhaps we might even decide to structure our tutoring sessions differently in ways that could be better for all involved. But, I’m not a writing tutor expert so I could be way off-base here. 😊
Q: Would you include the ability to let students share their writing process (like revision history) with the likes of oral exams and presentations as a way to further invigilate mastery and summative assessments?
A: Yes and no. Anything "remotely" proctored, which would include Google Doc version history and virtual presentations, is insecure. A student could fake that version history (e.g., retype ChatGPT output instead of copying and pasting; have another person write the assignment while logged in as them). And, of course, a student could fake a virtual presentation with the AI that now exists. Having said that, I would put those two examples in the bucket of attempting to assess process over product. Still not in the same invigilated bucket as proctored assessments, but they do raise the barrier for cheating and therefore make it less likely than in assessments that are external and completely unmonitored (e.g., homework).

If you would like to see the slides from the webinar, follow the links on the ICAI webinar page. You can access the recording on YouTube.

Claudine Gay’s recent resignation as Harvard’s president has shed light on the fundamental problem with many institutions’ plagiarism policies and perceptions—they focus more on cheating than on learning. This underscores the need to revise how we view and respond to plagiarism by prioritizing learning, attending to the complexities of rhetorical expectations, and making room for the writing process.

The complaints about Gay’s writing identify instances of plagiarism—particularly passages that don’t meet Harvard’s expectations about quoting or paraphrasing. If Gay had turned in this work as a Harvard student, she would have been subjected to Harvard’s policy that students who submit work “without clear attribution to its sources will be subject to disciplinary action, up to and including requirement to withdraw from the College.” These consequences are consistent with other policies at R1 institutions across the country.

The independent reviewers tasked with investigating the first plagiarism allegations publicized in October didn’t do Gay any favors by identifying these as “a few instances of inadequate citation” instead of acknowledging them as plagiarism according to their institution’s definition of the term. This treatment led an undergraduate who sits on Harvard’s Honor Council to clarify, “There is one standard for me and my peers and another, much lower standard for our University’s president.”

The problem of this double standard lies not in the apparent leniency Harvard showed its president but in the comparative severity with which institutions respond to similar infractions by students. This points to a wider issue with the way plagiarism is framed, defined, and responded to across institutions of higher education. With its etymological connection to the Latin plagiarius—“kidnapper”—plagiarism has consistently been equated with criminality and is often presented as a “problem” to be fixed or punished. This allows campuses to control the writing process by scrutinizing the product. Yet, doing so obscures the goal of writing in the academic context: acquiring and dispersing knowledge. In classrooms, writing’s exigence lies in the learning process and the transferable knowledge and skills that are developed through writing.

Certainly, there are some forms of misconduct that go beyond limited rhetorical awareness (e.g., purchasing or selling an essay, claiming others’ ideas as one’s own). These instances unjustifiably curtail the intended learning process. However, the issue of cheating should be a separate conversation—one that raises investigable questions of motivation and purpose.

As writing center directors at a liberal arts college in central New York, our pedagogy is grounded in helping writers learn through the writing process and navigate the challenges of academic expectations. Higher education institutions need to reframe plagiarism to provide room to focus on rhetoric and learning. As writing scholars Linda Adler-Kassner, Chris Anson, and Rebecca Moore Howard have asserted, “All writers are always in a developmental trajectory.” They need to be provided with productive contexts where they can learn how to work through various rhetorical situations without fear of retribution.

Our call to separate the concept of cheating from plagiarism and to reframe the (mis)management of sources in terms of learnable, generic practices isn’t anything new. However, the circumstances surrounding Claudine Gay bring a new urgency to these concerns. We need learning-centered plagiarism policies, guidelines that address the problem of cheating but separate it from the intertwining realities of originality, compositional labor, rhetorical flexibility, and disciplinary expectations. We need to move away from legalistic positions about plagiarism that presume criminal intent or rationalize ignorance. As learners, writers need to be given opportunities to re-try and revise. Claudine Gay has acknowledged that this is what she has started doing: requesting that the publications allow her to make corrections in her articles. Instead of rejecting writers accused of source mismanagement, institutions should implement policies that invite them to learn, grow, and keep writing.


Attending international academic events nourishes us as students, especially in our disciplines, and helps us broaden our perspectives and networks. However, when we get the chance to take an active part in such events and to be heard, it boosts not only our confidence but also our motivation. That was one of the reasons why so many of us were eager to take part in the International Day of Action for Academic Integrity (IDoA). Our IDoA Student Working Group came together to discuss and generate a mindmap and an infographic about the ethical use of Artificial Intelligence (AI).

Another reason for our enthusiasm was involvement in the International Center for Academic Integrity (ICAI) community, which brought us together. We are in different disciplines, degree programs, countries, and even different time zones. Nevertheless, academic integrity as an interdisciplinary issue allowed us to hear voices from other students all over the world. We realized that we have similar concerns, especially about the ethical use of AI, which is a rapidly changing technology that can appeal to students as a potential new tool for use in academics. The blurred line between engaging with AI and acting with academic integrity was the main concern we focused on, and we shared our own experiences with it.

Normally we have structured frameworks in academia; however, the unpredictable nature of AI requires different institutions to take different approaches towards it. While some are working to integrate AI into their curriculum and teaching approach, others have already been working on AI policies and guidelines. We also realized that different disciplines require different points of view, as the flow of classes differs from one to another. For those of us in the Health Sciences, AI may pose the risk of dangerous misinformation but can be used to help analyse data more effectively. On the other hand, students may not be allowed to engage with AI tools while writing essays in a Foreign Languages department, while in statistics classes students are sometimes encouraged to use AI-generated datasets for practice. That’s why it was so satisfying for us to come together and share our unique perspectives.

Finally, we all reflected that although it is difficult for institutions to draw lines and set rules for the use of AI right now, individuals already know and feel what the right thing to do is. Most students take pride in creating their own work, following proper citation guidelines, and acknowledging where their sources and inspiration came from. Issues regarding academic integrity usually arise from a lack of knowledge rather than indifference or poor intent; the use of AI only further confuses students who lack guidelines or understanding of when and where it can be used. What works for all of us, in practice, is searching for guidelines or seeking assistance from our lecturers, instructors, and those providing academic support when we lack knowledge about the ethical use of AI.

Ultimately, we are grateful that the ICAI brought us together to learn from each other, encouraged us by allowing us to produce, and by emphasizing the importance of stakeholder roles in academic integrity, helped us put this into practice.

 Note: This blog post was authored by students. ICAI takes pride in highlighting student voices as students are a key stakeholder in higher education and the promotion of academic integrity. ICAI does not endorse or advocate for any position or statement made.

Where does Artificial intelligence (AI) belong in student life? The International Center for Academic Integrity (ICAI) tasked our small group of students from around the globe with tackling this question. Although far from experts, we each had experiences with this challenge of ethically integrating AI into academic life that prompted our interest in joining this discussion.

The diversity of our group was our strength, with members from Canada, Nigeria, and Türkiye, to name a few. We held frequent meetings to compare our thoughts on and experiences with AI in our academic journeys, realizing several interesting points that united us despite our different geographic contexts.

Recognizing the value of these insights, as the International Day of Action Student Working Group, we produced an infographic highlighting our findings regarding AI and Academics. We hoped that universities across the globe could share our work to provide a better understanding of the place AI has in student life.

The first commonality we discovered was how our different universities incorporated AI into their policies. Although they differed to some extent, depending on the institution, faculty, or personal instructor, they all spoke to the same values upheld by the ICAI. Whether one person’s instructor pushed for honesty in academics or another university’s policies worked to establish a sense of responsibility for one’s learning, they all reflected those ICAI values and established a culture of integrity.

This culture of integrity, we realized, was a fundamental aspect that could help define where AI belonged in student life. Perhaps unknowingly, we each participated in different ways to promote this culture in our universities. Some of us had helped run activities on campus that related to AI and its ethical use in student life, while others had helped to review a written guide for students related to the use of AI in coursework within their faculty.
These experiences taught us the importance of establishing a culture of academic integrity, and we wanted our infographic to reflect the insights we had gained. We wanted to develop and publicize a clear-cut list of Dos and Don'ts about the use of AI that would allow students to understand AI’s place in their lives and how they might individually and ethically utilize AI.

Over several weeks, as a working group, we developed an infographic around this list of Dos and Don'ts and the goal of providing students with a clear sense of how to promote a culture of academic integrity. In doing so, we were able to clarify where the increasingly accessible and ever-evolving technology of AI might belong in students’ lives. Moreover, we did it in a way we felt was truly for students, by students.

Looking back, this project was some students’ first chance to work with peers from other universities and internationally. It helped us to explore our questions and curiosities about what constitutes appropriate use of AI. We drew on these experiences when speaking at a Student Panel event for the International Day of Action for Academic Integrity, and, dare we say, it shaped how we saw ourselves as student citizens too.

Note: This blog post was authored by students. ICAI takes pride in highlighting student voices as students are a key stakeholder in higher education and the promotion of academic integrity. ICAI does not endorse or advocate for any position or statement made.

As Academic Integrity Unit Student Ambassadors at the University of Toronto Mississauga (UTM), we aim to promote academic integrity and excellence across the UTM community. In this role, we actively engage in various campus events to spread the word on what it means to be an academically honest student, offer informative tips to help our peers avoid academic misconduct, and answer student inquiries. After learning about the ICAI’s 2023 IDoA Student Creativity Contest, our team knew that this would be a great opportunity to demonstrate our knowledge of academic integrity and create a valuable resource that could be shared with students worldwide.

The aim of our poster submission was to merge the technological aspect of artificial intelligence (AI) (seen in the chosen graphics) with the ICAI’s six fundamental values of academic integrity – honesty, trust, fairness, respect, responsibility, and courage – and highlight how these values can be applied by students in the age of AI. The poster was created to educate our peers of all different age groups around the globe, empowering them to champion academic integrity within the evolving landscape of AI.
As Generative AI software becomes increasingly accessible to students, it presents both challenges and unique opportunities in the classrooms that we are all a part of. We often observe our peers turning to these tools for assistance with personal and academic tasks because they believe it is a quick and easy way to complete quality work. However, we have learned this is not always the case. Some of our peers fall into the trap of relying entirely on generative AI to complete assignments, neglecting proper citations and taking credit for the ideas generated. This type of AI use is unfair to students who value academic integrity and consistently put hard work into their assignments. We feel that now, more than ever, maintaining academic integrity, despite the temptation to rely on AI, is integral to our educational journey. We believe that understanding the core values of academic integrity can help navigate the use of AI so that it can aid you in your studies, but not harm your learning experience. As academic integrity student ambassadors, we aim to highlight how AI can be used ethically within our classrooms and encourage others to do the same within their own institutions.

In our poster, we included the slogan “you are in charge of your own success” with the intention of emphasizing to students that while AI tools are available, they do not ensure academic success. Instead, we want to convey that showcasing one’s own learning and exhibiting the fundamental values is what will contribute to your success in the long run. We believe that championing academic integrity helps to create a level playing field and enables students to have a fair chance to succeed based on their own learning and effort.

We are delighted to be chosen as this year’s winner for the ICAI’s IDoA Student Creativity Contest in the categories of Best Poster and Best Overall Winner. We look forward to continuing to make a positive change in the world and encourage our fellow students to join us in championing academic integrity in the age of AI.

Note: This blog post was authored by students. ICAI takes pride in highlighting student voices as students are a key stakeholder in higher education and the promotion of academic integrity. ICAI does not endorse or advocate for any position or statement made.

There are only seven weeks until the ICAI’s Annual Conference in Calgary! Some of the upcoming blogs will show the range of programs that attendees will have access to. This week, enjoy a look at several of the posters that you will be able to engage with!

We’ll start out with this poster from Greer Murphy out of the University of Rochester:

As integrity administrators, we may find ourselves combatting perceptions of our role as synonymous with that of classroom ‘cops,’ our policies as needlessly punitive, and any post-responsibility measures we offer as controlling, finger-wagging, and too onerous. By framing academic integrity as a matter of choice, and developing training with autonomy support in mind, we can improve how we support and relate to students. Preliminary results of an intervention offered via five (5) online modules to first-year students enrolled in large lecture courses indicate a small but positive effect. Academic integrity autonomy among students in the treatment group shows a marginally significant positive trend, whereas trends in the control group have not been positive. Despite high standard deviations due to our smaller-than-anticipated sample size, we are confident positive trends will be statistically significant once we build a larger dataset. We are conducting similar experiments in new courses this semester and hope to double or triple our sample size by the end of term. Visit our poster to learn what autonomy-supportive approaches to integrity training look like and discuss how you could implement similar initiatives on your campus!

Next, let’s look at a study by Greer Murphy (University of Rochester) and Courtney Cullen (University of Georgia):

The United States lags behind its global peers in comprehensively researching the current state of academic integrity policy. Using five elements of exemplary policy articulated by Bretag et al. (2011), our study represents the first systematic review of policies in the United States. One hundred institutions from across the country were selected for the study. In addition, we purposefully included institutions from all 50 states and involved institutions from a range of Carnegie Classification and Minority Serving Institution designations. We will share preliminary findings about linguistic accessibility and approach with conference attendees and ICAI members at this session. How do institutions across the United States stack up? Stop by our poster to find out!

There will also be posters presented by students from the University of Georgia. Comfort Ninson & Michael Jacoppo will be previewing a program in development:

What happens when a student is suspended or dismissed due to academic dishonesty? Have you thought of the reasons for these sanctions? The foundation of this new program is helping these students return strong, with renewed energy, readiness, remorse, self-forgiveness, and recovery from shame, prepared to be ambassadors for academic honesty. Keep your eyes open for an A.R.C.H program poster for more details.

Finally, Mary Boyett (a student from the University of Georgia), will be providing a look at Artificial Intelligence detector tools from the student perspective:

With the rise of artificial intelligence in our world, instructors are looking at software such as Turnitin, which labels itself as a generative AI detector. Some institutions, though, have recently banned instructors from using these detection tools. For example, Vanderbilt University found cases of "false positives," which encouraged other universities -- including Northwestern University -- to disable Turnitin's AI detection. Turnitin, however, has its own evidence to show that its false positive rate is not significant enough to completely disregard the tool's effectiveness. Other institutions did not ban detector tools, but did advise against the use of AI detection while considering ethical ways for students to use AI in the classroom.

The poster session spans a broad range of topics, with something for every individual interested in academic integrity. Don't miss out on this opportunity to engage with academic integrity champions from around the world. Register today.

 Thank you for being a member of ICAI. Not a member of ICAI yet? Check out the benefits of membership at https://academicintegrity.org/about/member-benefits and consider joining us by contacting . Be part of something great.

Over two decades ago, Prensky (2001) introduced the term 'digital natives' to describe individuals born into a world of technological marvels, naturally accustomed to the advancements of the era, in contrast to 'digital immigrants,' who must adapt to those advancements. Digital immigrants find themselves standing in the wind created by these digital natives, and to thrive in this dynamic environment, they must adopt the natives' language and welcome the winds of change.

While winds can be disruptive, they also carry the potential for growth, much like the seeds of flowers dispersed by the breeze. The blossoming of these seeds may take time, face obstacles, or land in less-than-ideal conditions. One such seed, ushered in by the wind of change, is the impactful presence of Artificial Intelligence in our education systems. Concerns have been raised about inappropriate use of AI by students, leading to plagiarism and cheating, practices that have persisted for decades, even before AI. The key is not to nurture these harmful practices, but to create conducive environments for the positive impacts of AI.

 Wind of change image

(Note. All of the elements above were created by DALL-E 2 with the prompts: ‘wind cartoon png’, ‘seed scattered with wind png’, ‘A girl holding a seed in her hands and looking it with love and caring in a realistic style’.)

As a student, encountering rules like 'the use of Artificial Intelligence in assignments is strictly prohibited' is disheartening. I see it as a seed carried by the winds of technological change, one that has the potential to bloom into something beautiful. However, for this growth to occur, I require the right conditions: transparent use of AI within ethical frameworks. I need to understand how AI can enrich my imagination, enhance my performance, and contribute to a critical stance. Numerous examples highlight the beneficial uses of AI, such as supporting engagement, improving writing, enhancing critical thinking, and supporting research. In essence, there are vibrant flower gardens where the seeds of Artificial Intelligence, brought by the wind of change, can flourish.

Being a student navigating the digital landscape, I have sown the seeds of change by delving into the realm of Artificial Intelligence for my Master's thesis, with a focus on the ethical use of Artificial Intelligence in the process of academic writing. In this study, I tailored an academic writing rubric, adapting Razı's (2023) work, to get feedback on academic writing from AI. With this study, I aim to contribute the positive impacts as educational material to the "Facing Academic Integrity Threats" Erasmus+ project. In this way, I plan to disseminate a resource that peers, researchers, and educators can use to explore the impact of AI when utilized ethically in academic settings, dispelling the negative reputation surrounding AI in writing and showcasing its potential benefits for all stakeholders.

In essence, my goal is to cultivate the seed I have planted into a flourishing flower of responsible AI use. I believe my peers are also nurturing their seeds, collectively sowing gardens that will shape the ethical use of Artificial Intelligence for future generations of digital natives. Together, we will embrace the ethical horizon and hopefully witness the growth of AI as a valuable ally in our academic pursuits.

Wind of change image 2

(Note. Created by DALL-E2 with the prompt: ‘Young people sowing seeds, watering flowers at a garden in high technology environment in a realistic style.’)

Prensky, M. (2001). Digital Natives, Digital Immigrants: Part 1. On the Horizon, 9(5), 1-6. https://doi.org/10.1108/10748120110424816
Razı, S. (2023). Emergency remote teaching adaptation of the anonymous multi-mediated writing model. System, 113, 102981. https://doi.org/10.1016/j.system.2023.102981

There was a first for academic integrity recently when the pioneering ACARI Conference was inaugurated to promote academic integrity in Asia, the Middle East and Africa. Hosted by Middlesex University Dubai from December 17–19, 2023, the event was co-founded by Dr Zeenath Reza Khan, Dr Salim Razi, Dr Shahid Soroya and Dr Muaawia Hamza, and chaired by Dr Sreejith Balasubramaniam. Getting a major new international event off the ground, and joining up an academic and research community across huge regions to provide a voice for diverse speakers, is a significant achievement that all those involved are undoubtedly and deservedly proud of. It was great to see speakers from countries including Pakistan, India, Bangladesh, Georgia, Iran, Turkey, UAE, Saudi Arabia, Qatar and Morocco contributing to this event.

I participated and presented remotely on ‘Using Universal Design for Learning principles to improve inclusion in academic integrity policies, procedures and teaching’ (see my blog from December 11, 2023) in the ‘Academic Integrity and Students’ strand. I was fascinated to hear of very diverse approaches to academic integrity related to student sanctions, use of artificial intelligence and detection tools in the other sessions in the strand given by presenters from India and Pakistan.

As a remote attendee in a different time zone, I could not attend the whole conference, so I will focus on a few highlights. Prof Ann Rogerson (University of Wollongong, Australia) gave a stirring keynote session on ‘The Importance of understanding transitions and disciplinary norms for assessment, research and educational integrity’. Something I really agreed with her about was the argument that students need support with transitions at every level, not just the high school to undergraduate stage, but also at further degree levels, especially for international students who come with different prior learning. She explained that at UoW, students must complete a 4-hour academic integrity module in order to get access to their grades, and that a low penalty of -5% for first minor breaches led to limited repeat transgressions. Again, an area of great interest!

I really enjoyed the panel discussions with speakers from institutions in Asia, the Middle East and Africa. One was entitled ‘Global Education, Local Values’, moderated by Dr Salim Razi from Çanakkale Onsekiz Mart University, Turkey. The panel discussed the importance of institutional structures, such as the library, centres for academic success, and quality assurance, joining up to support learning about academic integrity. One member noted a gap between student and faculty beliefs about acceptable practice, and the reasons students give for transgressions, such as feeling unable to say things better than experts and therefore needing to copy. These are familiar challenges!

I attended a great workshop by Dr Sonja Bjelobaba from University of Uppsala, Sweden on ‘Teacher Training in Ethics and Integrity for classrooms in the age of AI’ in which she used Menti to facilitate a very lively discussion among delegates on ethical decision making by students and staff regarding artificial intelligence. Is it ethical for staff to use AI to write feedback to students? This generated a lot of debate!

We are now preparing for the forthcoming ICAI 2024 Annual Conference – look out for blogs leading up to it!

It is the start of the new year. Since Boxing Day, you have likely been inundated with new things to try for the New Year, New You! Unlike those cosmetic products, diet pills, and workout devices, I am not going to ask you to make major changes in your practice today. Instead, I am going to tout the same old strategy to help reduce academic misconduct: communication. Communication should happen in your course syllabus, on every assignment, and throughout your pedagogy to clarify the why. Let's take a look back at some of the ways communication has been promoted through this blog:

  • Syllabi Design: though posted at the height of the COVID-19 pandemic, clarifying course expectations around academic integrity continues to be critical in helping maintain academic integrity in the classroom. The syllabus is more than just a laundry list of due dates for students; it's the contract for the course. (Also, if you have already started ungrading, keep it up!)

  • Setting Expectations: save yourself and your students the confusion of muddling through. Set clear expectations early and communicate those expectations often.

  • Don't reinvent the wheel: check out the interventions provided by your institution. You do not need to create anything from scratch; let your administration do it for you! Use what already exists to educate your students on academic integrity.

  • Be consistent: Reporting academic misconduct is critical, but so is how you approach these conversations.

What strategies are you continuing this year? Tell us by tweeting @TweetCAI.