Artificial intelligence1 has existed in the academic integrity space for several years, but the release of OpenAI’s ChatGPT has caused the term to become synonymous with student academic misconduct. In fact, ChatGPT has been banned from New York City and Seattle public schools. Teachers and higher education instructors seemingly face limited options for assessments: they can either get creative with more “authentic assessments” or remove most opportunities to cheat by ensuring all graded assessments are completed in person. This leaves online courses – especially asynchronous courses – at a disadvantage. 

I would liken this to a moral panic. That is not to say there are no legitimate concerns, but one must ask how practitioners and instructors did not see this coming. We already contend with students using online translation tools in foreign language courses and paraphrase mixers in writing-intensive courses. Contract cheating is equally insidious, though perhaps more cost-prohibitive. In other words, dealing with ChatGPT will soon become just another day at the office. Already, private developers have released tools to detect AI-generated text, such as GPTZero, AI Writing Check, CrossPlag, and a classifier from OpenAI itself. Before we know it, plagiarism detection2 providers, like TurnItIn, will have their own AI-writing checks. 

The one positive from having ChatGPT in the national and international news has been the increased focus on academic integrity. Faculty are talking about it in online forums (e.g., Reddit’s r/Professors, Twitter). Administrations are considering updates to their often underutilized academic integrity policies and providing guides to faculty through their teaching & learning centers (e.g., University of Georgia, University of Washington). Regulatory bodies (e.g., TEQSA) are outlining the risks of artificial intelligence. Even students are sharing their opinions online and in their student newspapers (e.g., Georgetown, Washington University in St. Louis, Tulane).

For practitioners, this is an opportunity to continue the dialogue around integrity. We should use this moment to shine a light on the policies that may need an update. Use this global discourse to encourage the development of a culture of integrity on your campuses.

Tell ICAI: is artificial intelligence a friend, foe, or neither? Tweet @TweetCAI or post on Facebook, Instagram, or LinkedIn.


  1. ChatGPT is not artificial intelligence in the strict sense, but rather a natural language processing model. However, artificial intelligence is commonly used to describe ChatGPT, so that is how it will be described in this piece.
  2. Plagiarism detection is not what these programs actually perform; they are more accurately referred to as text-matching software. Again, plagiarism detection is the common term, and will be used here.