May 2023

During UC San Diego’s Virtual Symposium on “The Threat & Opportunities of Artificial Intelligence and Contract Cheating: Charting a Teaching & Learning Path Forward”, Guy Curtis gave a talk on the Scale of Contract Cheating. This blog post is a follow-up to that talk.


Text-matching software has made blatant copy-paste plagiarism almost impossible for students to get away with undetected. Contract cheating - the outsourcing of assignments to third parties like essay mills - seems like a logical alternative for lazy or time-poor students. When assignments are written afresh, so long as the writer has not plagiarised, they can elude text-matching software, and graders are usually unable to detect that the assignment was not written by the student whose name is on the front.

Despite the ease and lack of detection of contract cheating, it seems that very few students do it. In a 2017 mini meta-analysis conducted with my colleague Joe Clare, we found that only 3.5% of students admitted to contract cheating, something that text-matching software would not detect, yet over 20% of those same students admitted to plagiarism, which text-matching software would easily detect.

These findings led us to an interesting new research agenda: "Why don't students engage in contract cheating?". Luckily, we had an exceptional honours student who wanted to work with us named Kiata Rundle. Working with Kiata in 2018, we developed a list of reasons why students may not engage in contract cheating. This list drew on a focus group, relevant literature, and our own expertise in psychology, academic integrity, and criminology. Then, we gave this list of reasons to a large group of students, asking them to indicate their strongest reasons for not engaging in contract cheating. We also gave them an opportunity, in an open-ended question, to tell us any other reasons they had for not engaging in contract cheating. Finally, we measured various aspects of the students’ personality that we thought might predict their reasons for not contract cheating. The resulting study, published in Frontiers in Psychology in 2019 (and recognized in the ICAI Reader (2nd ed) as a foundational and influential research piece), presented the main reasons students refrain from engaging in contract cheating as falling into five broad categories: morals and norms; motivation for learning; fear of detection and punishment; self-efficacy and (mis)trust; and lack of opportunity. These five categories are listed in the order of importance that students, on average, assigned to them as reasons for not engaging in contract cheating.

Our latest study (stemming from Kiata's Ph.D.) continues this line of research. "Why students do not engage in contract cheating: a closer look" (which will be available June 16th in the International Journal for Educational Integrity at this link), addresses inconsistencies and unanswered questions concerning the psychological variables we measured as predictors of students’ reasons for not engaging in contract cheating. We added new measures to our overall survey to include a more reliable measure of a critical set of personality dimensions (the Dark Triad: Machiavellianism, psychopathy, narcissism), a measure of academic self-efficacy, and a measure of satisfaction and frustration of the psychological need for autonomy.

We also thematically analysed students' open-ended responses (obtained in the previous study) and reviewed subsequently-published literature on contract cheating to update the list of reasons for not engaging in contract cheating. With this updated list, we identified a sixth category that we called “academic environment”. The explicit reasons for not cheating that were part of this category included statements such as “I have respect for my lecturer” and “I believe that marking is fair”. Academic environment was rated as a more important category of reasons for not cheating than fear of detection and punishment, self-efficacy and (mis)trust, and lack of opportunity.

There is a lesson for higher education providers in the finding that the academic environment is an important reason for not contract cheating. Specifically, when people who teach within higher education put in effort, attempt to be fair, and are fair, students will reciprocate these efforts by being more likely to act with integrity. We think the same lesson can be drawn from the findings of Bretag et al.’s (2019) large-scale survey of contract cheating, which found that dissatisfaction with the learning and teaching environment was related to engagement in contract cheating. We should quickly add, however, that we do not believe there is a silver bullet that will slay contract cheating. Nonetheless, being mindful of good practice in teaching and assessment clearly can help.

In addition to this new finding concerning student reasons for avoiding contract cheating, we also found some interesting new results regarding the psychological predictors of students’ reasons for not engaging in contract cheating. Specifically, when students’ psychological needs for autonomy are satisfied in the educational context (that is, they feel they have more choice and control), they are more motivated to learn and this motivation for learning is a justification for not cheating. Again, the lesson for educators is clear. Give students some scope to make their own choices and pursue their interests because they are less likely to cheat on the things they want to do for themselves.

Kiata, Joe, and I have conducted further studies examining students’ reasons for cheating and for not cheating, and the psychological profiles that influence these reasons. We hope to submit these studies for peer review soon so that we can share more of our interesting findings with the academic integrity community. As the TV host Rachel Maddow says, “watch this space”.

Since the launch of ChatGPT in November 2022, I have been immersed in studying generative artificial intelligence (GenAI) and its potential impact (positive and negative) on higher education. Obviously, given my position as the Academic Integrity Office Director at the University of California, San Diego, I am particularly interested in the impact that GenAI has, and will have, on academic integrity. I have had to figure out how to answer questions from faculty on how to prevent cheating with GenAI, how to talk to students about academic integrity in the era of GenAI, and how to document cases of integrity violations involving GenAI.

However, those who know me and my writings understand that I see academic integrity as a teaching and learning issue, not as a student conduct issue. So, my interests in GenAI go beyond its impact on student behaviour to its impact on teachers. And it is within that interest that I recently finished Dan Fitzpatrick’s (with Amanda Fox and Brad Weinstein) book “The AI Classroom: The Ultimate Guide to Artificial Intelligence in Education”. Even though the book was written with K-12 teachers in mind, I wanted to delve into it for two reasons: 1) K-12 teachers are trained to teach, and thus have a lot more pedagogical and assessment knowledge than do most HE teachers (which means we have a lot to learn from them); and 2) it was touted as a very practical book to enable teachers to go from zero to hero in their use of GenAI in education.

I was not disappointed. Many of the other books I’m reading (like Christian’s “The Alignment Problem” or “Rebooting AI: Building Artificial Intelligence We Can Trust” by Marcus & Davis) are theoretical or focused on building an understanding of GenAI and its broad implications for society. So, “The AI Classroom” provides a nice change of pace: practical, specific, illustrative, and encouraging.

After a quick backgrounder on Artificial Intelligence in Part I, Fitzpatrick and colleagues drop the reader in Part II directly into how to use GenAI to make teaching and teaching prep more efficient, as well as to make learning for the students more engaging and inclusive. Then, in Part III, the authors review current AI tools that can be used by teachers and students, from tools that serve as educational platforms to those that assist with research, converting text-to-audio (and image, and video, and 3D and code), and, of course, AI chatbots (other than ChatGPT). Finally, in Part IV, the authors walk us through what educational leaders need to think about and do in the face of the “AI Revolution” and they muse about the future of Artificial Intelligence and the future of education.

While the examples and illustrations are all geared to K-12 teachers, the key lessons are easily transferable to the higher education environment. Following are three highlights for higher education faculty that I took away from this book:

  1. “Outsource your doing, not your thinking.” (p. 33). When considering if and how GenAI should be used in your teaching, make a list of all of the things you have to do on the way to actual engagement with your students in the learning environment: craft learning objectives, design your lessons, create learning activities, develop assessments, evaluate assessments, and provide feedback on student learning. GenAI can help you do all of these things. However, “outsource your doing, not your thinking” also provides guidance for faculty in thinking about when they should allow students to use GenAI in their learning. If students can outsource some of the doing behind learning and thinking (e.g., formatting citations; conducting research), then can we free up their minds for higher order thinking? And for those faculty worried that GenAI tools like ChatGPT will “hinder student development”, the authors argue that student “development is already hindered” by an educational system designed for a different world than the one that students will inhabit as professionals (p. 47). The key is to figure out how we can “design learning…to ensure students’ knowledge and skills are developed” (p. 77). For me, this means figuring out how we can use GenAI to help us, or free up time to enable us, to assess process (the doing), rather than the product (e.g., the essay). We have relied on products for a long time in higher education, especially as we became more industrialized and routinized as our classes grew larger and larger. However, products are meant to be learning artifacts, and so if the integrity of the learning artifact cannot be assured, then process is what we need to focus on.
  2. Use an AI Learning Framework (p. 80) with GenAI to develop in students the human skills that cannot be outsourced to machines. The authors’ argument here is really about leveraging GenAI to make learning more active and engaged. Active and engaged learning is typical at the primary school level, but as students progress up the educational ladder, school tends to get less and less engaging. By the time they reach college or university, readings, podcasted lectures, homework, and exams are the typical fare. Instead, faculty could leverage GenAI to ignite student curiosity, then guide them in asking questions about the topic, engage them in the topic through discourse (with a GenAI tool, their peers, and the instructional team), create opportunities for students to think critically about the content, and then apply their knowledge in authentic or meaningful assessments. GenAI can help facilitate all of these steps in authentic and active learning, essentially experiential learning, which is a powerful and natural way to learn and develop.
  3. Develop your skills at Promptcrafting (p. 90). Getting GenAI to do what you want it to do can be tricky. So, learning how to prompt and ask questions of GenAI is a valuable skill for both faculty and our students. Fitzpatrick and colleagues provide a simple and effective framework for prompting GenAI known as P.R.E.P.: Prompt the machine, give it a Role or voice, be Explicit in your instructions, and set the Parameters for the answer. I found this framework genuinely helpful, especially because the authors then provide template after template for doing this, as well as illustrative examples of the templates-in-action, showing us the prompt and the resulting ChatGPT output. You can use PREP for yourself, but also teach it to your students to help them use GenAI more effectively.
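To make the P.R.E.P. structure concrete, here is a minimal sketch of how the four components might be assembled into a single prompt programmatically. The helper function and its field layout are my own illustration of the framework described above, not code from the book.

```python
# A minimal sketch of the P.R.E.P. prompting framework.
# The helper name and layout are illustrative, not from the book.

def prep_prompt(prompt: str, role: str, explicit: str, parameters: str) -> str:
    """Assemble the four P.R.E.P. components into one prompt string."""
    return "\n".join([
        prompt,             # P: the task you want done
        f"Act as {role}.",  # R: the role or voice to adopt
        explicit,           # E: explicit instructions for the output
        parameters,         # P: parameters bounding the answer
    ])

request = prep_prompt(
    prompt="Write a short reading passage about photosynthesis.",
    role="a first-year university biology instructor",
    explicit=("Follow the passage with three multiple-choice questions, "
              "each with four options and the correct answer marked."),
    parameters="Keep the passage under 200 words at a first-year reading level.",
)
print(request)
```

The same structure works whether you paste the assembled text into a chatbot or send it through an API; the value is in forcing yourself to state the role, the explicit instructions, and the parameters every time.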

To illustrate, here’s how a professor at the college level might use PREP to create course content. Below is an image of a prompt I gave to ChatGPT-4 based on the template Fitzpatrick and colleagues provide on p. 132.

[Image: ChatGPT prompt for May blog]

 Next is an image of the first paragraph and MC question that ChatGPT generated in response to my prompt.

[Image: ChatGPT output 1 for May blog]

And after generating the next two paragraphs and MC questions (not shown), ChatGPT fulfilled the rest of the request as follows:

[Image: ChatGPT output 2 for May blog]

Not bad, and all in 20 seconds! As the authors note, with any GenAI output, you will want to evaluate it, and teach your students how to evaluate it, before using it. And the authors provide another framework for doing just that: E.D.I.T. Evaluate the content for language, facts & structure; Determine accuracy and corroborate sources; Identify biases and misinformation; and Transform the content to reflect any of your own adjustments (p. 100).

The book is filled with example after example and template after template for using the PREP and EDIT frameworks to quickly and easily make GenAI your teaching assistant, all of which are easily adaptable to the higher education setting.


There are over 300 pages in this book, with nuggets of wisdom on almost every one, so I have only scratched the surface in this blog post. The good news is that the book is a quick read and provides an easy-to-navigate, just-in-time resource for college and university faculty who want to begin to use GenAI in their teaching but don't know where to start. And, since it sells for under $30 USD, “The AI Classroom: The Ultimate Guide to Artificial Intelligence in Education” is a great purchase overall. If you are a higher ed faculty member, instructional developer/designer, or academic integrity expert, I highly recommend adding it to your summer reading list. You may be surprised at how easily and quickly you can begin to play around with, and plan for, how GenAI can help you teach, assess, and engage with students in learning when the fall term arrives.

This blog post is a written version of my opening remarks for UC San Diego’s Virtual Symposium on “The Threat & Opportunities of Artificial Intelligence and Contract Cheating: Charting a Teaching & Learning Path Forward”. Since this is a post, in part, about GenAI, I decided to try an experiment. I pasted my PPT notes into ChatGPT-4 and asked it to generate this blog post for me. The content is mine, but ChatGPT-4 gave it a title, put it into sections with headers, and connected some of the dots that are normal in a blog post but not necessarily in PPT notes. I edited it and updated it with some new thoughts and adjusted some things for clarity. Did using ChatGPT-4 save me any time? I don't think so. But I do think it took on the drudgery of formatting, which freed up my time to think. And this is a good thing, I believe.


The global higher education system plays a crucial role in society, promising to develop and certify the next generation of ethical citizens and professionals. Higher education institutions are responsible for producing all types of professionals who contribute to the economic growth in democratic societies. To fulfill this responsibility, institutions must ensure the integrity and value of their certifications. In recent years, the rise of contract cheating and the advent of AI-driven tools like ChatGPT (GenAI) have presented challenges to the traditional model of education. This blog post explores the opportunities and challenges that these developments bring to higher education and the need to rethink our approach to teaching, learning, and assessment.

The Social Contract and the Threat of Contract Cheating and AI

For higher education certifications to hold value in today's society, colleges and universities must ensure that there is integrity throughout the process that leads to those certifications. For example, instructors are responsible for designing fair and honest (and valid) assessments. Students must honestly and fairly demonstrate their learning through these assessments. And instructors must fairly and honestly evaluate student learning. However, the growing contract cheating industry and the emergence of GenAI threaten the integrity of this process.

The contract cheating industry - where humans complete academic work for our students - emerged to meet the demand from students looking to offload their academic work. Now, with GenAI, students can more quickly, cheaply, and easily outsource their learning and assessment completion to machines. This development raises questions about the value of certifications – are we certifying a student’s knowledge and abilities, their knowledge and abilities developed and executed in conjunction with GenAI, or the abilities of GenAI itself?

The Opportunity: Rethinking Higher Education

In 2008, I argued in “Academic Integrity in the 21st Century: A Teaching & Learning Imperative” that we must stop asking “how do we stop students from cheating?” and start asking “how do we ensure students are learning?”. I argued this because it seemed that we were still treating cheating and learning as if it were the 20th century and the internet did not yet exist.

The need to shift our focus from cheating to learning and from detecting to assessing is more imperative now because of the advent of GenAI. And, as GenAI becomes increasingly integrated into the tools we use daily (e.g., Microsoft 365; Google Workspace), we must acknowledge that we won’t be able to prevent its use and that, instead, we must help develop in our students the AI literacy and human skills that will serve them and our societies well. An educated citizenry will need to be able to effectively and ethically use GenAI for their work and to advance progress. A functioning democracy will need citizens who are able to discern information from mis- and dis-information.

However, in order to maintain the integrity of our certifications, we must also begin to ask questions about the boundaries between acceptable “cognitive offloading” and cheating. Individual instructors, programs, departments, and institutions need to wrestle with these questions because the answers may depend on various factors, such as the course learning objectives, the program’s expected outcomes, the context (what is being assessed and why), and whether offloading undermines learning or frees up cognitive resources for higher-order tasks. This is not a summer task that we can tackle ahead of the new academic year. This is a much larger, wicked problem.

Addressing the Wicked Problem

The challenges posed to the educational system by GenAI and the contract cheating industry constitute a "wicked problem", that is, a problem which is difficult to define and solve. When Holtel (2016) challenged industries to tackle the wicked problem that artificial intelligence posed to them, he argued that "it cannot be resolved by tested methodologies, given procedures and best practices" but requires "a more sophisticated approach". This means that in higher education, we should ask faculty to learn about GenAI and how to adapt their teaching and assessments in light of it, but we should not leave it on their shoulders alone. All stakeholders need to be involved because, as Holtel argued, "the impact of artificial intelligence is far-reaching". Also, colleges and universities must question our "value systems" and experiment with new approaches to teaching, learning, and assessment. I suggest that higher education institutions start this process by asking and answering some Wicked Questions, such as:

  1. What is knowledge, and what does "original work" look like?
  2. What does "do your own work" mean in the age of AI and outsourcing?
  3. Is [fill in the blank] something we should be teaching or assessing?
    1. Writing
    2. Coding
    3. Languages
    4. etc.
  4. How might we assess process over product?
  5. How should we assess learning, and when should students learn with or without AI?
  6. Should the traditional "certificate by credit hour" model be replaced with a competency-based model?
  7. What is the point of time-limited tests and time-limited academic terms?
  8. How do we ensure the integrity and quality of our degrees?
  9. What are the new roles of instructors, tutors, librarians, and other academic support staff?
  10. Why do we do what we do now, and should we do it differently?


The rise of GenAI and the contract cheating industry present both challenges and opportunities for higher education. It is essential for educators and administrators to rethink our approach to teaching, learning, and assessment and to engage in a systemic overhaul to ensure the integrity and value of a higher education certification. By asking the right wicked questions and embracing change, we can navigate the era of outsourcing and redefine higher education for the 21st century.

Pandora’s box is open. Generative AI (GenAI) exists and will continue to influence academic and instructional settings. For many, GenAI tools feel indispensable as our expectations for how academic work gets done are concurrently changing. How we choose to monitor, detect, and utilize this tool as individuals and at a university level will determine what will come from this technology. To explore the impact of GenAI (e.g., ChatGPT) on educational structure and learning, I participated on a student panel during UCSD’s Academic Integrity Virtual Symposium. This blog post summarizes my reflections on what my fellow panelists (Kharylle Rosario, Nathaniel Mackler, Sukham Sidhu) and I discussed with each other and our Panel Moderator (Avaneesh Narla).

Our panel discussed the range of impacts that GenAI has in education, including the fields of law, medicine, and even creative writing. In education, we acknowledged that while GenAI can be used as a tool to support learning, there is also the potential for malicious use. For example, the line between plagiarism and original work becomes blurred with GenAI use. Also, in many cases, we cannot identify the sources from which the GenAI is pulling, so there is an argument to be made that GenAI is stealing intellectual property when it generates text or images. With that being said, there is no strict legal code to guide GenAI use (at least in the United States), and in education, there is inconsistent implementation of restrictions on its use.

Detection of GenAI use is another hot topic in education. Tools like GPTZero provide a percentage likelihood that a provided text is AI generated or written by a human. While this novel tool could theoretically deter students from simply submitting GenAI output as their own work because of the risk of being detected, it is also true that GPTZero is not flawless. Its creators claim a detection accuracy rate “higher than 98%,” which is outstanding for such a new technology. However, even within that margin of error there will be false positives and false negatives. With some institutions considering an expulsion policy for the use of GenAI, false positives could result in serious harm.
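A quick back-of-envelope calculation shows why even a small false-positive rate matters at scale. All the numbers below are hypothetical: I read a "98% accurate" claim as a 2% false-positive rate on honest work purely for illustration.

```python
# Illustrative only: why a "98% accurate" detector still flags many
# honest students. Every number here is a hypothetical assumption.

false_positive_rate = 0.02   # assume 2% of honest work is wrongly flagged
submissions = 10_000         # assume 10,000 essays submitted in a term
honest_share = 0.95          # assume 95% of those submissions are honest

honest = submissions * honest_share            # 9,500 honest essays
wrongly_flagged = honest * false_positive_rate # 190 honest essays flagged
print(f"Honest essays wrongly flagged: {wrongly_flagged:.0f} of {honest:.0f}")
# → Honest essays wrongly flagged: 190 of 9500
```

Under these assumptions, a single term could produce nearly two hundred honest students accused of AI use, which is why pairing detection scores with severe sanctions like expulsion is so risky.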

Our panel also discussed the ethical implications of GenAI use in other areas. Systems such as Microsoft's Tay chatbot had to be taken down within 16 hours of its 2016 launch because of inflammatory hate speech. Because the data GenAI is trained on is influenced by human biases, so too are its outputs. There is also the issue of the “Black Box” of artificial intelligence: even those who created the code that drives GenAI do not really understand how it works. This Black Box effect is of concern because in some cases language-generating tools have pulled from nonexistent sources, have been wildly incorrect, and have provided sources that are fabricated. On top of inaccuracy, there have been specific examples of tools like ChatGPT producing strange and discriminatory outputs. It's also important to highlight that GPT-3, the OpenAI-created predecessor to ChatGPT, was prone to “violent, sexist, and racist remarks” as well. According to a report in Time magazine, to curb these biases OpenAI “sent tens of thousands of snippets of text to an outsourcing firm in Kenya,” using very graphic material to train the system to detect and filter such content. The outsourcing firm, San Francisco-based Sama, paid its workers between “$1.32 and $2 per hour depending on seniority and performance” to label some of the most vile content the internet had to offer. While the relationship between OpenAI and Sama later fell through, the episode shows that the creation of artificially generated text has relied on exploitative labor in the Global South.

The origins of GenAI systems are important to consider when assessing their usefulness in academic settings. These tools are still being worked on. They have flaws, and in many cases need human oversight to function well and ethically. The usefulness of these GenAI tools does not exist in a vacuum. While there have been many helpful uses of AI systems such as in predicting abnormalities early in health screenings and training models to translate obscure languages that may have otherwise been lost to time, the ethics and ground rules of this technology need to be seriously considered for general, academic, and industry use. I’m happy to have spoken on a panel of students from different majors in different departments, different educational backgrounds, and different perspectives on how artificial intelligence impacts our environments and learning. I hope that these conversations continue to happen so that we can figure out how to best use AI. The possibilities are beyond our imagination, but hopefully not beyond our control.

Understanding the Dissonance Between Student and Instructor Expectations

I recently moderated a student panel for UC San Diego's "Threats & Opportunities" Virtual Symposium. Although a student myself, at the doctoral level, I am also an instructor, and I experienced a dissonance between what the student panelists and I perceive to be the essential tasks of the learning process. While instructors, including myself, believe that certain tasks, such as brainstorming and summarizing, are vital for developing critical thinking skills, our student panelists argued that these tasks can be repetitive, outdated, and therefore may not capture their attention. This poses a challenge for instructors: how can we redesign assignments to encourage students to engage with the learning outcomes while maintaining academic integrity? This blog post will explore how Generative Artificial Intelligence (GenAI), such as ChatGPT, can possibly be used to bridge this gap by enabling students to design their own assignments and evaluate their progress while emphasizing the importance of instructor engagement and feedback.

Redefining Assignments with GenAI

I believe that GenAI provides the opportunity to revolutionize the way assignments are designed and conducted. For the first time, it allows students to design their own assignments based on their interests and needs, while allowing for instructor input and feedback. Students can provide information about their interests, previous knowledge, and learning objectives, and the AI can generate assignment prompts that align with these preferences. This will allow students to take more ownership of their learning process and thus enhance their engagement, while also ensuring that the assignments are more personally relevant and tailored. This process can be monitored and further improved by incorporating feedback from instructors, who can help refine the generated prompts to ensure that they are challenging, aligned with learning objectives, and encourage critical thinking.

To illustrate the potential of GenAI in transforming the learning experience, here are some examples of how it can be used to redesign assignments:

  • Guided Problem-Solving: GenAI can be used to create complex, real-world problems that require students to apply their knowledge and skills in a meaningful way. For example, in an environmental science class, a student could choose to study the water pollution crisis in their local community. While the instructor may not have sufficient knowledge to design low-level assessments on that topic, GenAI can review existing news and literature to quickly do so! Students can analyze data (real or artificially generated), propose solutions, and evaluate the potential impacts of their proposed actions. This approach allows students to pursue a topic of relevance and interest to them while engaging in critical thinking.
  • Collaborative Learning: A concern among instructors might be that personalized assessments will diminish opportunities for classroom interactions among students. But GenAI can also be used to facilitate collaborative learning experiences by creating virtual environments where students can bring their individualized projects together and share ideas. For instance, in a literature class, GenAI could generate a virtual roundtable discussion in which students assume the roles of characters from different novels they have read. They could then discuss a common theme or issue from their assigned character's perspective.
  • Personalized Feedback: In my opinion, a key benefit of GenAI tools is to facilitate the assessment of student performance and the provision of personalized feedback, helping students identify areas for improvement and guiding them toward a deeper understanding of the material. This must be balanced with input from the student, but if implemented properly, could lead to much greater student engagement with the material. Khan Academy is already platforming such a tool (Khanmigo), which will provide an excellent testing ground for this idea.
  • AI-Assisted Creativity: On the panel, the students repeatedly mentioned creativity as the key skill that they want to be tested in the age of GenAI. Here again, GenAI can be used to inspire students to think creatively and explore new ideas. In a design class, for example, GenAI could generate a series of constraints or requirements for a new product, such as a sustainable packaging design. Students could then be challenged to develop a concept that meets these requirements, using their knowledge of materials, aesthetics, and functionality.

Fostering Trust and Building Relationships for Intrinsic Motivation

An essential aspect of the classroom, one that matters even more when using GenAI, is fostering an environment of trust and building strong relationships between students and instructors. Intrinsic motivation is a critical factor in students' willingness to engage with assignments and learn from them, and open communication, transparency, and a supportive atmosphere encourage students to engage with assignments genuinely. While students could potentially use GenAI to complete the AI-generated prompts, trust and strong relationships help ensure they remain committed to their learning journey.

Limitations of GenAI in Education

However, while generative AI offers these new possibilities, it is essential to be aware of its limitations. Two of these limitations were discussed by the panel: the black box problem and potential biases. The black box problem refers to the difficulty in understanding the inner workings of AI algorithms, making it challenging to interpret and explain their decision-making processes. This can be particularly concerning when AI is used for generating assignments and providing feedback, as it may lead to a lack of transparency and accountability. To address the black box problem, instructors must remain actively involved in the assignment creation and evaluation process. Skepticism of AI-generated content, by both the instructor and the student, will be essential to ensure the quality of assessments.

Biases in AI systems are another concern, as AI algorithms learn from existing data, which may contain historical biases that can be inadvertently incorporated into the generated assignments or feedback. It is crucial for instructors to be vigilant in identifying and addressing any biases present in AI-generated content, and work with AI developers to improve the algorithms by using diverse and unbiased data sources.

Conclusion: Bridging the Gap with GenAI

Despite these limitations, GenAI offers a promising approach to bridging the gap between student and instructor expectations in education. By enabling students to design their own assignments and receive personalized feedback, GenAI can enhance student engagement, foster critical thinking, and promote a sense of ownership in the learning process. However, it is essential for instructors to remain vigilant, monitoring the assessments and providing consistent feedback. GenAI holds great promise that we are still realizing, and providing students agency in the learning process can be an excellent way to realize that potential.

(Note: This text was originally written by the author, but refined using feedback and examples provided by ChatGPT)