There has been a great deal of negative attention surrounding the launch of ChatGPT and the impact it will have on academic integrity and student learning. Certainly, ChatGPT is a technology that can be misused. An enterprising student could simply type a suitable prompt into the chatbot and generate an answer to an assignment that they could then hand in for academic credit. If the student has the right skills, and the assessment is designed such that simply generating a solution is enough, they may be able to get a passing grade with very little work. But, despite these risks, could ChatGPT ever be considered a force for good in the educational system?

Much of the research I’ve been involved with throughout my career has considered how technology and opportunity can be misused. My work on contract cheating showed that students could pay a third party to complete assessments for them, missing out on the opportunity to learn. That is despite outsourcing being a completely valid process in the business world. We still need to verify that students can complete assessments for themselves in order to protect the value of their academic awards, so we cannot simply let them outsource. Can the same be said about using ChatGPT?

I saw the challenges of artificial intelligence (AI) on the horizon some time ago and wrote about this in a chapter for Rettinger & Bertram Gallant's Cheating Academic Integrity book. Despite the book being under a year old, I fear the information I provided within it is already beginning to date. The launch of ChatGPT has provided educational challenges that were barely imaginable when I wrote the chapter.

As a Computer Scientist who understands both academic integrity and generative artificial intelligence (GenAI), I am being asked to speak about this area a lot, both within educational settings and to the media. As I expressed during my presentation, The Impact of Artificial Intelligence on Academic Integrity, at the UCSD Academic Integrity Virtual Symposium Series 2023, AI isn’t a flash in the pan. It is here to stay, and it will require us to reconsider many of our educational practices, including how we think about academic integrity. Among the questions I posed during the presentation: is there a place for ChatGPT in an educational setting, and how can it be used in an ethical way?

It may be surprising to learn that ChatGPT itself is able to express an opinion on these matters (or, more accurately, it can generate text that addresses a prompt asking about them). The answer it provided, which I shared during the presentation, is open to critique and evaluation, but is remarkably balanced. Note that if you were to ask the same question again, you might get a different response, because a Large Language Model (LLM) like ChatGPT generates its output probabilistically rather than retrieving a fixed answer.

As I showed during the presentation, ChatGPT is able to give a remarkably balanced view of the arguments for and against its use being more formally integrated into the educational system. That view also aligns well with many of the ideas I’ve explored during my own presentations on the subject.

In this blog post, I’m only going to pick up on a few of the ideas, but many of the advantages that ChatGPT expresses relate to its ability to improve the learning experience for students: better personalisation, the ability to support learners from different backgrounds, and, as my own students have themselves stated, the option to provide alternative explanations of concepts covered in a difficult-to-understand lecture. ChatGPT also picks up on the idea that it will be used in the real world. Unlike the contract cheating and outsourcing example I gave earlier, students can use ChatGPT within assessed work and still learn. It is often the underlying assessment design that needs to be considered further.

Despite all of this, safeguards in the system are still needed. ChatGPT notes the danger of overreliance on its output. I would add that we still need to make sure students have the foundational knowledge required to complete tasks for themselves. An AI system will not always be available, and there are real-world situations where its use would not be appropriate. There are also issues of privacy, security and equity of access that will need to be explored at an institutional level.

Perhaps controversially, I titled this blog post ChatGPT – A Force For Good? The world is changing and LLMs are here to stay. We can’t ignore this technology. Many students will be using it in their future careers, and we need to be able to validate that they understand its strengths and weaknesses, can evaluate the quality of the information it produces, can use it productively, and can build upon its output. Opportunities are opening up for students that just wouldn’t have been possible before. As I have said many times, the road ahead is exciting!