Last week I participated in an academic integrity panel entitled ‘Navigating academic integrity in the age of breakthrough technologies’. The event brought together experts from Higher Education and technology (represented by Turnitin staff) to explore how to collectively address current challenges in Higher Education around academic integrity. The other panel members were Tricia Bertram Gallant (who of course needs no introduction to ICAI members!), Leslie Layne (University of Lynchburg, USA), Viatcheslav (Slava) Dmitriev (Rennes School of Business, France) and Patti West-Smith (Turnitin). In this blog, I am going to reflect on some of my takeaways.

Celebration!
My first contribution to the panel discussion was to say that we needed to celebrate this moment for academic integrity. I was looking at the participant numbers, which had rocketed to almost 700! Having worked in academic integrity for 20 years, I am aware that academic integrity events did not previously generate this level of interest, and I feel very excited that they now do. I thanked participants and felt it was a moment to take stock of this surge of popularity, with an audience both widespread and eager to engage in academic integrity debate (based on the 50+ questions and comments from participants in many varied global locations).

Responsible use
Of course, the driver for popularity here is the focus on academic integrity AND artificial intelligence (AI). We began by debating ways to ensure students’ responsible use of AI tools while safeguarding academic standards. There were several comments about tools; I commented that our guidance to students should focus on their practices with tools, rather than putting tools into good and bad lists. I gave the example of an institutional course I have developed to guide students with ethical decision making. It uses a traffic light model to distinguish between practices with AI tools that are OK (green), not OK (red) and those that need to be checked (amber), thus offering students a means of navigating AI responsibly with a universally understood symbol.

Writing in the age of AI
There was some contention in our debate on student writing in the age of AI. Slava declared that writing has become an ‘obsolete competency’, given the widespread student use of AI tools for writing. Tricia argued that writing remains a skill we want students to develop for the long term, within a set of ‘durable human skills’ of communication, empathy and interpersonal skills. Tricia also made the point that in secure testing conditions, writing can still be assessed appropriately. I agreed with the importance of still developing these human skills and that in some ways, the emergence of widespread AI has encouraged staff and students to value a closer human relationship with meaningful interactions.

Support for educators
We discussed how educators (faculty/academic staff) could be supported in adapting to the integration of AI into teaching and assessment. We all recognized this is a considerable challenge. I made the point that students, who are often much younger and more technologically informed, are likely to be much more advanced in their practices and knowledge of AI than educators. Leslie shared that there were huge challenges for educators at her institution, with many not knowing what to do with new expectations at every level, including a lack of knowledge about whether student use of AI constituted an academic conduct breach. Patti revealed that many queries from educators to Turnitin requested support with decisions about appropriate or inappropriate use of AI. Tricia reminded us of a further issue: many faculty members may be experts in their academic field but not trained to teach, so grappling with these decisions is especially problematic. I added that it is important to bring the focus back to education by moving away from trying to detect AI use or misuse, and moving back to trying to detect knowledge and whether students are meeting the learning outcomes. This is likely to mean changes to assessment, but assessment must be fit for purpose and appropriate for our current context. Slava mentioned that changes to assessment did not need to present excessive challenges and could be relatively easy for those who embraced technology as early adopters – who could be in any discipline, not necessarily computer science. We all agreed that educators, whatever their discipline, should engage with AI, and that institutions need to support the AI literacy development not just of students but also of educators.

Regulatory frameworks
We ended the panel discussion with some consideration of how institutions may use centrally produced guidance from sector regulatory bodies. In the UK, the QAA (Quality Assurance Agency) has been highly active in producing useful guidance for the sector. Similarly, TEQSA (Tertiary Education Quality and Standards Agency) in Australia has extensive guidance. I also recommended the UNESCO international framework for AI, which is useful in any context, especially for its focus on human rights, diversity and inclusion in relation to AI. Overall, I concluded that debates about academic integrity and AI are really beneficial for a community sharing approach.

Thank you for being a member of ICAI. Not a member of ICAI yet? Check out the benefits of membership at https://academicintegrity.org/about/member-benefits and consider joining us by contacting . Be part of something great!