2023

Since the launch of ChatGPT in November 2022, I have been immersed in studying generative artificial intelligence (GenAI) and its potential impact, both positive and negative, on higher education. Given my position as the Academic Integrity Office Director at the University of California, San Diego, I am particularly interested in the impact that GenAI has, and will have, on academic integrity. I have had to figure out how to answer questions from faculty on how to prevent cheating with GenAI, how to talk to students about academic integrity in the era of GenAI, and how to document cases of integrity violations involving GenAI.

However, those who know me and my writings understand that I see academic integrity as a teaching and learning issue, not as a student conduct issue. So my interests in GenAI go beyond its impact on student behaviour to its impact on teachers. It is within that interest that I recently finished Dan Fitzpatrick’s (with Amanda Fox and Brad Weinstein) book “The AI Classroom: The Ultimate Guide to Artificial Intelligence in Education”. Even though the book was written with K-12 teachers in mind, I wanted to delve into it for two reasons: 1) K-12 teachers are trained to teach, and thus have a lot more pedagogical and assessment knowledge than most HE teachers do (which means we have a lot to learn from them); and 2) it was touted as a very practical book that enables teachers to go from zero to hero in their use of GenAI in education.

I was not disappointed. Many of the other books I’m reading (like Christian’s “The Alignment Problem” or “Rebooting AI: Building Artificial Intelligence We Can Trust” by Marcus & Davis) are theoretical or focused on building an understanding of GenAI and its broad implications for society. So, “The AI Classroom” provides a nice change of pace: practical, specific, illustrative, and encouraging.

After a quick backgrounder on Artificial Intelligence in Part I, Fitzpatrick and colleagues drop the reader in Part II directly into how to use GenAI to make teaching and teaching prep more efficient, as well as to make learning for the students more engaging and inclusive. Then, in Part III, the authors review current AI tools that can be used by teachers and students, from tools that serve as educational platforms to those that assist with research, converting text-to-audio (and image, and video, and 3D and code), and, of course, AI chatbots (other than ChatGPT). Finally, in Part IV, the authors walk us through what educational leaders need to think about and do in the face of the “AI Revolution” and they muse about the future of Artificial Intelligence and the future of education.

While the examples and illustrations are all geared to K-12 teachers, the key lessons are easily transferable to the higher education environment. Following are three highlights for higher education faculty that I took away from this book:

  1. “Outsource your doing, not your thinking.” (p. 33). When considering if and how GenAI should be used in your teaching, make a list of all of the things you have to do on the route to actual engagement with your students in the learning environment: craft learning objectives, design your lessons, create learning activities, develop assessments, evaluate assessments, and provide feedback on student learning. GenAI can help you do all of these things. However, “outsource your doing, not your thinking” also provides guidance for faculty in thinking about when they should allow students to use GenAI in their learning. If students can outsource some of the doing behind learning and thinking (e.g., formatting citations; conducting research), then can we free up their minds for higher order thinking? And for those faculty worried that GenAI tools like ChatGPT will “hinder student development”, the authors argue that student “development is already hindered” by an educational system designed for a different world than the one that students will inhabit as professionals (p. 47). The key is to figure out how we can “design learning…to ensure students’ knowledge and skills are developed” (p. 77). For me, this means figuring out how we can use GenAI to help us, or free up time to enable us, to assess process (the doing) rather than the product (e.g., the essay). We have relied on products for a long time in higher education, especially as we became more industrialized and routinized and our classes grew larger and larger. However, products are meant to be learning artifacts, and so if the integrity of the learning artifact cannot be assured, then process is what we need to focus on.
  2. Use an AI Learning Framework (p. 80) with GenAI to develop in students the human skills that cannot be outsourced to machines. The authors’ argument here is really about leveraging GenAI to make learning more active and engaged. Active and engaged learning is typical at the primary school level, but as students progress up the educational ladder, school tends to get less and less engaging. By the time they reach college or university, readings, podcasted lectures, homework, and exams are the typical fare. Instead, faculty could leverage GenAI to ignite student curiosity, then guide them in asking questions about the topic, engage them in the topic through discourse (with a GenAI tool, their peers, and the instructional team), create opportunities for students to think critically about the content, and then apply their knowledge in authentic or meaningful assessments. GenAI can help facilitate all of these steps in authentic and active learning, essentially experiential learning, which is a powerful and natural way to learn and develop.
  3. Develop your skills at Promptcrafting (p. 90). Getting GenAI to do what you want it to do can be tricky. So, learning how to prompt and ask questions of GenAI is a valuable skill for both faculty and our students. Fitzpatrick and colleagues provide a simple and effective framework for prompting GenAI known as P.R.E.P.: Prompt the machine, give it a Role or voice, be Explicit in your instructions, and set the Parameters for the answer. I found this framework genuinely helpful, especially because the authors then provide template after template for doing this, as well as illustrative examples of the templates-in-action, showing us both the prompt and the resulting ChatGPT output. You can use PREP for yourself, but also teach it to your students to help them use GenAI more effectively.
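As a rough sketch of how the four P.R.E.P. components might be assembled into a single prompt, here is a small illustration in Python. The component wording below is my own invention, not a template from the book; the point is simply that each part of the framework maps to a distinct piece of the final prompt.

```python
# Illustrative only: composing a prompt from the four P.R.E.P. components.
# The component text here is invented for this example, not taken from the book.

def prep_prompt(prompt, role, explicit, parameters):
    """Join the P.R.E.P. components into one prompt string."""
    return " ".join([prompt, role, explicit, parameters])

full_prompt = prep_prompt(
    prompt="Write three short paragraphs introducing photosynthesis.",
    role="Act as a first-year college biology instructor.",
    explicit="After each paragraph, add one multiple-choice question with four options.",
    parameters="Keep each paragraph under 100 words and mark the correct answer.",
)

print(full_prompt)
```

Whether you assemble the pieces mentally or in a template like this, the habit is the same: before hitting enter, check that your prompt names a task, a role, explicit instructions, and parameters for the output.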

To illustrate, here’s how a professor at the college level might use PREP to create course content. Below is an image of a prompt I gave to ChatGPT-4 based on the template Fitzpatrick and colleagues provide on p. 132.

[Image: ChatGPT prompt for May blog]

Next is an image of the first paragraph and MC question that ChatGPT generated in response to my prompt.

[Image: ChatGPT output 1 for May 21 blog]

And after generating the remaining two paragraphs and MC questions (not shown), ChatGPT fulfilled the rest of the request as follows:

[Image: ChatGPT output 2 for May 21 blog]

Not bad, and all in 20 seconds! As the authors note, with any GenAI output, you will want to evaluate it, and teach your students how to evaluate it, before using it. And the authors provide another framework for doing just that: E.D.I.T. Evaluate the content for language, facts & structure; Determine accuracy and corroborate sources; Identify biases and misinformation; and Transform the content to reflect any of your own adjustments (p. 100).

The book is filled with helpful example after helpful example, and template after template, for using the PREP and EDIT frameworks to quickly and easily make GenAI your teaching assistant, all of which are easily adaptable to the higher education setting.

Summary

There are over 300 pages in this book, with nuggets of wisdom in almost every one, so I have only scratched the surface in this blog post. The good news is that the book is a quick read and provides an easy-to-navigate, just-in-time resource for college and university faculty who want to begin to use GenAI in their teaching but don't know where to start. And, since it sells for under $30 USD, “The AI Classroom: The Ultimate Guide to Artificial Intelligence in Education” is a great purchase overall; if you are a higher ed faculty member, instructional developer/designer, or academic integrity expert, I highly recommend adding it to your summer reading list. You may be surprised by how easily and quickly you can begin to play around with, and plan for, how GenAI can help you teach, assess, and engage with students in learning come fall term.

This blog post is a written version of my opening remarks for UC San Diego’s Virtual Symposium on “The Threat & Opportunities of Artificial Intelligence and Contract Cheating: Charting a Teaching & Learning Path Forward”. Since this is a post, in part, about GenAI, I decided to try an experiment. I pasted my PPT notes into ChatGPT-4 and asked it to generate this blog post for me. The content is mine, but ChatGPT-4 gave it a title, put it into sections with headers, and connected some of the dots that are expected in a blog post but not necessarily in PPT notes. I edited it, updated it with some new thoughts, and adjusted some things for clarity. Did using ChatGPT-4 save me any time? I don't think so. But I do think it took on the drudgery work of formatting, which freed up my time to think. And this is a good thing, I believe.

Introduction

The global higher education system plays a crucial role in society, promising to develop and certify the next generation of ethical citizens and professionals. Higher education institutions are responsible for producing all types of professionals who contribute to the economic growth in democratic societies. To fulfill this responsibility, institutions must ensure the integrity and value of their certifications. In recent years, the rise of contract cheating and the advent of AI-driven tools like ChatGPT (GenAI) have presented challenges to the traditional model of education. This blog post explores the opportunities and challenges that these developments bring to higher education and the need to rethink our approach to teaching, learning, and assessment.

The Social Contract and the Threat of Contract Cheating and AI

To maintain the value that higher education certifications hold in today's society, colleges and universities must ensure that there is integrity throughout the process that leads to those certifications. For example, instructors are responsible for designing fair and honest (and valid) assessments. Students must honestly and fairly demonstrate their learning through these assessments. And instructors must fairly and honestly evaluate student learning. However, the growing contract cheating industry and the emergence of GenAI threaten the integrity of this process.

The contract cheating industry - where humans complete academic work for our students - emerged to meet the demand from students looking to offload their academic work. Now, with GenAI, students can more quickly, cheaply, and easily outsource their learning and assessment completion to machines. This development raises questions about the value of certifications – are we certifying a student’s knowledge and abilities, their knowledge and abilities developed and executed in conjunction with GenAI, or the abilities of GenAI itself?

The Opportunity: Rethinking Higher Education

In 2008, I argued in “Academic Integrity in the 21st Century: A Teaching & Learning Imperative” that we must stop asking “how do we stop students from cheating?” and start asking “how do we ensure students are learning?”. I argued this because it seemed that we were still treating cheating and learning as if it were the 20th century and the internet did not yet exist.

The need to shift our focus from cheating to learning and from detecting to assessing is more imperative now because of the advent of GenAI. And, as GenAI becomes increasingly integrated into the tools we use daily (e.g., Microsoft 365; Google Workspace), we must acknowledge that we won’t be able to prevent its use and that, instead, we must help develop in our students the AI literacy and human skills that will serve them and our societies well. An educated citizenry will need to be able to effectively and ethically use GenAI for their work and to advance progress. A functioning democracy will need citizens who are able to discern information from mis- and dis-information.

However, in order to maintain the integrity of our certifications, we must also begin to ask questions about the boundaries between acceptable “cognitive offloading” and cheating. Individual instructors, programs, departments, and institutions need to wrestle with these questions because the answers may depend on various factors, such as the course learning objectives, the program’s expected outcomes, the context (what is being assessed and why), and whether offloading undermines learning or frees up cognitive resources for higher-order tasks. This will not be a summer task that we can tackle before the new academic year. This is a much larger, wicked problem.

Addressing the Wicked Problem

The challenges posed to the educational system by GenAI and the contract cheating industry constitute a "wicked problem", that is, a problem which is difficult to define and solve. When Holtel (2016) challenged industries to tackle the wicked problem that artificial intelligence posed to them, he argued that "it cannot be resolved by tested methodologies, given procedures and best practices" but requires "a more sophisticated approach". This means that in higher education, we should ask faculty to learn about GenAI and how to adapt their teaching and assessments in light of it, but we should not leave it on their shoulders alone. All stakeholders need to be involved because, as Holtel argued, "the impact of artificial intelligence is far-reaching". Also, colleges and universities must question our "value systems" and experiment with new approaches to teaching, learning, and assessment. I suggest that higher education institutions start this process by asking and answering some Wicked Questions, such as:

  1. What is knowledge, and what does "original work" look like?
  2. What does "do your own work" mean in the age of AI and outsourcing?
  3. Is [fill in the blank] something we should be teaching or assessing?
    1. Writing
    2. Coding
    3. Languages
    4. etc.
  4. How might we assess process over product?
  5. How should we assess learning, and when should students learn with or without AI?
  6. Should the traditional "certificate by credit hour" model be replaced with a competency-based model?
  7. What is the point of time-limited tests and time-limited academic terms?
  8. How do we ensure the integrity and quality of our degrees?
  9. What are the new roles of instructors, tutors, librarians, and other academic support staff?
  10. Why do we do what we do now, and should we do it differently?

Conclusion

The rise of GenAI and the contract cheating industry present both challenges and opportunities for higher education. It is essential for educators and administrators to rethink our approach to teaching, learning, and assessment and to engage in a systemic overhaul to ensure the integrity and value of a higher education certification. By asking the right wicked questions and embracing change, we can navigate the era of outsourcing and redefine higher education for the 21st century.

Pandora’s box is open. Generative AI (GenAI) exists and will continue to influence academic and instructional settings. For many, GenAI tools feel indispensable as our expectations for how academic work gets done are concurrently changing. How we choose to monitor, detect, and utilize this tool as individuals and at a university level will determine what comes from this technology. To explore the impact of GenAI (e.g., ChatGPT) on educational structure and learning, I participated in a student panel during UCSD’s Academic Integrity Virtual Symposium. This blog post summarizes my reflections on what my fellow panelists (Kharylle Rosario, Nathaniel Mackler, Sukham Sidhu) and I discussed with each other and our Panel Moderator (Avaneesh Narla).

Our panel discussed the range of impacts that GenAI has in education and in fields such as law, medicine, and even creative writing. We acknowledged that while GenAI can be used as a tool to support learning, there is also the potential for malicious use. For example, the line between plagiarism and original work becomes blurred with GenAI use. Also, in many cases, we cannot identify the sources from which the GenAI is pulling, so there is an argument to be made that GenAI is stealing intellectual property when it generates text or images. That being said, there is no strict legal code to guide GenAI use (at least in the United States), and in education, there is inconsistent implementation of restrictions on its use.

Detection of GenAI use is another hot topic in education. Tools like GPTZero provide a percentage likelihood that a provided text is AI generated or written by a human. While this novel tool could theoretically deter students from simply submitting GenAI output as their own work because of the risk of being detected, it is also true that GPTZero is not flawless. Its creators claim a detection accuracy rate “higher than 98%,” which is outstanding for such a new technology. However, within that margin of error there will be both false positives and false negatives. With some institutions considering an expulsion policy for the use of GenAI, false positives could result in serious harm.

Our panel also discussed the ethical implications of GenAI use in other areas. Systems such as Microsoft's Tay chatbot had to be taken down within 16 hours of its 2016 launch because of inflammatory hate speech. The data GenAI is trained on is influenced by human biases, and so too are its outputs. There is also the issue of the “Black Box” of artificial intelligence: even those who created the code that drives GenAI do not really understand how it works. This Black Box effect is of concern because, in some cases, language generation tools have pulled from nonexistent sources, have been wildly incorrect, and have provided fabricated sources. On top of inaccuracy, there have been specific examples of tools like ChatGPT producing strange and discriminatory outputs. It's also important to highlight that GPT-3, the predecessor to ChatGPT created by OpenAI, was prone to “violent, sexist, and racist remarks” as well. According to a report by Time magazine, to curb these biases, OpenAI “sent tens of thousands of snippets of text to an outsourcing firm in Kenya”, using very graphic material to train the system to detect and filter such content. That firm, San Francisco-based Sama, paid its workers between “$1.32 and $2 per hour depending on seniority and performance” to label some of the most vile content the internet had to offer. While the relationship between OpenAI and Sama later fell through, the creation of artificially generated text has relied on exploitative labor in the Global South.

The origins of GenAI systems are important to consider when assessing their usefulness in academic settings. These tools are still being worked on. They have flaws, and in many cases need human oversight to function well and ethically. The usefulness of these GenAI tools does not exist in a vacuum. While there have been many helpful uses of AI systems, such as predicting abnormalities early in health screenings and training models to translate obscure languages that may have otherwise been lost to time, the ethics and ground rules of this technology need to be seriously considered for general, academic, and industry use. I’m happy to have spoken on a panel of students from different majors in different departments, with different educational backgrounds and different perspectives on how artificial intelligence impacts our environments and learning. I hope that these conversations continue to happen so that we can figure out how to best use AI. The possibilities are beyond our imagination, but hopefully not beyond our control.

Understanding the Dissonance Between Student and Instructor Expectations

I recently moderated a student panel for UC San Diego's "Threats & Opportunities" Virtual Symposium. Although a student myself, at the doctoral level, I am also an instructor, and I experienced a dissonance between what the student panelists and I perceive to be the essential tasks of the learning process. While instructors, including myself, believe that certain tasks, such as brainstorming and summarizing, are vital for developing critical thinking skills, our student panelists argued that these tasks can be repetitive, outdated, and therefore may not capture their attention. This poses a challenge for instructors: how can we redesign assignments to encourage students to engage with the learning outcomes while maintaining academic integrity? This blog post will explore how Generative Artificial Intelligence (GenAI), such as ChatGPT, can possibly be used to bridge this gap by enabling students to design their own assignments and evaluate their progress, while emphasizing the importance of instructor engagement and feedback.

Redefining Assignments with GenAI

I believe that GenAI provides the opportunity to revolutionize the way assignments are designed and conducted. For the first time, it allows students to design their own assignments based on their interests and needs, while allowing for instructor input and feedback. Students can provide information about their interests, previous knowledge, and learning objectives, and the AI can generate assignment prompts that align with these preferences. This will allow students to take more ownership of their learning process and thus enhance their engagement, while also ensuring that the assignments are more personally relevant and tailored. This process can be monitored and further improved by incorporating feedback from instructors, who can help refine the generated prompts to ensure that they are challenging, aligned with learning objectives, and encourage critical thinking.

To illustrate the potential of GenAI in transforming the learning experience, here are some examples of how it can be used to redesign assignments:

  • Guided Problem-Solving: GenAI can be used to create complex, real-world problems that require students to apply their knowledge and skills in a meaningful way. For example, in an environmental science class, the student can choose to study the water pollution crisis in their local community. While the instructor may not have sufficient knowledge to design low-level assessments, GenAI can review existing news and literature on the topic to quickly do so! Students can analyze data (real or artificially-generated), propose solutions, and evaluate the potential impacts of their proposed actions. This approach allows students to pursue a topic of relevance and interest to them while engaging in critical thinking. 
  • Collaborative Learning: A concern among instructors might be that personalized assessments will diminish opportunities for classroom interactions among students. But GenAI can also be used to facilitate collaborative learning experiences by creating virtual environments where students can bring their individualized projects together and share ideas. For instance, in a literature class, GenAI could generate a virtual roundtable discussion in which students assume the roles of characters from different novels they have read. They could then discuss a common theme or issue from their assigned character's perspective.
  • Personalized Feedback: In my opinion, a key benefit of GenAI tools is to facilitate the assessment of student performance and the provision of personalized feedback, helping students identify areas for improvement and guiding them toward a deeper understanding of the material. This must be balanced with input from the student, but if implemented properly, could lead to much greater student engagement with the material. Khan Academy is already offering such a tool (Khanmigo), which will provide an excellent testing ground for this idea.
  • AI-Assisted Creativity: On the panel, the students repeatedly mentioned creativity as the key skill that they want to be tested in the age of GenAI. Here again, GenAI can be used to inspire students to think creatively and explore new ideas. In a design class, for example, GenAI could generate a series of constraints or requirements for a new product, such as a sustainable packaging design. Students could then be challenged to develop a concept that meets these requirements, using their knowledge of materials, aesthetics, and functionality.

Fostering Trust and Building Relationships for Intrinsic Motivation

An essential aspect of the classroom that is enhanced when using GenAI is fostering an environment of trust and building strong relationships between students and instructors. Intrinsic motivation is a critical factor in students' willingness to engage with assignments and learn from them, and open communication, transparency, and a supportive atmosphere encourage students to engage with assignments genuinely. While students could potentially use GenAI to complete the AI-generated prompts, trust and strong relationships help ensure they remain committed to their learning journey.

Limitations of GenAI in Education

However, while generative AI offers these new possibilities, it is essential to be aware of its limitations. Two of these limitations were discussed by the panel: the black box problem and potential biases. The black box problem refers to the difficulty in understanding the inner workings of AI algorithms, making it challenging to interpret and explain their decision-making processes. This can be particularly concerning when AI is used for generating assignments and providing feedback, as it may lead to a lack of transparency and accountability. To address the black box problem, instructors must remain actively involved in the assignment creation and evaluation process. Skepticism of AI-generated content, by both the instructor and the student, will be essential to ensure the quality of assessments.

Biases in AI systems are another concern, as AI algorithms learn from existing data, which may contain historical biases that can be inadvertently incorporated into the generated assignments or feedback. It is crucial for instructors to be vigilant in identifying and addressing any biases present in AI-generated content, and work with AI developers to improve the algorithms by using diverse and unbiased data sources.

Conclusion: Bridging the Gap with GenAI

Despite these limitations, GenAI offers a promising approach to bridging the gap between student and instructor expectations in education. By enabling students to design their own assignments and receive personalized feedback, GenAI can enhance student engagement, foster critical thinking, and promote a sense of ownership in the learning process. However, it is essential for instructors to remain vigilant, monitoring assessments and providing consistent feedback. GenAI holds great promise that we are still realizing, and giving students agency in the learning process can be an excellent way to realize that potential.

(Note: This text was originally written by the author, but refined using feedback and examples provided by ChatGPT)

There’s been a whole host of negative attention surrounding the launch of ChatGPT and the impact it will have on academic integrity and student learning. Certainly, ChatGPT is a technology that can be misused. It is possible for an enterprising student to simply type a suitable prompt into the chatbot and generate an answer to an assignment that they could then hand in for academic credit. If the student has the right skills and the assessment details are such that simply generating a solution is enough, then the student may be able to get a passing grade with very little work. But, despite these risks, could ChatGPT ever be considered a force for good in the educational system?

Much of the research I’ve been involved with throughout my career has considered how technology and opportunities can be misused. My work on contract cheating showed that students could pay a third party to complete assessments for them, missing out on the opportunities to learn. That is despite outsourcing being a completely valid process in the business world. There are still reasons that we have to verify that students have the ability to complete assessments for themselves in order to protect the value of their academic awards, so we can’t just let them outsource. Can the same be said about using ChatGPT?

I saw the challenges of artificial intelligence (AI) on the horizon some time ago and wrote about this in a chapter for Rettinger & Bertram Gallant's Cheating Academic Integrity book. Despite the book being under a year old, I fear the information I provided within it is already beginning to date. The launch of ChatGPT has provided educational challenges that were barely imaginable when I wrote the chapter.

As a Computer Scientist who understands both academic integrity and generative artificial intelligence (GenAI), I am being asked to speak about this area a lot, both within educational settings and to the media. As I expressed during my presentation, The Impact of Artificial Intelligence on Academic Integrity, at the UCSD Academic Integrity Virtual Symposium Series 2023, A.I. isn’t a flash in the pan. It is here to stay, and it will require us to reconsider many of our educational practices, including how we think about academic integrity. Some questions I posed during the presentation were: is there a place for ChatGPT in an educational setting, and how can it be used in an ethical way?

It may be surprising to learn that ChatGPT itself is able to express an opinion on these matters (or, more accurately, it can generate text that addresses a prompt asking about this). The answer provided, which I shared during the presentation, is open for critique and evaluation, but is remarkably balanced. Note that if you were to ask this same question again, you may get a different response, due to the way that a Large Language Model like ChatGPT operates.

As I showed during the presentation, ChatGPT is able to give a remarkably balanced view of the arguments for and against its use being more formally integrated into the educational system. That view also matches many of the ideas that I’ve explored during my own presentations on the subject.

In this blog post, I’m only going to pick up on a few of the ideas, but many of the advantages that ChatGPT expresses relate to its ability to improve the learning experience for students: better personalisation, the ability to support learners from different backgrounds, and, as my own students have themselves stated, the option to provide different explanations of concepts than those given in a difficult-to-understand lecture. ChatGPT also picks up on the idea that it will be used in the real world. Unlike the contract cheating and outsourcing example I gave earlier, students can use ChatGPT in an assessed manner and still learn. It is the underlying assessment design that often needs to be considered further.

Despite all of this, safeguards in the system are still needed. ChatGPT notes the danger of overreliance on its output. I would add to this, that we still need to make sure that students have the foundational knowledge needed to complete tasks for themselves. An A.I. system will not always be available and there are real world situations where its use would not be appropriate. There are also issues to do with privacy, security and equity of access that will need to be explored at an institutional level. 

Perhaps controversially, I titled this blog post ChatGPT – A Force For Good? The world is changing and LLMs are here to stay. We can't ignore this technology. Many students will be using it in their future careers, and we need to be able to validate that they understand its strengths and weaknesses, can evaluate the quality of information produced, can use it in a productive manner, and can build upon the output produced. Opportunities are opening up for students that just wouldn't have been possible before. As I have said many times, the road ahead is exciting!

Four people talking and working together.

It seems safe to say that group work in academic settings provides valuable experiences for students. The experiences gained from working in groups, and the skills acquired, are generally accepted as being transferable to future employment and are highly valued by employers (see recent blog post: Group Work is not just for Students). An ideal group project would have our students effectively planning, communicating, collaborating, and creating to successfully reach a common goal.

More often than not, it seems, group projects are detested by students for a variety of reasons, some of which are perfectly reasonable. The complaint I hear most often in my own practice is that one (or more) group members contributed virtually nothing during the process. While one might say that this itself prepares students for how "real-world" workplaces can function, we should hold ourselves to higher standards and at the very least do our best to encourage and guide our students toward productive, and contributory, collaboration with others. What follows is a small selection of practices that have shaped the evolution of collaborative assignments in my own course. More specifically, these are the most impactful practices that I've implemented with academic integrity in mind. My projects are wholly collaborative, so collusion is not a concern; if you have (or plan to implement) group projects with additional required individual contributions, it certainly will be. Examples of such individual contributions include an individual student reflection on the collaborative process and their own contributions, and/or an individual peer feedback form.

The current structure of the collaborative projects in my course requires a maximum of three group members. I've had successful groups of four before, but I've anecdotally had better experiences when groups are limited to three. I select the groups in advance using information gained from an initial course survey, focusing firstly on major and secondly (if needed) on personal hobbies/interests. Students are generally free to switch groups after the first project if they feel so inclined. Unsurprisingly, the most successful groupings generally stick together for the remainder of the term.

Each collaborative project is assigned with a set of specific and detailed guidelines. I call it "The Roadmap". This roadmap describes and reiterates the collaborative nature of the project and specifically states that all group members should engage with all parts of the project; division of tasks for later assembly is not the goal, as each student is responsible for the entirety of the project. To facilitate this holistic collaborative approach, the projects include a tracking page where students keep a record of contributions and edits. This alone makes it incredibly difficult for a non-contributory member to assume ownership of others' work within the project.

Both of the previous examples relate to the beginning, or assigning, stage of the collaborative project. There are several other practices worth considering at this stage, including: providing or requiring groups to create a project timeline, providing guidelines for tracking communication within the group, and providing examples of what is and is not acceptable collaboration (if applicable to your project). Of all the ideas and suggestions above, the most impactful practice I've implemented has been to provide explicitly clear directions and expectations when assigning a collaborative project in my course. Instances of academic dishonesty have been few and far between and are nearly always a result of failure to follow the explicit directions of "The Roadmap". On the rare occasion that suspected integrity violations are not specifically addressed in the roadmap, they serve to inform my own future practice (i.e. I change "The Roadmap" moving forward).

Photo of a smartphone on a desk next to a person

Academic integrity is a difficult topic of conversation. While every campus is different, most employ plenty of individuals who are asked to have one-on-one conversations with students about academic integrity. Instructors may need to ask a student about a suspicious incident. Staff members may need to interview a student about a potential violation. Conduct board representatives may have to discuss incidents with students. These conversations might happen in person, by email, in a classroom, during a hearing, etc. Most institutions are very intentional in how they go about ensuring academic integrity – how can we be equally intentional when we converse with individuals involved with or affected by potential acts of academic dishonesty?

I arrived at this topic after considering my own experiences working with academic integrity on my campus and what might be worth sharing. My role as it relates to academic integrity is to serve as an intermediary of sorts by investigating incidents, interacting with both students and instructors, explaining policies, and ultimately making a recommendation as to whether a violation occurred based on available evidence. One of my other roles on campus, however, is as an academic advising administrator. I have worked with academic advising in various capacities for my entire career. Academic advisors spend most of their time having conversations. Good advisors, as you might imagine, also spend time reflecting on those conversations to ensure they are meaningful and effective. I can easily recognize how much influence this experience working one-on-one with students in an advising capacity has had on how I approach my academic integrity duties.

If you are someone who routinely engages in one-on-one conversations with students about academic integrity and you were so inclined, you could explore a myriad of resources on developing conversational skills provided directly by NACADA, the largest professional organization for academic advising. You could also explore the adjacent field of academic coaching, or jump right into fields you might already be familiar with such as counseling, education, communication, social work, etc. If you have time to borrow ideas from a profession or discipline that centers on interpersonal communication, it would be worth the effort. However, examining your conversational approach does not have to be a time-consuming endeavor. You might start by taking fifteen minutes to reflect on the conversations you have had so far. Develop more intentionality by asking yourself some basic questions:

  • What is my role as it relates to academic integrity?
  • What am I typically trying to accomplish when I converse with students about a potential act of academic dishonesty?
  • What unintended consequences could result from these conversations?
  • How should I have these conversations (in-person, phone, conference software, etc)?
  • Where should I have these conversations?
  • Is there language I should always include or avoid in my conversations?
  • Should I implement a consistent structure to my conversations?
  • What assumptions, perspectives, or biases do I bring to these conversations?
  • What assumptions, perspectives, or biases might students bring to these conversations?
  • If I do have a gut feeling about whether an act of academic dishonesty has occurred, what if I am wrong?

Precisely which questions you ask yourself may matter less than the process itself. The goal is to make some time to reflect, decide, try it out, and then reflect again. Conversations about academic integrity can be emotional and may have a lasting impact on everyone involved. By examining our own approach, we can ensure that our conversations are intentional and effective.

Picture of Dr. Paul Cronan

Dr. Timothy Paul Cronan was the 2022 recipient of the ICAI Lifetime Achievement Award. He is an internationally known teacher and researcher who also performs a wide variety of service obligations as a professor in the Information Systems Department. He has served as a faculty member since 1979 and has authored many papers and led conference sessions based on academic integrity. He was an early pioneer in recognizing the impacts of academic integrity. He has also published in numerous high-quality journals in the Information Systems field, was a co-founder of the Teaching Center, and has won numerous prestigious awards related to teaching and mentoring during his career. He has developed academic programs and served in a department leadership capacity.

Although he clearly has many obligations, academic integrity has been a cause that he has remained committed to. In 2012, Cronan led a charge to change how academic integrity was handled at the University of Arkansas. Core values that drove the development of the new policy included improving the campus culture, building trust among faculty and students (many violations were not being reported), and ensuring fairness and consistency. At the time, there were no commonly agreed-upon violations or sanctions, so each faculty member was left to handle issues independently. The system that emerged is unique and has been positively received by faculty, staff, and students.

Key tenets of the policy are:

  • Academic Integrity Monitors (AIMs) for each college at the Associate Dean level meet individually with students alleged to have violated the policy and recommend whether a violation occurred.
  • All University Academic Integrity Board (AUAIB) – This formal hearing body reviews any contested allegations and is composed of a faculty representative from each college, a graduate student, and an undergraduate student.
  • Appellate decisions are rendered by the Chancellor and the Provost.
  • A standalone Office of Academic Initiatives and Integrity (OAII) was created, reporting to the Provost’s Office. The OAII is responsible for campus-wide prevention and programming specific to academic integrity initiatives that engage students, staff, and faculty.
  • The policy mandates that faculty report alleged violations of academic dishonesty.
  • A more centralized approach to reporting led to the development of a Sanction Rubric that addresses behaviors consistently across campus and supports the use of educational sanctions.

Cronan did more than lead the thought process behind the new policy. He stepped forward as its public face and led implementation: he spoke at all colleges on campus, reported back the statistics related to the policy, and, five years later, assessed faculty buy-in. He built that buy-in by leading multiple presentations on campus showing the impacts of the system.

As Provost Terry Martin stated, “Over the last ten years the level of buy-in has significantly improved among faculty due to the system’s equitable and efficient nature. Cronan’s leadership was invaluable in terms of assessment, policy design, and coalition building.”

Cronan’s contributions to academic integrity have been invaluable on the University of Arkansas campus, and he has willingly shared information and ideas with other schools. Many lessons can be learned from his contributions.

Full audience with hands raised.

The pandemic served as a catalyst for change around the world and across many different sectors. The educational sector was dramatically affected and required us to rethink our long-standing pedagogies and organizational structures. Just as it seemed we were settling into our new normal, artificial intelligence exploded onto the scene and served to play perhaps even a larger role as a disruptor to our set practices across the educational landscape. The initial panic associated with artificial intelligence is slowly being replaced by an appreciation and understanding of the incredible opportunities we have as educators, researchers, and leaders, to positively impact the educational experience of our students.

Both the pandemic and artificial intelligence have created circumstances that require us to combine our intellectual resources and capacities to evolve and remain relevant in our efforts to support student learning and the work of academia. The 2023 International Center for Academic Integrity (ICAI) Conference afforded us the opportunity to congregate both in person and virtually to share ideas, best practices, current research, and common interests. Rich discussions and networking opportunities resulted from the conference.

Perhaps one of the pearls from the ICAI conference (current and past) was the formation of Consortia organized by geographic location or practice specialty, which allowed attendees to collaborate on academic integrity matters specific to their nuanced interests. A recent change that has enabled just such cooperation and collaboration is the newly developed Canadian National Consortium (ICAI Canada), which originally started as the Canadian Regional Consortium. The original group was founded by three Canadians in 2014 (Amanda McKenzie, Troy Brooks, and Jo Hinchcliffe). The impetus for the redevelopment came from a discussion at Canada Day at the ICAI 2022 conference, where attendees expressed an interest in recognition as a national rather than a regional group, more regular meetings, a more active presence on the ICAI webpage, and a repository for Canadian content.

Terms of reference, governance positions, and terms for these positions were described by a small governance working group, spearheaded by Amanda McKenzie and formed after the 2022 meeting. It is important to note that the governance working group hailed from four different provinces and represented several different higher education roles (faculty, academic integrity specialists, librarians, center for teaching and learning experts, and researchers). The working group membership was formed to ensure a robust and thoughtful approach to the work of growing ICAI Canada.

Canada stretches over 2,500 miles from east to west and 2,300 miles from north to south. In such a large country, it has become important for us to connect, share, and collaborate around academic integrity issues, and it is hoped that ICAI Canada will promote these connections. Currently, eight of our ten provinces hold advisory positions on the board, and we will continue our efforts to ensure all provinces and territories have a voice at the table. The inaugural meeting will take place in the spring of 2023, when we hope to discuss our way forward in keeping with our original purpose to “serve as an education and evidence-informed resource for Canadian universities, colleges, and other educational institutions working to create cultures of integrity. This group aspires to be bilingual in honor of the two official languages in Canada (English and French)”. While we have a lot of work ahead of us, we are energized by the commitment and enthusiasm of our executive, advisory board, and membership. If you would like more information about ICAI Canada or how we worked through and organized our group, please feel free to reach out to any member of the executive board.

Jennie Miron, Chair ()

Angela Clark, Vice-chair ()

Leanne Morrow, Secretary ()

Allyson Miller, Event Coordinator ()

Rachel Gorjup, Communication Coordinator ()

Student group work is intended to support and enhance creativity, productivity, and collaboration between students. The skills associated with successful group work are considered transferable to the workplace and are highly valued by employers across industries (Grizmek et al., 2020). Given the nuances of our new work worlds and the reality that our graduates will likely face complex problems requiring them to navigate and negotiate solutions in teams, group work remains a worthwhile endeavour. But the merits of group work are not limited to student work.

Artificial intelligence has exploded across the world, creating tremendous opportunities and many questions about its ethical use and deployment in different settings. In the educational sector, it has served as a disruptor that is challenging us to approach our teaching and assessment practices differently. In Ontario, Canada, one group of higher education professionals considered how to tackle some of the issues and questions that were unfolding about artificial intelligence in real time. Members of the Academic Integrity Council of Ontario (AICO) came together to create an information sheet for faculty that began to outline strategies and opportunities for artificial intelligence in higher education learning settings, along with practical tips for working with these applications in an ethical manner that supports and protects academic integrity across academic work.

Five AICO members in teaching, leadership, and academic integrity specialist roles came together online through three writing sessions to create a draft document that provided an overview of the issue, a definition of artificial intelligence, stakeholder considerations for the ethical use of artificial intelligence in higher education, discussion of assessment issues, opportunities for the use of artificial intelligence applications, points about the limitations of artificial intelligence, and discourse related to citation/acknowledgment considerations. The draft was then edited and revised by seven other higher education professionals. The resulting document is titled Supporting Academic Integrity: Ethical Uses of Artificial Intelligence in Higher Education Information Sheet and is available through the AICO website under Academic Integrity & Artificial Intelligence. The document has a Creative Commons license, is intended to be a fluid document, and will be updated regularly as artificial intelligence applications continue to evolve.

Completing this work as a group allowed us to tap into each member’s expertise and gave us opportunities to debate, think critically, and work together toward a common effort. It also created stronger networks, relationships, and bonds between the contributing individuals and across the AICO membership, since a concrete piece of work was completed in a collegial manner. The group work and its result (the information sheet) have spurred the creation of other group activities within and across organizations in Ontario. It is important to remember that the very reasons we encourage group work with students are the reasons we should be collaborating in group work ourselves to meet and support academic integrity efforts across our educational landscapes. Successful group work can serve as a launching pad for new and effective groups, as the skills and positive experiences foster the transference of these benefits. The labour associated with creating such a document was less daunting through our group efforts, and this work has reminded all of us of the value of working together on important tasks.

References

Grizmek, V., Kinnamon, E., & Marks, M. B. (2020). Attitudes about classroom group work: How are they impacted by students’ past experiences and major? Journal of Education for Business, 95(7), 439-450.