“And now, Margaret is here to talk about everyone’s favorite topic, academic integrity.” This was the introduction I received at a recent meeting of senior administrators at my university.
There was nothing unusual in these words. Academic integrity is widely viewed by faculty, administrators, students and parents as a distasteful problem, something we would wish away if we could. I admit there are moments I wish this myself.
But after more than a decade overseeing academic integrity policy and case management in collaboration with wonderful colleagues, I have arrived at a more radical and more practical conclusion: The breakneck expansion of generative artificial intelligence offers a rare opportunity – and an ethical imperative – to radically transform the approach most American colleges and universities take to academic integrity.
This is not a one-size-fits-all proposition. Rather, it is a call for institutions to evaluate the resources they devote to promoting academic integrity and to ask whether they are using these resources as effectively as possible for what we at Syracuse University have dubbed Teaching and Learning in the Age of Artificial Intelligence.
Why would such a (re)evaluation be necessary? What would it look like?
Let’s consider the experience of a typical first-year college student who arrived on campus last month. Many such students are taking at least one course that relies exclusively on a single textbook. Said text likely excludes references entirely (think calculus or chemistry) or buries them in the final pages before the index (e.g., sociology or psychology). In many courses without textbooks, first-semester students encounter assignments focused on individual reflection and expression through journal or personal essay writing (no references needed); in still other courses, assignments involve analysis of a prescribed set of readings and so do not require citation. All this said, there is a good chance that this same student will be assigned at least one research paper in which in-text citation, paraphrasing, summarizing and a reference list are standard requirements. Their professor may explain that these requirements are expected in academic writing. The professor may devote class time to discussion of appropriate use of sources and even introduce students to a required citation format, such as MLA or APA.
From the perspective of a faculty member steeped in teaching and perhaps actively engaged in research, this should suffice.
But a student – especially a first-year student – may be mystified, all the more so in the era of artificial intelligence (AI). Even before generative AI landed in our inboxes and search engines last winter, many students were confused by faculty members’ seeming obsession with citation. All too often, students interpreted training in citation styles as evidence that faculty cared about formatting and punctuation rather than the basic principle: conveying to a reader (or viewer or listener) that you, the author (or presenter or speaker), have drawn upon another person’s or entity’s ideas or creative work, and providing sufficient detail to identify and review this source.
With the arrival of AI, even experienced college students are genuinely puzzled by the widely varied AI expectations they encounter not only across different courses taught by different faculty but also across assignments and exams in the same course.
“Hello Professor,” a student wrote in a recent email communication about academic integrity. “In my writing class this semester we learned about how AI can be a helpful tool when writing, such as to help outline your thoughts… I take academic integrity very seriously, and I was not aware this use of AI was prohibited in your course.”
“I’ve used Grammarly for years,” another student told me. “Suddenly, GrammarlyGO [a generative AI tool] showed up inside my Grammarly account. It kind of ambushed me.”
This puzzlement isn’t surprising. It’s all but impossible these days to open a browser without encountering an AI product on offer or one that’s already built in. So it should not come as any surprise that members and institutions of the International Center for Academic Integrity (ICAI) are grappling with how to respond.
More or better enforcement is not a promising approach. As Turnitin acknowledges on its website: “Our AI writing detection model may not always be accurate (it may misidentify both human and AI-generated text) so it should not be used as the sole basis for adverse actions against a student. It takes further scrutiny and human judgment in conjunction with an organization's application of its specific academic policies to determine whether any academic misconduct has occurred.”
The risk of false negatives and false positives raises concerns about fairness in the enforcement of academic integrity policies. But ethical concerns over false positives are especially troubling because detection systems and individual faculty may be more likely to misidentify simple or unsophisticated English as AI-generated, according to a published analysis by Stanford faculty and graduate students (Liang et al., 2023). That could put international students who are not native English speakers, as well as native speakers who attended less well-resourced high schools – including many first-generation, low-income and under-represented minority students – at greater risk of being falsely reported.
To be clear, I am not calling for an end to reporting suspected academic integrity violations. Rather, I believe it is time to evaluate what share of our institutional resources is devoted to enforcement and what share to education – and for institutions with limited resources to consider whether it makes sense to shift more toward education.
We need to make a case to students for the value of our academic culture. This means explaining that academic research is distinctive in that it prizes tracing the origin of the ideas that made new research possible almost as much as the new research findings themselves. This emphasis on past research and on potential future research is what made iPhones and COVID-19 vaccines possible, along with countless humanistic and social science discoveries. This focus on the research arc is what sets academic writing apart from news articles, social media, essays, inter-office memos, and blogs like this one, even though all these forms of writing are widespread in higher education, including among the reading and writing assignments college students encounter daily.
Suspected academic integrity violations hold up a mirror to what can be a cavernous gap between student and faculty understanding of academic expectations. The good news is that we already have the tool we need to explain these expectations to students: course-, assignment-, and exam-specific learning objectives. Academic expectations vary across courses and sometimes across assignments and exams within the same course because the learning objectives for those courses and assessments differ. Professor Z requires her students to use an AI tool to draft their first essay because the learning objective of this assignment entails evaluating bias in AI-generated writing. She prohibits her students from using AI tools to craft their second essay because it is designed to help students develop their voice as authors.
At Syracuse, our student academic support center and faculty center for teaching and learning began collaborating last spring to encourage faculty to clearly explain the nature and rationale of their academic integrity expectations to students, and to encourage students to ask questions when expectations are unclear to them. This effort includes broadened syllabus language conveying students’ responsibility for inquiring about the permissibility of using AI tools, as well as text and video guidance to support faculty in considering when and how to incorporate, limit or prohibit use of AI tools in alignment with their course learning objectives. We have established a related faculty working group and are partnering with stakeholders across campus this academic year as we continue evaluating how we can best use our resources to support teaching and learning in the age of artificial intelligence.
References:
Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. arXiv preprint arXiv:2304.02819.