11/10/2025
Who’s Responsible for Defining Responsible Use of Artificial Intelligence on a College Campus?
by Adrian Anderson
Image credit: Author using ChatGPT
Higher education faculty and staff on college campuses around the world are facing a new dilemma as colleges and universities begin to embrace the use of artificial intelligence with open arms and open wallets. Institutions of higher learning are entering into massive multimillion-dollar partnerships with artificial intelligence companies such as OpenAI, creator of ChatGPT (Arizona State University, 2025), and Microsoft, creator of Copilot. In other, smaller-scale but equally impactful examples, individual units within university or college systems are purchasing licensing agreements for AI-powered products like Grammarly (Grammarly, Inc., 2025). These partnerships likely serve three goals: data privacy, operational efficiency, and innovation.
What I and others have noticed more than the named goals above is a glaring omission: a renewed commitment to academic integrity in the wake of these new partnerships and initiatives. Announcements of these partnerships receive a lot of fanfare (such as the announcement of the University of South Carolina's partnership with OpenAI in August 2025), but it is the other AI, academic integrity, that feels like an afterthought in some of these instances. It may well be one, but there is still time to fix that.
In my most recent role as Assistant Director for Academic Integrity, I was responsible for hearing cases of alleged violations of the Honor Code at the University of South Carolina. I have heard from both faculty and students that they feel an internal conflict over the adoption of artificial intelligence by colleges. Students have asked me how the university can use ChatGPT, or encourage its use, and yet also send students to offices like mine for violating the Honor Code. Faculty share this sentiment, and in some cases it has kept them from reporting instances of possible academic misconduct because they are unsure what jurisdiction an office of academic integrity has at an institution where artificial intelligence use is being encouraged from its highest offices.
This raises a big question: who is responsible for defining responsible use of artificial intelligence on a college campus?
The answer, like most things involving artificial intelligence, is nuanced: I believe this responsibility is shared among faculty, students, and staff.
Faculty
For decades, higher education has thrived thanks to university faculty's steadfast commitment to fostering an environment ripe for learning. What is interesting in my conversations with faculty regarding the use of artificial intelligence is their uncertainty about what their role should be. Some professors, as the administrators seeking out these contracts would surely hope, see their role as preparing their students for careers in which artificial intelligence skills will be in high demand. These faculty are asking questions about integrating AI into their coursework, while faculty on the other end of the spectrum see themselves as the ultimate gatekeepers of academic integrity. The latter were quick to create harsh syllabus statements intricately detailing how and why artificial intelligence was not to be used in their courses, likening any use of artificial intelligence to plagiarism.
Faculty have the trust of their students, which can make them the best positioned to educate on responsible use of artificial intelligence, but it is also a barrier. Using artificial intelligence in lesson planning or in the creation of course materials opens a professor up to new and often unknown issues that they will experience in real time. Being transparent with students about the successes and challenges of incorporating AI could prompt students to think about how they might govern themselves when debating whether to use something AI-generated. Artificial intelligence is still unpredictable and potentially inaccurate, which gives some faculty pause about implementing it broadly.
Students
Mutual expectations shared among students may be one of the most promising avenues for rapid adoption and social norming around what is deemed responsible and irresponsible use of artificial intelligence. The University of South Carolina's 'Carolina Experience' office has hosted a Mutual Expectations: Artificial Intelligence on College Campus program for the past three years. During these programs, students, faculty, and staff come together and discuss, from their own perspectives, how they feel artificial intelligence should be used in campus life. As a facilitator of this program, I have noticed a difference from year to year in students' perceptions of the positive uses of artificial intelligence in their coursework and daily life. As AI competency increases, more students have expressed a desire to use AI for assistance in their coursework and less concern about being told that their use of artificial intelligence is cheating. As they think critically, they have come to recognize how AI can be an assistant in learning rather than a replacement for it. Students' mindsets have evolved from essay generation to study guide creation. There does still seem to be hesitation among students to share how they are using it, because in some cases they fear how their professors would respond, particularly when those faculty have taken hard stances on AI as cheating but have not found ways to teach students how AI could be beneficial or acceptable in their course or discipline.
Staff
As a staff member myself, I hold the strongest opinions about a duty to educate outside the classroom, and therefore a duty to educate on responsible use of artificial intelligence; however, how a staff member is received on this topic depends very much on their specific role. Career center staff discussing responsible AI use with students may be received differently than a member of the Office of Academic Integrity or the advisor to a digital media club.
A unique opportunity is budding here: offices responsible for information technology have a chance to take the lead in advancing artificial intelligence education.
I know this firsthand, as I have recently accepted a new role within IT as an Artificial Intelligence Educator. This role is the first of its kind at the University of South Carolina, and it is promising. Most importantly, it is an example of a university seeing an area of need and elevating a group of people poised to be an excellent and trusted source of information on evolving technology. IT is not traditionally seen as an office or department of educators; rather, IT staff are seen as engineers, researchers, and technical problem-solvers. While all of that is true, this group likely holds a wealth of knowledge on the very topic the groups above are desperately seeking guidance on.
The argument over who is responsible for defining responsible use of artificial intelligence will likely continue for years to come; however, I believe it is a coalition of the willing across college campuses worldwide that will need to work together to get this done right. Institutions will likely take this on, and many have already been confronting it from multiple angles. The role of AI Educator, official or otherwise, could exist as a standalone position in IT or as an informal one held by someone in Student Life. It is my goal as a lifelong learner, educator, and genuinely curious person to help in any way I can in this effort, and my hope is that others who find themselves in a position of influence use that influence to participate and share what responsible AI looks like. The definition will only be as good as those who have contributed to it.
References
Arizona State University. (n.d.). ASU and OpenAI collaboration. Retrieved September 21, 2025, from https://ai.asu.edu/openAI
Grammarly, Inc. (2025). Grammarly for Education. Retrieved September 24, 2025, from https://www.grammarly.com/edu
OpenAI. (2025). Responsible use of AI [Digital illustration]. ChatGPT. https://chat.openai.com/
University of South Carolina, Carolina Experience. (n.d.). Mutual expectations. Retrieved September 21, 2025, from https://sc.edu/about/offices_and_divisions/carolina-experience/mutual-expectations/
University of South Carolina, Office of the Provost. (2025, August 11). A message from Provost Fitzpatrick regarding OpenAI. Retrieved September 21, 2025, from https://sc.edu/about/offices_and_divisions/provost/news_events/provost_communications/fitzpatrick-open-ai.php
Adrian Anderson, M.A., is an Artificial Intelligence Educator at the University of South Carolina, where he develops AI literacy programs and supports campus-wide initiatives promoting ethical and responsible technology use. Formerly the Assistant Director of Academic Integrity at USC, he brings deep experience in student conduct, integrity education, and faculty engagement to his current work helping institutions navigate the evolving intersection of AI and academic integrity.

