Aligning AI with ICAI’s Fundamental Values

11/24/2025

by Camilla Roberts

Image credit: Copilot, edited in Canva

Artificial Intelligence (AI) is reshaping education at an unprecedented pace. From adaptive learning platforms and generative writing assistants to automated grading and predictive advising, AI promises efficiency, personalization, and accessibility. Yet as Spider-Man’s Uncle Ben reminds us, “With great power comes great responsibility.” How can we ensure that AI enhances learning while remaining true to ICAI’s six fundamental values of academic integrity?

I assume readers of this blog are familiar with the six fundamental values of academic integrity: honesty, trust, fairness, respect, responsibility, and courage. Broadening how we think and talk about these values helps guide our decisions about using AI ethically and effectively in our classrooms, so that educators can take advantage of new technology while still protecting the integrity that makes meaningful learning possible.

Honesty

Honesty is the foundation of academic integrity. As AI tools become part of daily academic life, transparency about how they are used and how they influence teaching and learning is essential.

Students deserve to know when AI is involved in grading, evaluating, or shaping their educational experiences. Faculty also need clear expectations regarding when student use of AI is permitted and what constitutes misuse. Transparency reduces confusion and fosters integrity.

Example:
If an essay is partially or fully evaluated by an AI scoring system, the instructor discloses this to students and explains the criteria the system uses.

Strategies:

  • Publish clear guidelines on where AI is used, including grading, advising, and content generation.
  • Require students to disclose AI use in coursework.
  • Provide understandable summaries of algorithmic logic behind AI-based evaluation tools.

By embracing transparency, institutions strengthen honesty and create environments where AI supports rather than obscures academic work.

Trust

Trust grows when institutions and educators demonstrate reliability, fairness, and clarity. When AI enters the learning environment, trust requires technological dependability and transparent decision-making.

Example:
A faculty member uses an AI-driven tutoring tool in their course and openly shares with students how the tool works, what its limitations are, and how their data is handled. Students feel comfortable using it because expectations and processes are clear.

Strategies:

  • Conduct regular accuracy and bias audits of AI tools.
  • Communicate results openly and accessibly to faculty, staff, and students.
  • Offer training so faculty and students understand how AI influences learning and assessment.
  • Set and follow consistent expectations for AI use in courses.

AI should never be a confusing or hidden process. When institutions build trust, they encourage collaboration, openness, and ethical engagement with technology.

Fairness

Fairness requires equitable treatment, transparent expectations, and consistency. AI systems can carry hidden biases based on how they were trained or designed. Without careful oversight, these tools can unintentionally disadvantage specific groups or amplify inequities.

Example:
An adaptive learning platform trained primarily on native English speakers evaluates non-native speakers as lower performing, which limits their access to advanced modules.

Strategies:

  • Conduct formal bias audits for all educational AI systems.
  • Diversify datasets used to train algorithms.
  • Ensure equitable access to AI tools across socioeconomic backgrounds.
  • Provide alternative ways for students to demonstrate learning that are not based solely on AI-generated evaluations.

Fairness also means designing assignments and assessments that measure what students are expected to learn, even in an AI-enabled environment.

Respect

Respect in education involves valuing diverse voices, maintaining meaningful interaction, and honoring the work of others. AI can enrich learning, but it cannot replace the human elements that define traditional scholarship.

Example:
AI generates personalized study plans or feedback suggestions, but instructors maintain full authority over evaluation, curriculum decisions, and final grades.

Strategies:

  • Treat AI as a supportive tool rather than a replacement for human expertise.
  • Protect student privacy by following data protection standards.
  • Teach students to respect intellectual contributions by citing AI-assisted work when appropriate.
  • Encourage active engagement, reflection, and discussion even when AI offers easier pathways.

Respect ensures that learners remain central to education and that technology remains a tool rather than a substitute for authentic thinking.

Responsibility

Responsibility requires that individuals and institutions make informed and ethical choices about AI. This includes understanding AI’s limitations, preventing misuse, and ensuring that tools support learning.

Example:
A college forms an AI in Education committee that develops responsible-use guidelines, offers training, and addresses concerns around misuse or inequitable implementation.

Strategies:

  • Develop and regularly update clear policies on acceptable AI use.
  • Educate students about academic integrity in the age of AI.
  • Monitor for misconduct such as AI-generated plagiarism or overdependence.
  • Encourage students to use AI tools ethically, with proper attribution and awareness of risks.

Responsibility is both individual and collective. Students, faculty, and administrators must work together to ensure that AI supports academic integrity rather than undermining it.

Courage

Courage is essential when navigating rapidly evolving technologies. Adopting AI ethically means making principled choices, even when they are difficult or unfamiliar.

Example:
An instructor discovers that many students are quietly relying on an AI writing assistant in ways that undermine the assignment’s learning goals. Rather than ignoring the issue, the instructor redesigns the assignment, speaks openly with the class about the concerns, and invites students to help create clearer guidelines for AI use.

Strategies:

  • Encourage faculty, staff, and students to speak openly when AI practices conflict with institutional values.
  • Reward integrity-driven decisions even when they require extra time or resources.
  • Pilot new AI tools gradually to identify ethical or equity concerns before wide implementation.
  • Promote a campus culture where thoughtful experimentation and honest feedback are welcomed.

Courage ensures that innovation does not outpace ethical reflection and that institutions remain committed to integrity.

AI is here to stay, but academic integrity must lead the way. Faculty, students, administrators, and institutions share the responsibility to ensure that technology enhances learning and does not erode trust.

Start today:

  • Review your institution’s AI policies.
  • Advocate for transparency and fairness in educational technology tools.
  • Engage your community in conversations about ethical AI integration.
  • Align classroom and institutional practices with ICAI’s Fundamental Values.

When we lead with the fundamental values, both education and integrity can be strengthened rather than compromised.

Reference

International Center for Academic Integrity [ICAI]. (2021). The fundamental values of academic integrity (3rd ed.). www.academicintegrity.org/the-fundamental-values-of-academic-integrity


 

Camilla Roberts is President Emeritus for ICAI and serves as the Director of the Honor and Integrity System at Kansas State University.

 

The authors' views are their own.

Thank you for being a member of ICAI. Not a member of ICAI yet? Check out the benefits of membership and consider joining us by visiting our membership page. Be part of something great!

 
