By Alexander Johnson, Chief Product Officer, UnconstrainED

Artificial Intelligence (AI) is becoming increasingly integrated into education, bringing exciting possibilities while raising important questions about ethics and transparency. As educators, we are on the front lines, tasked not only with harnessing the power of AI but also with guiding our students in its responsible and ethical use.

At UnconstrainED, we believe that technology should empower, not complicate, the educational process. This belief sparked the idea for the AI Usage Labeler, a Chrome extension designed to facilitate transparency in AI use within academic settings.

The Why: Addressing a Critical Need

The idea for the AI Usage Labeler stemmed from a fundamental challenge faced by educational institutions worldwide: How can we promote accountability and clarity in the use of AI tools in education? Research indicates that building trust through transparency is essential for successful AI integration in educational settings (Akgun & Greenhow, 2022).

As AI tools grow in popularity, students are using AI to assist with research, write papers, and create content. While these tools have tremendous potential to improve student outcomes, they need to be used responsibly and ethically. Modern frameworks for AI in education emphasize keeping human judgment at the center of decision-making processes (Holmes & Porayska-Pomsta, 2022).

Our Goal: Empowering Transparency and Trust

Our primary goal in developing the AI Usage Labeler was to empower both students and educators to engage with AI in an open and honest manner. Recent studies highlight the importance of having AI systems that are inspectable and explainable, with clear human oversight (UNESCO, 2021).

With this tool, we aim to:

  1. Promote Ethical Use: Ensure students use AI tools responsibly
  2. Ensure Compliance: Help educators enforce AI usage guidelines easily
  3. Encourage Honesty: Create a space that fosters academic integrity
  4. Streamline Workflow: Simplify the process of labeling AI use in academic work

Key Features of the AI Usage Labeler

The AI Usage Labeler has been designed with flexibility and ease of use in mind, incorporating best practices from established frameworks for ethical AI in education (ISTE, 2024):

Customizable Levels

  • Schools can define their own AI usage levels
  • Supports simple stoplight systems (red/yellow/green) or more nuanced approaches
  • Aligns with existing school policies and visual guides
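As an illustration, a stoplight-style configuration might look like the sketch below. The schema, field names, and helper function are hypothetical assumptions for illustration, not the extension's actual format:

```javascript
// Hypothetical stoplight-style AI usage levels (illustrative schema only,
// not the AI Usage Labeler's real configuration format).
const usageLevels = [
  { id: "red", label: "No AI use permitted", color: "#d32f2f" },
  { id: "yellow", label: "AI assistance allowed with disclosure", color: "#fbc02d" },
  { id: "green", label: "Open AI use encouraged", color: "#388e3c" },
];

// Look up a level by its identifier; returns undefined if the id is unknown.
function findLevel(levels, id) {
  return levels.find((level) => level.id === id);
}
```

A school wanting a more nuanced rubric could simply add entries to the array; nothing in this shape limits it to three levels.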

Template Support

  • Save commonly used tool/description combinations
  • Streamline the labeling process for repeated use cases

Policy Export & Sync

  • Export custom policies as JSON files
  • Enable school or district-wide settings through central policy servers
  • Maintain consistency across educational institutions
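To make the export idea concrete, here is a minimal sketch of a policy round-tripping through JSON serialization. The policy structure and function names are illustrative assumptions, not the extension's actual export format:

```javascript
// Hypothetical policy object (illustrative fields only).
const policy = {
  school: "Example High School",
  version: 1,
  levels: ["red", "yellow", "green"],
};

// Serialize the policy to a JSON string suitable for saving as a .json file
// or publishing on a central policy server.
function exportPolicy(p) {
  return JSON.stringify(p, null, 2);
}

// Parse a previously exported policy back into an object.
function importPolicy(json) {
  return JSON.parse(json);
}
```

Because the exported file is plain JSON, a district could host it at a single URL and have every installation pull from it, which is one way central synchronization can keep settings consistent.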

Alignment with Educational Best Practices

Our development process was guided by several key principles for ethical AI in education:

Human-Centered Design

The tool ensures that AI usage starts with human intention and ends with human reflection and validation, following established guidelines for educational technology development (TeachAI, 2023).

Transparency and Trust

The extension embodies core principles for ethical AI use, including:

  • Clear notice and explanation
  • Human alternatives and considerations
  • Strong data privacy and security measures

Academic Integrity

The tool supports the growing need for clear AI citation practices in academic work, as highlighted by major style guides including MLA, APA, and Chicago Style.

Real-World Applications

Schools are using the AI Usage Labeler in various ways:

For Teachers

  • Label assignments with expected AI usage levels
  • Model transparent AI use in lesson planning
  • Facilitate discussions about appropriate AI use

For Students

  • Clearly disclose AI use in assignments
  • Develop habits of ethical technology use
  • Learn appropriate AI citation practices

For Administrators

  • Implement consistent AI policies
  • Track and monitor AI use patterns
  • Support professional development initiatives

The Development Journey

In creating this tool, we embraced AI-powered development while maintaining our commitment to transparency. Our process utilized:

  • Claude for complex code analysis
  • Gemini Advanced for rapid prototyping
  • OpenAI for testing and refinement
  • Cursor as our AI-enhanced IDE

This approach exemplifies how AI can be used ethically and effectively in development while maintaining human oversight and decision-making.

We will be sharing more details about the development journey, and the influence that AI is having on product creation, in a future blog post.

Looking Ahead

While the AI Usage Labeler is feature-complete, we continue to refine aspects such as central policy synchronization. We’re particularly focused on:

  • Enhancing policy distribution mechanisms
  • Improving integration capabilities
  • Responding to educator feedback
  • Supporting emerging AI use cases

If you would like to try the extension, you can install it from the Chrome Web Store.

If you are an IT admin or the person responsible for policy alignment within your organization, send me an email and I’ll add you to the database so you can centrally manage and sync your school’s policy.

Email: alex@unconstrained.work

Walking the Talk: AI Usage in This Blog Post

In the spirit of transparency and ethical AI use that we advocate for, I want to acknowledge that this blog post was developed with the assistance of AI tools, specifically Anthropic’s Claude and Google’s Gemini. Through an iterative process, these AI assistants helped with structuring the content, refining the language, and ensuring comprehensive coverage of the topic while maintaining alignment with established frameworks for ethical AI use in education. Claude was particularly helpful in expanding the initial draft and incorporating academic citations, while Gemini assisted with refining and focusing the content.

This kind of transparent disclosure is exactly what we hope to encourage in educational settings – acknowledging AI assistance while maintaining human oversight and decision-making throughout the creation process.


References:

Akgun, S., & Greenhow, C. (2022). Artificial intelligence in education: Addressing ethical challenges in K-12 settings. AI and Ethics, 2, 431–440.

Holmes, W., & Porayska-Pomsta, K. (Eds.). (2022). The ethics of artificial intelligence in education. Routledge.

International Society for Technology in Education (ISTE). (2024). AI in Education Guidelines.

TeachAI. (2023). AI Guidance for Schools Toolkit.

UNESCO. (2021). AI and education: Guidance for policy-makers.
