
Safeguarding First in Education: Request for Documentation on Non‑Reflective Logic Requirements for AI #42094

@muzzammilkatelia-eng

Description


What article on docs.github.com is affected?

There is currently no dedicated article on docs.github.com that covers safeguarding‑first requirements for AI systems used in K–12 education.

What changes are you suggesting?

I am proposing the addition of new documentation (or an expansion of existing AI‑related documentation) to cover safeguarding‑first requirements for AI systems used in K–12 education. Specifically:

  1. A section explaining “non‑reflective logic” for AI in education
    This includes:
  • Emotional awareness allowed
  • Emotional reflection prohibited
  • No mirroring of student emotions
  • No companionship or attachment language
  • No personality inference
  • Stateless operation
  2. Clear developer guidance
    How to implement:
  • Guardrails
  • Prompting patterns
  • Safety boundaries
  • Role‑bounded behaviour
  • Compliance‑friendly defaults
  3. Example patterns
    Including:
  • Allowed vs. disallowed responses
  • Safeguarding‑first prompt structures
  • Code examples for filtering reflective logic
  4. A short policy note
    Explaining why these constraints matter in K–12 environments.
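To make the request concrete, a filter for reflective logic might look like the following minimal sketch. The phrase patterns and the function name are illustrative assumptions for this issue, not an established policy list or an existing GitHub API.

```python
import re

# Hypothetical patterns illustrating "reflective" or attachment
# language that a safeguarding-first filter could block. This list
# is an assumption for illustration only.
REFLECTIVE_PATTERNS = [
    r"\bI feel\b",
    r"\bI understand how you feel\b",
    r"\bwe're friends\b",
    r"\byou can always talk to me\b",
]

def violates_non_reflective_policy(response: str) -> bool:
    """Return True if a draft AI response contains reflective or
    attachment language that should be blocked before delivery."""
    return any(re.search(p, response, re.IGNORECASE)
               for p in REFLECTIVE_PATTERNS)

# Acknowledging an emotion without mirroring it passes the filter...
assert not violates_non_reflective_policy(
    "It sounds like this topic is frustrating. Let's break it into steps."
)
# ...while companionship language is flagged.
assert violates_non_reflective_policy("Don't worry, we're friends.")
```

The documentation could pair a sketch like this with the allowed vs. disallowed response examples requested in item 3.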

Why should the docs be changed?
AI is increasingly being integrated into educational tools, including those built and deployed through GitHub workflows, GitHub Actions, and GitHub‑hosted AI services. Developers need clear, accessible guidance on:

  • Safeguarding expectations
  • Emotional safety boundaries
  • How to avoid reflective logic
  • How to prevent AI from forming emotional reciprocity with students

Without this guidance, developers may unintentionally deploy AI behaviour that is inappropriate or unsafe for children. This documentation would support:
  • EdTech developers
  • Open‑source contributors
  • Schools using GitHub for AI projects
  • Organisations building safeguarding‑first AI tools

Expected outcome

  • A new or updated documentation page that outlines best‑practice guidelines for building safeguarding‑first AI systems.
  • Clear examples that developers can follow.
  • Increased safety and consistency across AI tools built using GitHub’s ecosystem.
  • Better alignment with K–12 safeguarding expectations and regulatory requirements.

Additional information

No response
