Labels: content, triage
Code of Conduct
- I have read and agree to the GitHub Docs project's Code of Conduct
What article on docs.github.com is affected?
There is currently no dedicated article on docs.github.com that covers:
- AI behaviour constraints for safeguarding in education
- Non‑reflective logic requirements
- Emotional‑awareness‑without‑emotional‑reflection patterns
- Guidance for developers building AI tools for K12 environments
This request relates broadly to the following areas:
- https://docs.github.com/en/actions
- https://docs.github.com/en/copilot
- https://docs.github.com/en/rest
- https://docs.github.com/en/education
The gap appears across multiple sections rather than a single article.
What changes are you suggesting?
I am proposing the addition of new documentation (or an expansion of existing AI‑related documentation) to cover safeguarding‑first requirements for AI systems used in K12 education. Specifically:
- A section explaining “non-reflective logic” for AI in education (a minimal sketch follows this list), covering:
  - Emotional awareness allowed
  - Emotional reflection prohibited
  - No mirroring of student emotions
  - No companionship or attachment language
  - No personality inference
  - Stateless operation
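For illustration, here is a minimal sketch of the kind of pattern such a section could show, assuming a generic chat-style completion API. The prompt text and the `build_request` helper are hypothetical, not an existing GitHub or Copilot interface:

```python
# Hypothetical sketch: a stateless, non-reflective prompt pattern.
# Nothing here corresponds to an existing GitHub or Copilot API.

NON_REFLECTIVE_SYSTEM_PROMPT = """\
You are a tutoring assistant for K12 students.
- You may acknowledge that a task seems frustrating or confusing (awareness).
- You must not mirror, reciprocate, or amplify a student's emotions (reflection).
- You must not use companionship or attachment language such as "I'm your friend".
- You must not infer or comment on a student's personality.
"""

def build_request(student_message: str) -> list[dict]:
    """Build a single-turn, stateless request: no earlier turns are replayed,
    so the model cannot accumulate an emotional picture of the student."""
    return [
        {"role": "system", "content": NON_REFLECTIVE_SYSTEM_PROMPT},
        {"role": "user", "content": student_message},
    ]
```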
- Clear developer guidance on how to implement (see the sketch after this list):
  - Guardrails
  - Prompting patterns
  - Safety boundaries
  - Role-bounded behaviour
  - Compliance-friendly defaults
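Again purely as a sketch of what role-bounded behaviour and compliance-friendly defaults might look like. All names here are hypothetical; `model_call` and `violates` stand in for whichever model client and content filter a project actually uses:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical compliance-friendly defaults: the safest behaviour is the
# default, and anything riskier must be enabled deliberately.
@dataclass(frozen=True)
class SafeguardingDefaults:
    allow_emotional_awareness: bool = True    # "that can feel frustrating" is fine
    allow_emotional_reflection: bool = False  # never mirror feelings back
    allow_companionship_language: bool = False
    persist_conversation_state: bool = False  # stateless unless explicitly enabled

NEUTRAL_FALLBACK = (
    "I can help with your schoolwork. If you're feeling upset, "
    "please talk to a teacher or another trusted adult."
)

def guarded_reply(
    student_message: str,
    model_call: Callable[[str], str],
    violates: Callable[[str], bool],
) -> str:
    """Role-bounded reply: generate a draft, then substitute a neutral,
    signposting response if the filter flags reflective language."""
    draft = model_call(student_message)
    return NEUTRAL_FALLBACK if violates(draft) else draft
```

The point of the frozen defaults record is that the safe behaviour ships as the default and anything riskier has to be opted into explicitly.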
- Example patterns, including:
  - Allowed vs. disallowed responses
  - Safeguarding-first prompt structures
  - Code examples for filtering reflective logic (see the sketch after this list)
- A short policy note explaining why these constraints matter in K12 environments.
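For the filtering example, a deliberately small sketch of what “filtering reflective logic” could mean in practice. The pattern list is illustrative only; a real deployment would need a vetted, tested list (or a trained classifier) rather than a handful of regexes:

```python
import re

# Hypothetical phrases that signal emotional reflection or attachment
# language; illustrative only, not a complete or vetted list.
REFLECTIVE_PATTERNS = [
    r"\bI feel\b",
    r"\bI('m| am) (so )?(sad|happy|excited|proud of you)\b",
    r"\byour friend\b",
    r"\bI('ll| will) always be here\b",
    r"\bwe're in this together\b",
]
_COMPILED = [re.compile(p, re.IGNORECASE) for p in REFLECTIVE_PATTERNS]

def is_non_reflective(reply: str) -> bool:
    """Return True if a draft reply contains no reflective or attachment
    language, so it is safe to show a student."""
    return not any(p.search(reply) for p in _COMPILED)

# Allowed: acknowledges emotion without mirroring it.
assert is_non_reflective("That problem can feel frustrating. Let's break it into steps.")
# Disallowed: mirrors emotion and uses attachment language.
assert not is_non_reflective("I'm so proud of you! I'll always be here as your friend.")
```

In the guardrail wrapper sketched earlier, this is the kind of predicate the `violates` hook would wrap, for example `violates=lambda reply: not is_non_reflective(reply)`.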
Why should the docs be changed?
AI is increasingly being integrated into educational tools, including those built and deployed through GitHub workflows, GitHub Actions, and GitHub‑hosted AI services. Developers need clear, accessible guidance on:
- Safeguarding expectations
- Emotional safety boundaries
- How to avoid reflective logic
- How to prevent AI from forming emotional reciprocity with students
Without this, developers may unintentionally deploy AI behaviour that is inappropriate or unsafe for children.
This documentation would support:
- EdTech developers
- Open‑source contributors
- Schools using GitHub for AI projects
- Organisations building safeguarding‑first AI tools
Expected outcome
- A new or updated documentation page that outlines best‑practice guidelines for building safeguarding‑first AI systems.
- Clear examples that developers can follow.
- Increased safety and consistency across AI tools built using GitHub’s ecosystem.
- Better alignment with K12 safeguarding expectations and regulatory requirements.
Additional information
No response