Agent Indoctrination – AI Safety, Bias, Fairness, Ethics & Compliance Testing Framework 🚀
Updated Nov 25, 2025 · Python
An auditing framework for evaluating LLMs in local-government reporting. It compares AI-generated headlines and topic prioritization against professional journalistic standards. Submitted to CHI 2026.
Recon-Level Audit of Claude 4 – Obfuscated, Ethical & Technically Precise
🐙 An ethical red-team audit of Claude 4 with clear introspection and policy visibility. Includes JSON data and Python tooling; Mermaid diagrams map model behavior.