This app generates Model Risk Management (MRM) reports for internal review using a mock RAG pipeline over a knowledge base and AI21 Maestro’s Validated Output to enforce report requirements.
References: AI21 Maestro Validated Output quick start: https://docs.ai21.com/docs/instruction-following-module.
## Features

- Uses AI21 Maestro's Validated Output to validate and fix report outputs against explicit requirements
- Streamlit UI for interactive input of model metadata and report options
- Mock RAG (TF-IDF) over a local `data/knowledge_base` folder
- Per-requirement scoring surfaced in the UI
- Download the report as Markdown
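The Markdown download is just the assembled report string handed to Streamlit. A minimal sketch of how such a report might be assembled (the function name and layout here are illustrative, not the app's actual template):

```python
def report_markdown(meta: dict, body: str) -> str:
    """Assemble a downloadable Markdown report.
    Illustrative layout only; the app's real template may differ."""
    header = (
        f"# MRM Report: {meta['name']}\n\n"
        f"- Owner: {meta['owner']}\n"
        f"- Purpose: {meta['purpose']}\n"
    )
    return header + "\n" + body
```

In the UI, this string can be passed directly to `st.download_button` with `file_name="mrm_report.md"` and `mime="text/markdown"`.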
## Prerequisites

- Python 3.10+
- An AI21 API key with access to Maestro (set `AI21_API_KEY` in your environment)
## Setup

- Clone/open this repo.
- Create and activate a virtualenv (recommended).
- Install dependencies: `pip install -r requirements.txt`
- Configure environment: `cp .env.example .env` (edit `.env` and set `AI21_API_KEY=...`)
- Run the app: `streamlit run app.py` and open the provided local URL in your browser.
## How it works

- The knowledge base under `data/knowledge_base` is indexed with TF-IDF for simple retrieval.
- You provide model metadata (name, owner, purpose, algorithms, data sources, etc.).
- The app retrieves the most relevant KB chunks and builds an instruction for the LLM.
- It calls AI21 Maestro's Validated Output with explicit requirements (structure, tone, content constraints, length, references) and a budget setting.
- The validated output and a per-requirement score summary are displayed.
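The retrieval step above can be sketched in a few lines. The app most likely uses a library vectorizer (an assumption), but a dependency-free version shows the idea: each chunk becomes a term-frequency vector weighted by inverse document frequency, and the top-k chunks by cosine similarity to the query are returned.

```python
import math
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z0-9]+", text.lower())

def tfidf_rank(query: str, chunks: list[str], top_k: int = 3) -> list[str]:
    """Rank knowledge-base chunks by TF-IDF cosine similarity to the query."""
    docs = [Counter(tokenize(c)) for c in chunks]
    n = len(docs)
    # Smoothed inverse document frequency per term.
    idf = {t: math.log(n / (1 + sum(1 for d in docs if t in d))) + 1
           for d in docs for t in d}

    def vec(counts: Counter) -> dict[str, float]:
        return {t: f * idf.get(t, 0.0) for t, f in counts.items()}

    def cos(a: dict, b: dict) -> float:
        dot = sum(a[t] * b.get(t, 0.0) for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    q = vec(Counter(tokenize(query)))
    ranked = sorted(chunks, key=lambda c: cos(q, vec(Counter(tokenize(c)))),
                    reverse=True)
    return ranked[:top_k]
```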
## Customization

- Add or update documents in `data/knowledge_base/` and click Reindex in the UI.
- Modify requirement definitions in `src/mrm_report.py`.
- Adjust retrieval parameters (`top_k`) in the UI.
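Requirement definitions for Validated Output are essentially name/description pairs. A hedged sketch of what the definitions in `src/mrm_report.py` might look like (the names and wording here are illustrative, not the app's actual requirements):

```python
def build_requirements(max_words: int = 1200) -> list[dict]:
    """Illustrative requirement definitions for an MRM report.
    The app's real definitions live in src/mrm_report.py."""
    return [
        {"name": "structure",
         "description": ("Use the sections: Summary, Model Overview, Data, "
                         "Methodology, Limitations, References.")},
        {"name": "tone",
         "description": "Formal, neutral tone suitable for internal risk review."},
        {"name": "length",
         "description": f"Keep the report under {max_words} words."},
        {"name": "references",
         "description": "Cite only the supplied knowledge-base excerpts."},
    ]

# These would be passed to Maestro's Validated Output; see the AI21 docs for
# the exact SDK call (roughly: client.beta.maestro.runs.create_and_poll(
#     input=prompt, requirements=build_requirements(), budget="low")).
```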
## Notes

- This demo is for internal experimentation only. Do not include PII or confidential production data.
- Always review generated content for regulatory compliance and internal policies before use.
## Troubleshooting

- If you see authentication errors, ensure `AI21_API_KEY` is set and valid.
- If retrieval returns no results, confirm files exist under `data/knowledge_base`.
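Both failure modes above can be checked up front. A small helper, assumed here rather than part of the app, that reports the likely causes:

```python
import os
from pathlib import Path

def preflight(kb_dir: str = "data/knowledge_base") -> list[str]:
    """Return likely causes of auth errors or empty retrieval results."""
    problems = []
    if not os.environ.get("AI21_API_KEY"):
        problems.append("AI21_API_KEY is not set")
    kb = Path(kb_dir)
    if not kb.is_dir() or not any(kb.iterdir()):
        problems.append(f"no documents found under {kb_dir}")
    return problems
```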