A modular cognitive architecture for building self-aware AI agents. Chimera combines episodic memory, metacognitive self-reflection, tool use, and reinforcement learning into a unified framework — all running on free-tier APIs and local CPU models.
## Architecture

```
┌─────────────────────────────────────────────────────┐
│                   Project Chimera                   │
│                                                     │
│  ┌────────────┐  ┌────────────┐  ┌──────────┐       │
│  │ Prometheus │  │  Narcissus │  │   RLHF   │       │
│  │ Cognitive  │  │ Self-Model │  │  Oracle  │       │
│  │    Core    │  │  + Metacog │  │          │       │
│  └─────┬──────┘  └──────┬─────┘  └────┬─────┘       │
│        │                │             │             │
│  ┌─────▼────────────────▼─────────────▼──────┐      │
│  │                Agent Loop                 │      │
│  │  perceive → think → act → remember → ...  │      │
│  └─────┬───────────────────────────┬─────────┘      │
│        │                           │                │
│  ┌─────▼──────────┐         ┌──────▼───────────┐    │
│  │     Memory     │         │  Tool Registry  │     │
│  │  ┌──────────┐  │         │  ┌────────────┐  │    │
│  │  │ Working  │  │         │  │ Web Search │  │    │
│  │  │ (deque)  │  │         │  │ File I/O   │  │    │
│  │  ├──────────┤  │         │  │ Therapy*   │  │    │
│  │  │ Episodic │  │         │  │ Reflection │  │    │
│  │  │ (LanceDB)│  │         │  └────────────┘  │    │
│  │  └──────────┘  │         └──────────────────┘    │
│  └────────────────┘                                 │
└─────────────────────────────────────────────────────┘
```

\* therapy tools are used via the Knight Medicare integration

## Features

| Feature | How it works | Cost |
|---|---|---|
| Episodic Memory | LanceDB vector store + SentenceTransformers (all-MiniLM-L6-v2). Stores experiences, recalls by semantic similarity. | $0 (local disk) |
| Self-Awareness | Narcissus system: tracks cognitive states, detects biases, identifies stuck patterns via metacognitive observer. | $0 (in-memory) |
| Consciousness Simulation | Self-modeling engine builds a model of its own attention, confidence, and decision patterns over time. | $0 (in-memory) |
| RLHF | Reward model (distilbert) scores candidate responses. Oracle selects the best. Trains on preference data. | $0 (CPU training) |
| LLM Backend | Gemini 1.5 Flash via API (1,500 req/day free tier). Async + sync support. | $0 (free tier) |
| Tool Use | Extensible tool registry with JSON schemas. Built-in: web search, file system. Pluggable: therapy tools, custom tools. | $0 |
Total infrastructure cost: $0
## Project Structure

```
src/chimera/
├── cognitive_core/        # "Prometheus" — LLM abstraction layer
│   ├── interfaces.py      # CognitiveCore ABC
│   ├── prometheus_core.py # Gemini API implementation
│   ├── model.py           # Local model architecture (JAX/Flax, future)
│   └── data_loader.py     # Data preprocessing (future)
│
├── agent/                 # "Janus" — perceive→think→act loop
│   ├── agent.py           # Agent class (main orchestrator)
│   ├── memory.py          # VectorEpisodicMemory + WorkingMemory
│   └── tool_user.py       # Tool ABC, ToolRegistry, WebSearchTool, FileSystemTool
│
├── consciousness/         # "Narcissus" — self-modeling & metacognition
│   ├── narcissus_core.py  # SelfModelingEngine, MetacognitiveObserver, SelfSimulationFramework
│   ├── integration.py     # ConsciousnessIntegration (bridge to agent)
│   └── conscious_agent.py # ConsciousnessAwareAgent (Agent + Narcissus combined)
│
└── rlhf/                  # Reinforcement Learning from Human Feedback
    ├── reward_model.py    # RewardModel (distilbert fine-tuning via TRL)
    └── oracle.py          # RLHFOracle (scores + selects best response)
```

## Requirements

- Python 3.11+
- A Gemini API key (free: aistudio.google.com)
## Quick Start

```bash
# Clone
git clone https://github.com/LarytheLord/Project-Chimera.git
cd Project-Chimera/agi-project

# Install dependencies (pick one)
pip install -r requirements-submodule.txt  # lightweight, no RLHF
poetry install                             # full install with RLHF + JAX

# Set your API key
export CHIMERA_LLM_API_KEY="your_gemini_api_key"
```

### Run a basic agent

```python
from chimera.cognitive_core.prometheus_core import PrometheusCognitiveCore
from chimera.agent.agent import Agent
from chimera.agent.tool_user import ToolRegistry, WebSearchTool, FileSystemTool

# Initialize
core = PrometheusCognitiveCore()
tools = ToolRegistry()
tools.register_tool(WebSearchTool())
tools.register_tool(FileSystemTool())

agent = Agent(cognitive_core=core, tool_registry=tools, db_path="./chimera_db")

# Run — agent will perceive, think, act, and remember in a loop
agent.run_main_loop({"task": "Research the latest developments in cognitive architectures"})
```

### Run with self-awareness

```python
from chimera.consciousness.conscious_agent import ConsciousnessAwareAgent

agent = ConsciousnessAwareAgent(
    cognitive_core=core,
    tool_registry=tools,
    db_path="./chimera_db",
)

# Enable self-reflection
agent.enable_self_reflection()

# The agent now tracks its own cognitive states, detects biases,
# and uses metacognitive insights to improve decision-making
agent.run_main_loop({"task": "Solve a complex problem while monitoring your own reasoning"})

# Inspect the agent's self-model
print(agent.get_self_model())
```

### Run with RLHF

```python
from chimera.rlhf.oracle import RLHFOracle

# Train a reward model first (see scripts/train_reward_model.py)
oracle = RLHFOracle(model_path="./reward_model")

agent = ConsciousnessAwareAgent(
    cognitive_core=core,
    tool_registry=tools,
    db_path="./chimera_db",
    rlhf_oracle=oracle,
    num_candidates=3,  # generate 3 candidates, oracle picks the best
)
```

## Modules

### Prometheus — Cognitive Core

The LLM abstraction layer. Currently wraps Gemini 1.5 Flash via HTTP API. Implements the CognitiveCore ABC so you can swap in any LLM backend.
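Because every backend implements the `CognitiveCore` ABC, swapping providers is just subclassing. A minimal self-contained sketch of the pattern — the stand-in ABC and `EchoCore` below are illustrative, not Chimera's actual `interfaces.py`:

```python
from abc import ABC, abstractmethod


class CognitiveCore(ABC):
    """Stand-in for chimera.cognitive_core.interfaces.CognitiveCore (hypothetical shape)."""

    @abstractmethod
    def generate_response(self, payload: dict) -> str:
        """Map a perception payload to a model response."""


class EchoCore(CognitiveCore):
    """Toy backend: echoes the prompt. A real backend would call an LLM API here."""

    def generate_response(self, payload: dict) -> str:
        return f"echo: {payload['text_data']}"


core = EchoCore()
print(core.generate_response({"text_data": "What is consciousness?"}))
# prints "echo: What is consciousness?"
```

Any object with the same `generate_response` contract can then be passed wherever the agent expects a cognitive core.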
```python
from chimera.cognitive_core.prometheus_core import PrometheusCognitiveCore

core = PrometheusCognitiveCore()  # reads CHIMERA_LLM_API_KEY from env
response = core.generate_response({"text_data": "What is consciousness?"})
```

### Janus — Agent

Perceive → Think → Act loop with vector memory and tool use.
Memory:
- `WorkingMemory` — bounded deque (last 20 items), fast in-memory context
- `VectorEpisodicMemory` — LanceDB vector store with SentenceTransformer embeddings. Stores `Experience(observation, action, outcome)` tuples. Recalls by semantic similarity.
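The working-memory semantics can be sketched in a few lines: a `maxlen` deque silently evicts the oldest item once full. This is a behavioral sketch, not Chimera's actual `WorkingMemory` class:

```python
from collections import deque

# Bounded working memory: keeps only the most recent 20 items
working_memory = deque(maxlen=20)

for i in range(25):
    working_memory.append({"step": i})

# The first 5 items were silently evicted; only steps 5..24 remain
assert len(working_memory) == 20
assert working_memory[0] == {"step": 5}
assert working_memory[-1] == {"step": 24}
```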
```python
from chimera.agent.memory import VectorEpisodicMemory, WorkingMemory, Experience

memory = VectorEpisodicMemory(db_path="./my_db")
memory.remember(Experience(
    observation={"input": "user question"},
    action={"tool": "web_search", "query": "..."},
    outcome={"result": "..."},
))

# Semantic recall
relevant = memory.recall("similar question", top_k=5)
```

Tools:
- `Tool` ABC with `name`, `description`, `get_schema()`, `__call__()`
- `ToolRegistry` manages tools, generates JSON schemas for the LLM
- Built-in: `WebSearchTool` (DuckDuckGo + scraping), `FileSystemTool` (read + list)
```python
from chimera.agent.tool_user import Tool

class MyTool(Tool):
    @property
    def name(self):
        return "my_tool"

    @property
    def description(self):
        return "Does something useful"

    def get_schema(self):
        return {"type": "object", "properties": {"input": {"type": "string"}}}

    def __call__(self, input: str):
        return f"Result for {input}"
```

### Narcissus — Consciousness

Self-modeling, metacognition, and cognitive state tracking.
Components:
- `SelfModelingEngine` — tracks attention patterns, capability assessments, bias identification
- `MetacognitiveObserver` — analyzes thought processes, detects biases (e.g., confirmation bias from repeated decisions), suggests optimizations
- `SelfSimulationFramework` — simulates proposed cognitive changes before applying them
- `NarcissusConsciousnessCore` — orchestrates all three, records cognitive states
```python
from chimera.consciousness.narcissus_core import NarcissusConsciousnessCore, CognitiveState

narcissus = NarcissusConsciousnessCore(
    cognitive_core=core,
    memory_db_path="./narcissus_db",
)

# Record a cognitive state
state = narcissus.record_cognitive_state(
    thought_process="Analyzing user's emotional state",
    attention_weights={"emotion": 0.6, "context": 0.3, "history": 0.1},
    decision_path=["assess_mood", "select_intervention"],
    confidence=0.75,
    emotional_state={"empathy": 0.8, "concern": 0.6},
    memory_context=["previous sessions"],
    processing_load=0.5,
)

# Introspect
insights = narcissus.perform_introspective_analysis()
# → {self_model_snapshot, metacognitive_insights, suggested_improvements, self_awareness_metrics}
```

### RLHF

Train a reward model on human preferences, then use it to select better responses.
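The oracle's selection step is best-of-N sampling: score every candidate with the reward model and keep the argmax. A self-contained sketch with a stand-in scorer — the real `RLHFOracle` runs the fine-tuned distilbert reward model where `toy_reward` appears here:

```python
def toy_reward(prompt: str, response: str) -> float:
    """Stand-in reward: favors on-topic word overlap, with a small length bonus.
    The real oracle would score (prompt, response) with the distilbert reward model."""
    overlap = len(set(prompt.lower().split()) & set(response.lower().split()))
    return overlap + 0.01 * len(response)


def select_best(prompt: str, candidates: list[str]) -> str:
    # Best-of-N: the highest-scoring candidate wins
    return max(candidates, key=lambda c: toy_reward(prompt, c))


prompt = "How does episodic memory help an agent?"
candidates = [
    "It doesn't.",
    "Episodic memory lets the agent recall similar past situations.",
    "Memory is stored.",
]
best = select_best(prompt, candidates)
```

With `num_candidates=3` (as in the "Run with RLHF" example), the agent generates three responses and ships only the winner.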
```bash
# 1. Collect preferences
python scripts/collect_preferences.py

# 2. Train reward model
python scripts/train_reward_model.py

# 3. Use in agent (see "Run with RLHF" above)
```

## Knight Medicare Integration

Chimera powers the AI therapy backend for Knight Medicare, a mental healthcare platform.
```
Patient → Knight Medicare (Next.js) → POST /api/therapy
        → chimera-bridge (FastAPI, port 8100) → TherapyAgent.process_message()

1. PERCEIVE — WorkingMemory + VectorEpisodicMemory recall
2. PATTERNS — Narcissus MetacognitiveObserver
3. ASSESS   — Gemini classifies mood, selects tool
4. TOOL     — therapy tool (CBT, journaling, breathing, safety plan)
5. RESPOND  — Gemini generates therapeutic response
6. RECORD   — LanceDB store + Narcissus CognitiveState
```

The `chimera-bridge/` FastAPI service lives in the KM repo and wraps Chimera's modules for therapy-specific use. Chimera itself remains a general-purpose cognitive architecture.
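The six-stage message flow can be sketched as a plain pipeline. Every stage below is a toy stub standing in for the real Chimera call named in its comment; none of these names come from the actual `TherapyAgent`:

```python
def process_message(message: str, history: list[str]) -> dict:
    # 1. PERCEIVE — stand-in for WorkingMemory + VectorEpisodicMemory recall
    context = history[-20:]

    # 2. PATTERNS — stand-in for the Narcissus MetacognitiveObserver
    looping = len(history) != len(set(history))  # crude "stuck pattern" check

    # 3. ASSESS — stand-in for Gemini mood classification + tool selection
    mood = "distressed" if any(w in message.lower() for w in ("sad", "anxious")) else "neutral"
    tool = "breathing" if mood == "distressed" else "journaling"

    # 4. TOOL — stand-in for the selected therapy tool (CBT, journaling, ...)
    tool_output = f"[{tool} exercise suggested]"

    # 5. RESPOND — stand-in for Gemini's therapeutic response
    response = f"I hear you. {tool_output}"

    # 6. RECORD — stand-in for the LanceDB store + Narcissus CognitiveState
    record = {"message": message, "mood": mood, "tool": tool, "looping": looping}

    return {"response": response, "record": record, "context_size": len(context)}
```

The point of the sketch is the ordering: memory and pattern checks run before the model is asked anything, and every turn is recorded for later semantic recall.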
See KM Discussion #33 for integration details.
**Implemented:**

- Prometheus cognitive core (Gemini API)
- Agent perceive→think→act loop
- Episodic memory (LanceDB + SentenceTransformers)
- Working memory (bounded deque)
- Tool registry + web search + file system tools
- Narcissus consciousness system (self-modeling, metacognition, simulation)
- Consciousness-aware agent
- RLHF reward model + oracle
- Knight Medicare therapy integration
**Planned:**

- `chimera/__init__.py` fix for submodule compatibility (#9)
- Standalone CLI entry point (#15)
- Local emotion detection via HuggingFace (#16)
- Reflexion self-critique + Constitutional AI guardrails (#17)
- Local LLM fallback — SmolLM2 GGUF on CPU (#18)
- ACT-R memory decay + temporal validity (#20)
- Feed consciousness insights back into prompts (#21)
- Three-tier memory (semantic + episodic + procedural)
- DSPy prompt optimization
- Therapist RLHF feedback loop
See Discussion #19 for the full evolution roadmap.
## Development

```bash
# Run tests
cd agi-project
poetry run pytest tests/

# Verify core imports
python -c "
from chimera.agent.memory import VectorEpisodicMemory, WorkingMemory
from chimera.consciousness.narcissus_core import NarcissusConsciousnessCore
from chimera.cognitive_core.prometheus_core import PrometheusCognitiveCore
print('All imports OK')
"
```

To use Chimera as a submodule:

```bash
# In the KM repo:
git submodule update --init lib/chimera
cd lib/chimera && git checkout v0.2.0-km-ready
```

Contributing:

- Create a branch off `master`
- Make changes in `agi-project/src/chimera/`
- Run tests: `poetry run pytest`
- Tag a release: `git tag v0.x.x-km-ready`
- Push: `git push origin master --tags`
- Abid (LarytheLord) — Architecture, KM integration, project lead
- Prit (Prit-P2) — Chimera core modules, Python specialist
MIT