Entelgia is an experimental multi-agent AI architecture.
It studies how persistent identity, internal conflict dynamics, and behavioral regulation can emerge through long-term memory and structured dialogue.
Use Entelgia if you want agents that evolve an internal identity, not just follow prompts.
Unlike stateless chatbot systems, Entelgia maintains an evolving internal state, allowing identity, memory, and reflective behavior to develop over time.
Entelgia sits between agent engineering and cognitive architecture research, exploring how internal structure shapes agent behavior.
LLM + Persistent Memory + Psychological Drives + Observer Regulation --> Dialogue-governed agents

Most agent systems optimize outputs. Entelgia explores how internal structure regulates behavior over time.
Instead of external guardrails, agents develop regulation through:
- memory continuity
- internal conflict
- observer feedback loops
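As a rough illustration of the third mechanism, an observer feedback loop can be sketched as a monitor that inspects each reply against recent history and injects a corrective note when it detects degeneration (here, simple repetition). The function below is a hypothetical sketch, not Entelgia's actual API:

```python
# Hypothetical sketch of an observer feedback loop (not Entelgia's API):
# the observer compares each new reply against recent history and returns
# a corrective note when it detects simple repetition.
def observer_feedback(history, reply, max_repeats=2):
    """Return a correction note if `reply` repeats recent history."""
    repeats = sum(1 for past in history if past == reply)
    if repeats >= max_repeats:
        return "Repetition detected: introduce a new angle."
    return None  # no intervention needed

# A repeated reply triggers a corrective note; a fresh one does not.
print(observer_feedback(["What is memory?", "What is memory?"], "What is memory?"))
print(observer_feedback([], "What is virtue?"))
```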
Full Professional Demo: Entelgia Full Demo
Entelgia is a Research Hybrid, combining experimental AI research with a stable engineering foundation.
It explores new multi-agent and cognitive ideas while remaining usable and reliable in real projects.
Some components evolve rapidly, but changes are introduced carefully to preserve stability.
The project welcomes both researchers and developers building persistent, reflective AI agents.
- Python 3.10+
- LLM backend (choose one):
  - Ollama (local, free): requires ~8GB+ RAM and a 7B+ model download
  - Grok (xAI cloud): requires `GROK_API_KEY` and internet access
  - OpenAI (cloud): requires `OPENAI_API_KEY` and internet access
  - Anthropic (cloud): requires `ANTHROPIC_API_KEY` and internet access
- At least one supported model (see backend-specific sections below)
- 8GB+ RAM recommended for Ollama (16GB+ for larger models); not required for cloud backends
For the complete dependency list, see requirements.txt.
Before installing, it helps to understand the two ways Entelgia can talk to a language model: locally via Ollama or remotely via a cloud API.
Ollama is a free, open-source tool that lets you download and run large language models (LLMs) entirely on your own machine: no internet connection is required after the initial model download, and no API key is needed.
When you run `ollama serve`, it starts a small local server (by default on `http://localhost:11434`) that Entelgia connects to exactly as it would a remote API, except everything stays on your hardware.
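To confirm the local server is up before starting Entelgia, you can query Ollama's model-listing endpoint (`GET /api/tags` on the default port). The helper below is an illustrative sketch, not part of Entelgia itself:

```python
# Illustrative health check (not part of Entelgia): query the local Ollama
# server's model-listing endpoint, GET /api/tags, on its default port.
import json
import urllib.request

def ollama_tags_url(host="localhost", port=11434):
    return f"http://{host}:{port}/api/tags"

def list_local_models(url=None):
    """Return names of locally pulled models, or None if the server is unreachable."""
    url = url or ollama_tags_url()
    try:
        with urllib.request.urlopen(url, timeout=3) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except OSError:
        return None

if __name__ == "__main__":
    models = list_local_models()
    print("Ollama reachable" if models is not None else "Ollama not reachable")
```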
Why Ollama exists: cloud-hosted LLMs charge per token and send your prompts to third-party servers. Ollama gives you a private, cost-free alternative.
| | Local (Ollama) | Cloud API (Grok / OpenAI / Anthropic) |
|---|---|---|
| Cost | Free (electricity only) | Pay-per-token (or subscription) |
| Privacy | Fully local; prompts never leave your machine | Prompts sent to the provider's servers |
| Internet | Not required after model download | Always required |
| Setup | Install Ollama + download a model (~4–15 GB) | Get an API key, set an env variable |
| Speed | Depends on your hardware (CPU/GPU) | Fast; runs on provider infrastructure |
| Model quality | Good; 7B–34B models rival smaller cloud models | Typically state-of-the-art |
| RAM needed | 8 GB+ (16 GB+ recommended for best results) | None on your machine |
- Choose Ollama if you want privacy, zero ongoing cost, or offline use. A machine with 16 GB RAM and a GPU will give the best experience.
- Choose a cloud API (Grok, OpenAI, Anthropic) if you want the highest model quality with minimal local setup, and you are comfortable sharing prompts with a third-party provider.
Both backends are fully supported and can be switched at each startup.
Get started fast with the automated installer!
```bash
# Clone the repository
git clone https://github.com/sivanhavkin/Entelgia.git
cd Entelgia

# Run the automated installer
python scripts/install.py
```

View installer source: scripts/install.py
The installer:

- Asks you to choose your backend (Ollama local, or Grok, OpenAI, or Anthropic cloud) before doing anything else
- Ollama path only: detects or installs Ollama (macOS via Homebrew; provides instructions for Linux/Windows) and pulls the `qwen2.5:7b` model, or lets you skip. Note: automatic installation may not work on all platforms; if needed, install Ollama manually as described in the Manual Installation section below.
- Creates the `.env` configuration from the template
- Configures API keys in one step: generates a secure `MEMORY_SECRET_KEY` and, if a cloud backend was chosen, prompts for the corresponding API key (`GROK_API_KEY`, `OPENAI_API_KEY`, or `ANTHROPIC_API_KEY`)
- Installs Python dependencies from `requirements.txt`
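Once the installer has written `.env`, you can sanity-check that `MEMORY_SECRET_KEY` was set. The minimal parser below assumes a flat `KEY=VALUE` format with `#` comments and is purely illustrative:

```python
# Illustrative sanity check (assumes a flat KEY=VALUE .env with '#' comments):
# confirm MEMORY_SECRET_KEY is present before launching Entelgia.
def parse_env(text):
    """Parse KEY=VALUE lines, skipping blanks and comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    return env

sample = "# generated by the installer\nMEMORY_SECRET_KEY=abc123\n"
assert "MEMORY_SECRET_KEY" in parse_env(sample)
```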
```bash
# Run the full system (30 minutes; stops when the time limit is reached)
python Entelgia_production_meta.py

# Or run 200 turns with no time-based stopping (guaranteed to complete all turns)
python Entelgia_production_meta_200t.py
```

Having issues? Check the Troubleshooting Guide for common problems and solutions.
If automatic installation isn't possible, follow these steps:
Entelgia requires Ollama for local LLM execution.
macOS:

```bash
brew install ollama
```

Linux:

```bash
curl -fsSL https://ollama.com/install.sh | sh
```

Windows:

- Download the installer from ollama.com/download/windows
- Or use WSL2 with the Linux installation method

More info: ollama.com
```bash
ollama pull qwen2.5:7b
```

Recommended models (8 GB+ RAM recommended):

Note: Entelgia requires a 7B-parameter or larger model (e.g., `qwen2.5:7b`, `llama3.1:8b`, or `mistral:latest`). Smaller models may run but do not reliably handle the architecture's reflective, memory-heavy, multi-layer reasoning demands.

- `qwen2.5:7b`: recommended default; strong reasoning and instruction following
- `llama3.1:8b`: excellent general-purpose performance
- `mistral:latest`: balanced reasoning and conversational coherence
- `llama3.1:70b` or larger: best results for deep philosophical dialogue
Install the Python dependencies:

```bash
pip install -r requirements.txt
```

Configure the environment:

```bash
# Copy the environment template
cp .env.example .env

# Generate a secure key (or add your own)
python -c "import secrets; print(secrets.token_hex(32))"

# Add the key to the .env file:
# MEMORY_SECRET_KEY=<generated-key>
```

Run the system:

```bash
# Start Ollama (if not already running)
ollama serve

# Run the full system (30 minutes; stops when the time limit is reached)
python Entelgia_production_meta.py

# Or run 200 turns with no time-based stopping (guaranteed to complete all turns)
python Entelgia_production_meta_200t.py
```

Entelgia supports Grok (by xAI) as an alternative cloud-based LLM backend alongside the default Ollama local backend.
- Go to https://console.x.ai and sign in with your X (Twitter) account.
- In the left sidebar click "API Keys".
- Click "Create API Key", give it a name, and copy the generated key.
During installation (`python scripts/install.py`) you will be prompted to enter your Grok API key; it is saved automatically.
To add it manually, open your `.env` file and set:

```
GROK_API_KEY=your_key_here
```

When you run Entelgia, it will interactively ask you to choose a backend:

```
Select backend: [1] grok [2] ollama [3] openai [4] anthropic [0] defaults (keep config as-is)
```

Choose [1] grok and then select a model for each agent from the available Grok models:
| Model | Description |
|---|---|
| `grok-4.20-multi-agent` | Multi-agent capable, latest |
| `grok-4-1-fast-reasoning` | Fast reasoning, high performance |
Note: the Grok backend requires an active internet connection and a valid `GROK_API_KEY` in `.env`. No local Ollama instance is needed when using Grok.
Entelgia supports OpenAI as a cloud-based LLM backend. No local Ollama instance is needed when using OpenAI.
- Go to https://platform.openai.com and sign in.
- In the left sidebar click "API keys".
- Click "Create new secret key", give it a name, and copy the generated key.
During installation (`python scripts/install.py`) you will be prompted to enter your OpenAI API key; it is saved automatically.
To add it manually, open your `.env` file and set:

```
OPENAI_API_KEY=your_key_here
```

When you run Entelgia, choose [3] openai from the backend menu. You will then be prompted to select an OpenAI model:
| Model | Description |
|---|---|
| `gpt-4.1` | Latest GPT-4.1 model |
| `gpt-4o` | GPT-4o multimodal model |
| `gpt-4o-mini` | Fast and affordable GPT-4o variant |
| `gpt-4.1-mini` | Compact GPT-4.1 model |
Note: the OpenAI backend requires an active internet connection and a valid `OPENAI_API_KEY` in `.env`. No local Ollama instance is needed when using OpenAI.
Entelgia supports Anthropic (Claude) as a cloud-based LLM backend. No local Ollama instance is needed when using Anthropic.
- Go to https://console.anthropic.com and sign in.
- In the left sidebar click "API Keys".
- Click "Create Key", give it a name, and copy the generated key.
During installation (`python scripts/install.py`) you will be prompted to enter your Anthropic API key; it is saved automatically.
To add it manually, open your `.env` file and set:

```
ANTHROPIC_API_KEY=your_key_here
```

When you run Entelgia, choose [4] anthropic from the backend menu. You will then be prompted to select a Claude model:
| Model | Description |
|---|---|
| `claude-opus-4-6` | Most capable Claude model |
| `claude-sonnet-4-6` | Balanced performance and speed |
| `claude-haiku-4-5` | Fast and lightweight |
Note: the Anthropic backend requires an active internet connection and a valid `ANTHROPIC_API_KEY` in `.env`. No local Ollama instance is needed when using Anthropic.
For development or integration purposes:
```bash
# Install from GitHub (recommended)
pip install git+https://github.com/sivanhavkin/Entelgia.git

# Or clone and install in editable mode
git clone https://github.com/sivanhavkin/Entelgia.git
cd Entelgia
pip install -e .
```

To upgrade:

```bash
pip install --upgrade git+https://github.com/sivanhavkin/Entelgia.git@main
```

Entelgia provides a utility to clear stored memories when needed. The `clear_memory.py` script allows you to delete:
- Short-term memory (JSON files in `entelgia_data/stm_*.json`)
- Long-term memory (SQLite database in `entelgia_data/entelgia_memory.sqlite`)
- All memories (both short-term and long-term)
```bash
python scripts/clear_memory.py
```

The script will prompt you with an interactive menu:

```
============================================================
Entelgia Memory Deletion Utility
============================================================
What would you like to delete?
1. Short-term memory (JSON files)
2. Long-term memory (SQLite database)
3. All memories (both short-term and long-term)
4. Exit
```

Safety features:
- Confirmation required before deletion
- Shows count of files/entries before deletion
- Cannot be undone; use with caution
Typical use cases:

- Reset experiments - Start fresh with new dialogue sessions
- Privacy concerns - Remove stored conversation data
- Testing - Clear state between test runs
- Storage management - Free up disk space
Note: Deleting memories will remove all dialogue history and context. The system will start fresh on the next run.
- Full Whitepaper - Complete architectural and theoretical foundation
- System Specification (SPEC.md) - Detailed architecture specification
- Architecture Overview (ARCHITECTURE.md) - High-level and component design
- Roadmap (ROADMAP.md) - Project development roadmap and future plans
- Entelgia Demo (entelgia_demo.py) - See the system in action
- Web Research Demo (entelgia_research_demo.py) - Fixy-triggered web search demo
- FAQ - Frequently asked questions and answers
- Troubleshooting Guide - Common issues and solutions
- Test Suite (tests/README.md) - Full test documentation and CI/CD details
- Configuration (docs/CONFIGURATION.md) - All configuration options
- Multi-agent dialogue (Socrates · Athena · Fixy)
- Persistent memory - short-term (JSON) + long-term (SQLite) with HMAC-SHA256 integrity
- Enhanced Dialogue Engine - dynamic speaker selection, seed strategies, `AgentMode` constants
- Topic-Aware Style Selection
- Two-Layer Tone Enforcement
- Dialogue Loop Guard
- Semantic Repetition Detection
- Observer Toggle
- Energy-Based Regulation - dream cycle consolidation, hallucination-risk detection
- Personal Long-Term Memory - `DefenseMechanism`, `FreudianSlip`, `SelfReplication`
- Drive-Aware Cognition - dynamic LLM temperature, superego critique, ego-driven memory depth
- Limbic Hijack
- Drive Pressure
- Dialogue Quality Metrics
- Ablation Study
- Safety & Quality - PII redaction, output artifact cleanup, memory poisoning protection
- Web Research Module
- Forgetting Policy
- Affective Routing
- Confidence Metadata
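The HMAC-SHA256 memory integrity listed above can be illustrated in a few lines: each stored entry carries a tag derived from the secret key, so tampered entries fail verification on load. The function names here are assumptions for illustration, not Entelgia's actual API:

```python
# Illustrative HMAC-SHA256 integrity check (function names are assumptions,
# not Entelgia's actual API): each memory entry is stored with a tag derived
# from the secret key, so tampering is detected on load.
import hashlib
import hmac

def sign_entry(secret: bytes, payload: bytes) -> str:
    """Tag a serialized memory entry with HMAC-SHA256."""
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify_entry(secret: bytes, payload: bytes, tag: str) -> bool:
    """Constant-time check that the payload matches its stored tag."""
    return hmac.compare_digest(sign_entry(secret, payload), tag)

secret = b"example-memory-secret-key"  # stands in for MEMORY_SECRET_KEY
tag = sign_entry(secret, b'{"turn": 1}')
assert verify_entry(secret, b'{"turn": 1}', tag)      # intact entry passes
assert not verify_entry(secret, b'{"turn": 2}', tag)  # tampered entry fails
```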
For all configuration options, see docs/CONFIGURATION.md.
Entelgia is built around a modular CoreMind system β a layered stack of cognitive modules that work together to enable persistent, reflective, and psychologically grounded multi-agent dialogue.
| Module | Role |
|---|---|
| `Conscious` | Reflective narrative construction |
| `Memory` | Persistent identity continuity across sessions |
| `Emotion` | Affective weighting & regulation |
| `Language` | Dialogue-driven cognition |
| `Behavior` | Goal-oriented response shaping |
| `Observer` | Meta-level monitoring & correction |
| `EnergyRegulator` | Cognitive energy supervision & dream cycles |
| `WebResearch` | External knowledge retrieval & credibility evaluation |
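Conceptually, the stack behaves like a pipeline in which each module transforms a shared state. The sketch below borrows module names from the table, but the wiring and state fields are illustrative assumptions, not the real implementation:

```python
# Illustrative CoreMind-style pipeline (module names from the table above;
# the wiring and state fields are assumptions): each layer transforms a
# shared dialogue state in turn.
def memory(state):
    state["context"] = state.get("context", []) + [state["input"]]
    return state

def emotion(state):
    state["weight"] = 1.0  # placeholder affective weight
    return state

def observer(state):
    state["flagged"] = len(state["context"]) > 3  # meta-level check
    return state

def coremind(state, layers=(memory, emotion, observer)):
    for layer in layers:
        state = layer(state)
    return state

out = coremind({"input": "hello"})
# `out` now carries the context, an affective weight, and the observer's verdict
```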
Each dialogue is driven by three agents with distinct psychological profiles:
| Agent | Personality | Role |
|---|---|---|
| Socrates | Investigative questioner | Domain-aware inquiry; probes assumptions and causal chains |
| Athena | Strategic synthesizer | Builds frameworks and structured explanations in domain vocabulary |
| Fixy | Meta-cognitive supervisor | Diagnostic observer; identifies contradictions and reasoning gaps |
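As a toy illustration of how these roles could drive turn-taking (the real engine's dynamic speaker selection is more sophisticated), Fixy might interject only when a contradiction has been flagged:

```python
# Toy turn-taking sketch (an assumption for illustration; the real engine
# uses dynamic speaker selection): Socrates and Athena alternate, and Fixy,
# the meta-cognitive supervisor, interjects when a contradiction is flagged.
AGENTS = ["Socrates", "Athena", "Fixy"]

def next_speaker(turn, contradiction_detected=False):
    if contradiction_detected:
        return "Fixy"
    return AGENTS[turn % 2]

assert next_speaker(0) == "Socrates"
assert next_speaker(1) == "Athena"
assert next_speaker(7, contradiction_detected=True) == "Fixy"
```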
For the complete file and module layout β including all entelgia/ modules, entry points, scripts, tests, and docs β see PACKAGE_STRUCTURE.md.
Entelgia ships with 1274 tests across 33 suites.
For full test documentation, per-suite details, CI/CD pipeline information, and sample output, see the Test Suite README (tests/README.md).
To run all tests:
```bash
pytest tests/ -v
```

| Version | Status | Notes |
|---|---|---|
| v4.1.0 | Latest | Current release |
| v4.0.0 | Stable | Previous stable release |
| v3.0.0 | Deprecated | Use v4.1.0 instead |
| v2.8.1 | Deprecated | Use v4.1.0 instead |
| v2.8.0 | Deprecated | Use v4.1.0 instead |
Note: starting from v2.1.1, we follow a controlled release schedule. Not every commit results in a new version. For the full version history, see Changelog.md.
- Minor releases: every week (feature batches)
- Patch releases: as needed for critical bugs
- Hotfixes: within 24h for security issues
1. Internal Structural Mechanisms and Dialogue Stability in Multi-Agent Language Systems: An Ablation Study
Sivan Havkin (2026)
Independent Researcher, Entelgia Labs
DOI: https://doi.org/10.5281/zenodo.18754895
This paper presents an ablation study examining how internal structural mechanisms influence dialogue stability and progression in multi-agent language systems.
2. The Entelgia architecture is documented in the following research preprint:
DOI: https://doi.org/10.5281/zenodo.18774720
This publication describes the cognitive-agent framework, META behavioral metrics, and the reproducibility methodology behind the project.
If you use or reference this work, please cite:
```bibtex
@misc{havkin2026entelgia,
  author    = {Havkin, Sivan},
  title     = {Internal Structural Mechanisms and Dialogue Stability in Multi-Agent Language Systems: An Ablation Study},
  year      = {2026},
  publisher = {Zenodo},
  doi       = {10.5281/zenodo.18752028},
  url       = {https://doi.org/10.5281/zenodo.18752028}
}

@misc{havkin2026attractors,
  author    = {Havkin, Sivan},
  title     = {Personality Attractors and Dominance Lock in Dialogue-Based Cognitive Agents: An Exploratory Study within the Entelgia Architecture},
  year      = {2026},
  publisher = {Zenodo},
  doi       = {10.5281/zenodo.18774720},
  url       = {https://doi.org/10.5281/zenodo.18774720}
}
```

Entelgia is released under the MIT License.
This ensures the project remains open, permissive, and compatible with the broader open-source ecosystem, encouraging research, experimentation, and collaboration.
For the complete legal terms, see the LICENSE file included in this repository.
Conceived and developed by Sivan Havkin.
- Status: Research / Production Hybrid
- Version: 4.1.0
- Last Updated: 26 March 2026



