MemU is an agentic memory framework for LLM and AI agent backends. It receives multi-modal inputs, extracts them into memory items, and then organizes and summarizes these items into structured memory files.
Unlike traditional RAG systems that rely solely on embedding-based search, MemU supports non-embedding retrieval through direct file reading. The LLM comprehends natural language memory files directly, enabling deep search by progressively tracking from categories → items → original resources.
MemU offers several convenient ways to get started right away:
- One call = response + memory – memU Response API: https://memu.pro/docs#responseapi
- Try it instantly: https://app.memu.so/quick-start
Star MemU to get notified about new releases and join our growing community of AI developers building intelligent agents with persistent memory capabilities. 
Join our Discord community: https://discord.gg/memu
MemU v0.3.0 has been released! This version initializes the memorize and retrieve workflows with the new 3-layer architecture.
Starting from this release, MemU will roll out multiple features in the short- to mid-term:
- Multi-modal enhancements – Support for images, audio, and video
- Intention – Higher-level decision-making and goal management
- Multi-client support – Switch between OpenAI, DeepSeek, Gemini, etc.
- Data persistence expansion – Support for Postgres, S3, DynamoDB
- Benchmark tools – Test agent performance and memory efficiency
- ……
- memU-ui – The web frontend for MemU, providing developers with an intuitive and visual interface
- memU-server – Powers memU-ui with reliable data support, ensuring efficient reading, writing, and maintenance of agent memories
Most memory systems in current LLM pipelines rely heavily on explicit modeling, requiring manual definition and annotation of memory categories. This limits AI's ability to truly understand memory and makes it difficult to support diverse usage scenarios.
MemU offers a flexible and robust alternative, inspired by hierarchical storage architecture in computer systems. It progressively transforms heterogeneous input data into queryable and interpretable textual memory.
Its core architecture consists of three layers: Resource Layer → Memory Item Layer → MemoryCategory Layer.
- Resource Layer: A multimodal raw data warehouse, also serving as the ground truth layer, providing a semantic foundation for the memory system.
- Memory Item Layer: A unified semantic abstraction layer, functioning as the system's semantic cache, supplying high-density semantic vectors for downstream retrieval and reasoning.
- MemoryCategory Layer: A thematic document layer, mimicking human working memory mechanisms, balancing short-term response efficiency and long-term information completeness.
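To make the layering concrete, here is a minimal, hypothetical sketch of the three layers as Python dataclasses. The field names (`resource_id`, `item_id`, and so on) are illustrative assumptions, not MemU's actual schema:

```python
from dataclasses import dataclass, field


@dataclass
class Resource:
    """Resource Layer: raw multimodal input, the ground truth."""
    resource_id: str
    modality: str          # e.g. "conversation", "document", "video"
    url: str               # where the raw data lives


@dataclass
class MemoryItem:
    """Memory Item Layer: one extracted fact, traceable to its source."""
    item_id: str
    text: str              # natural-language statement of the fact
    resource_id: str       # back-pointer to the originating resource
    embedding: list[float] = field(default_factory=list)


@dataclass
class MemoryCategory:
    """MemoryCategory Layer: a thematic document summarizing related items."""
    category_id: str
    name: str              # e.g. "user preferences"
    summary: str           # readable digest, like human working memory
    item_ids: list[str] = field(default_factory=list)
```

The back-pointers (`resource_id` on items, `item_ids` on categories) are what would make the bidirectional traceability described below possible.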
Through this three-layer design, MemU brings genuine memory into the agent layer, achieving:
- Full Traceability: Complete traceability across the three layers, from raw data → memory items → aggregated documents. Enables bidirectional tracking of each knowledge piece's source and evolution, ensuring transparency and interpretability.
- End-to-End Memory Lifecycle Management: The three core processes correspond to the memory lifecycle: Memorization → Retrieval → Self-evolution.
- Coherent and Scalable Memorization: During memorization, the system maintains memory coherence while automatically managing resources to support sustainable expansion.
- Efficient and Interpretable Retrieval: Retrieves information efficiently while preserving interpretability, supporting cross-theme and cross-modal semantic queries and reasoning. The system offers two retrieval methods:
  - RAG-based Retrieval: Fast embedding-based vector search for efficient large-scale retrieval
  - LLM-based Retrieval: Direct file reading through natural language understanding, allowing deep search by tracking step-by-step from categories → items → original resources without relying on embedding search
- Self-Evolving Memory: A feedback-driven mechanism continuously adapts the memory structure according to real usage patterns.
Install MemU from PyPI:

```bash
pip install memu-py
```

```python
from memu.app import MemoryService
import logging


async def test_memory_service():
    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s [%(levelname)s] %(name)s: %(message)s",
    )
    logger = logging.getLogger("memu")
    logger.setLevel(logging.DEBUG)

    # Initialize MemoryService with your OpenAI API key
    service = MemoryService(llm_config={"api_key": "your-openai-api-key"})

    # Memorize a conversation
    memory = await service.memorize(
        resource_url="tests/example/example_conversation.json",
        modality="conversation",
    )

    # Test 1: RAG-based Retrieval with query context
    # Multiple queries enable automatic query rewriting with context
    print("\n[Test 1] RAG-based Retrieval with query context")
    queries_with_context = [
        {"role": "user", "content": {"text": "Tell me about the user's preferences"}},
        {"role": "assistant", "content": {"text": "I can help you with that. Let me search the memory."}},
        {"role": "user", "content": {"text": "What are their habits?"}},
    ]
    retrieved_rag = await service.retrieve(queries=queries_with_context)
    print(f"Needs retrieval: {retrieved_rag.get('needs_retrieval')}")
    print(f"Original query: {retrieved_rag.get('original_query')}")
    print(f"Rewritten query: {retrieved_rag.get('rewritten_query')}")
    print(f"Next step query: {retrieved_rag.get('next_step_query')}")
    print(
        f"Results: {len(retrieved_rag.get('categories', []))} categories, "
        f"{len(retrieved_rag.get('items', []))} items"
    )

    # Test 2: Single query without context (no rewriting)
    print("\n[Test 2] Single query without context")
    queries_no_context = [
        {"role": "user", "content": {"text": "What are their habits?"}}
    ]
    retrieved_single = await service.retrieve(queries=queries_no_context)
    print(f"Needs retrieval: {retrieved_single.get('needs_retrieval')}")
    print(f"Original query: {retrieved_single.get('original_query')}")
    print(f"Rewritten query: {retrieved_single.get('rewritten_query')}")
    print(f"Next step query: {retrieved_single.get('next_step_query')}")
    print(
        f"Results: {len(retrieved_single.get('categories', []))} categories, "
        f"{len(retrieved_single.get('items', []))} items"
    )


if __name__ == "__main__":
    import asyncio

    asyncio.run(test_memory_service())
```

MemU provides two distinct retrieval approaches, each optimized for different scenarios:
Queries are passed as a list of message objects in the format:
[ {"role": "user", "content": {"text": "Tell me about the user's preferences"}}, {"role": "assistant", "content": {"text": "I can help you with that."}}, {"role": "user", "content": {"text": "What are their habits?"}} ]- Roles can be
user,assistant, or other custom roles - The last query in the list is the current query
- Previous queries (with their roles) provide context for automatic query rewriting
- If only one query is provided, no rewriting occurs
- The system returns a
next_step_queryto suggest the next retrieval step
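For example, an agent can convert its recent chat history into this format before calling `retrieve`. This is a minimal sketch; the `history` structure and the three-turn window are assumptions, not part of MemU's API:

```python
def to_memu_queries(history: list[dict], max_turns: int = 3) -> list[dict]:
    """Convert (role, text) chat turns into MemU's query format.

    The last entry becomes the current query; earlier entries
    provide context for automatic query rewriting.
    """
    recent = history[-max_turns:]
    return [
        {"role": turn["role"], "content": {"text": turn["text"]}}
        for turn in recent
    ]


queries = to_memu_queries([
    {"role": "user", "text": "Tell me about the user's preferences"},
    {"role": "assistant", "text": "I can help you with that."},
    {"role": "user", "text": "What are their habits?"},
])
# queries can now be passed to service.retrieve(queries=queries)
```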
Fast embedding-based vector search using cosine similarity. Ideal for:
- Large-scale datasets
- Real-time performance requirements
- Cost-effective retrieval at scale
The system progressively searches through three layers:
- Category Layer: Searches category summaries
- Item Layer: Searches memory items within relevant categories
- Resource Layer: Tracks back to original multimodal resources (conversations, documents, videos, etc.)
At each layer, the system judges whether sufficient information has been found and dynamically rewrites the query with context for deeper search.
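The pattern is a layered vector search with an early exit. The sketch below is illustrative only: `is_sufficient` stands in for MemU's per-layer sufficiency judgment, the dict-based entry shapes are assumptions, and the query-rewriting step between layers is omitted:

```python
import numpy as np


def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


def search_layer(query_vec, entries, top_k=5):
    """Rank a layer's entries (each with a precomputed embedding) by similarity."""
    scored = [(cosine_sim(query_vec, e["embedding"]), e) for e in entries]
    return [e for _, e in sorted(scored, key=lambda s: s[0], reverse=True)[:top_k]]


def progressive_retrieve(query_vec, categories, items, resources, is_sufficient):
    """Categories -> items -> resources, stopping early when enough is found."""
    hits = {"categories": search_layer(query_vec, categories)}
    if is_sufficient(hits):                      # early exit at the category layer
        return hits
    relevant_ids = {c["category_id"] for c in hits["categories"]}
    hits["items"] = search_layer(
        query_vec, [i for i in items if i["category_id"] in relevant_ids]
    )
    if is_sufficient(hits):                      # early exit at the item layer
        return hits
    # trace back to the original multimodal resources via resource_id
    item_resource_ids = {i["resource_id"] for i in hits["items"]}
    hits["resources"] = [r for r in resources if r["resource_id"] in item_resource_ids]
    return hits
```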
Direct file reading through natural language understanding. Ideal for:
- Complex semantic queries requiring nuanced understanding
- Deep contextual reasoning
- Scenarios where interpretability is critical
This method uses the LLM to:
- Read and comprehend natural language memory files directly
- Rank results based on semantic relevance
- Provide reasoning for each ranked result
- Track step-by-step from categories → items → original resources without relying on embeddings
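A minimal sketch of this pattern, assuming an OpenAI-style chat client and markdown memory files in a local directory; the prompt and file layout are illustrative guesses, not MemU's internals:

```python
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def llm_rank_memories(query: str, memory_dir: str, model: str = "gpt-4o-mini") -> str:
    """Have the LLM read category files directly and rank what is relevant."""
    files = sorted(Path(memory_dir).glob("*.md"))
    corpus = "\n\n".join(f"## {f.name}\n{f.read_text()}" for f in files)
    prompt = (
        "You are a memory retriever. Read the memory files below, then:\n"
        "1. Rank the passages most relevant to the query.\n"
        "2. Give a one-line reason for each ranked passage.\n"
        "3. Name which file (category) and item each passage came from,\n"
        "   so the answer can be traced back to its original resource.\n\n"
        f"Query: {query}\n\nMemory files:\n{corpus}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```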
Both methods support:
- Full traceability: Each retrieved item includes its `resource_id`, allowing you to trace back to the original source
- Context-aware rewriting: Automatically resolves pronouns and references using previous queries as context
- Pre-retrieval decision: Intelligently determines if memory retrieval is needed for the query
- Progressive search: Stops early if sufficient information is found at higher layers
- Next step suggestion: Returns `next_step_query` for iterative multi-turn retrieval
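Together, these features support an iterative loop: keep retrieving while the system suggests a deeper follow-up. Below is a minimal sketch built on the `MemoryService.retrieve` call from the quick start; the three-round cap and the `deep_retrieve` helper are arbitrary choices for illustration:

```python
async def deep_retrieve(service, first_query: str, max_rounds: int = 3):
    """Follow next_step_query suggestions for iterative multi-turn retrieval."""
    queries = [{"role": "user", "content": {"text": first_query}}]
    collected = []
    for _ in range(max_rounds):
        result = await service.retrieve(queries=queries)
        if not result.get("needs_retrieval"):   # pre-retrieval decision says stop
            break
        collected.append(result)
        next_query = result.get("next_step_query")
        if not next_query:                      # no deeper step suggested
            break
        # feed the suggestion back as the next "current" query, keeping context
        queries.append({"role": "user", "content": {"text": next_query}})
    return collected
```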
By contributing to MemU, you agree that your contributions will be licensed under the Apache License 2.0.
For more information, please contact info@nevamind.ai.
- GitHub Issues: Report bugs, request features, and track development. Submit an issue
- Discord: Get real-time support, chat with the community, and stay updated. Join us
- X (Twitter): Follow for updates, AI insights, and key announcements. Follow us
We're proud to work with amazing organizations:
Interested in partnering with MemU? Contact us at contact@nevamind.ai
