
MemU: A Future-Oriented Agentic Memory System


MemU is an agentic memory framework for LLM and AI agent backends. It receives multi-modal inputs, extracts memory items from them, and then organizes and summarizes these items into structured memory files.

Unlike traditional RAG systems that rely solely on embedding-based search, MemU supports non-embedding retrieval through direct file reading. The LLM comprehends natural language memory files directly, enabling deep search by progressively tracking from categories → items → original resources.

MemU offers several convenient ways to get started right away; see the Get Started section below for installation and basic usage.


⭐ Star Us on GitHub

Star MemU to get notified about new releases and join our growing community of AI developers building intelligent agents with persistent memory capabilities.

💬 Join our Discord community: https://discord.gg/memu


Roadmap

MemU v0.3.0 has been released! This version initializes the memorize and retrieve workflows with the new 3-layer architecture.

Starting from this release, MemU will roll out multiple features in the short- to mid-term:

Core capabilities iteration

  • Multi-modal enhancements – Support for images, audio, and video
  • Intention – Higher-level decision-making and goal management
  • Multi-client support – Switch between OpenAI, DeepSeek, Gemini, etc.
  • Data persistence expansion – Support for Postgres, S3, DynamoDB
  • Benchmark tools – Test agent performance and memory efficiency
  • …and more

Upcoming open-source repositories

  • memU-ui – The web frontend for MemU, providing developers with an intuitive and visual interface
  • memU-server – Powers memU-ui with reliable data support, ensuring efficient reading, writing, and maintenance of agent memories

🧩 Why MemU?

Most memory systems in current LLM pipelines rely heavily on explicit modeling, requiring manual definition and annotation of memory categories. This limits AI’s ability to truly understand memory and makes it difficult to support diverse usage scenarios.

MemU offers a flexible and robust alternative, inspired by hierarchical storage architecture in computer systems. It progressively transforms heterogeneous input data into queryable and interpretable textual memory.

Its core architecture consists of three layers: Resource Layer → Memory Item Layer → MemoryCategory Layer, sketched in code after the list below.

Three-Layer Architecture Diagram
  • Resource Layer: A multimodal raw data warehouse, also serving as the ground truth layer, providing a semantic foundation for the memory system.

  • Memory Item Layer: A unified semantic abstraction layer, functioning as the system’s semantic cache, supplying high-density semantic vectors for downstream retrieval and reasoning.

  • MemoryCategory Layer: A thematic document layer, mimicking human working memory mechanisms, balancing short-term response efficiency and long-term information completeness.
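To make the layering concrete, here is a minimal sketch of the three layers as plain data structures. These dataclasses and field names are illustrative assumptions, not MemU's actual classes; the point is the back-pointers (resource_id, item_ids) that tie each layer to the one below it.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Resource:
    """Resource Layer: raw multimodal data, the system's ground truth."""
    resource_id: str
    modality: str              # e.g. "conversation", "document", "video"
    url: str                   # where the raw data lives


@dataclass
class MemoryItem:
    """Memory Item Layer: one extracted semantic unit."""
    item_id: str
    text: str
    resource_id: str           # back-pointer into the Resource Layer
    embedding: List[float] = field(default_factory=list)


@dataclass
class MemoryCategory:
    """MemoryCategory Layer: a thematic document over related items."""
    category_id: str
    name: str
    summary: str
    item_ids: List[str] = field(default_factory=list)  # back-pointers to items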

Through this three-layer design, MemU brings genuine memory into the agent layer, achieving:

  • Full Traceability: Complete traceability across the three layers, from raw data → memory items → aggregated documents. Enables bidirectional tracking of each knowledge piece’s source and evolution, ensuring transparency and interpretability.

  • End-to-End Memory Lifecycle Management: The three core processes correspond to the memory lifecycle: Memorization → Retrieval → Self-evolution.

  • Coherent and Scalable Memorization: During memorization, the system maintains memory coherence while automatically managing resources to support sustainable expansion.

  • Efficient and Interpretable Retrieval: Retrieves information efficiently while preserving interpretability, supporting cross-theme and cross-modal semantic queries and reasoning. The system offers two retrieval methods:

    • RAG-based Retrieval: Fast embedding-based vector search for efficient large-scale retrieval
    • LLM-based Retrieval: Direct file reading through natural language understanding, allowing deep search by tracking step-by-step from categories → items → original resources without relying on embedding search
  • Self-Evolving Memory: A feedback-driven mechanism continuously adapts the memory structure according to real usage patterns.


🚀 Get Started

Installation

pip install memu-py

Basic Usage

from memu.app import MemoryService
import logging


async def test_memory_service():
    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s [%(levelname)s] %(name)s: %(message)s",
    )
    logger = logging.getLogger("memu")
    logger.setLevel(logging.DEBUG)

    # Initialize MemoryService with your OpenAI API key
    service = MemoryService(llm_config={"api_key": "your-openai-api-key"})

    # Memorize a conversation
    memory = await service.memorize(
        resource_url="tests/example/example_conversation.json",
        modality="conversation"
    )

    # Test 1: RAG-based retrieval with query context
    # Multiple queries enable automatic query rewriting with context
    print("\n[Test 1] RAG-based Retrieval with query context")
    queries_with_context = [
        {"role": "user", "content": {"text": "Tell me about the user's preferences"}},
        {"role": "assistant", "content": {"text": "I can help you with that. Let me search the memory."}},
        {"role": "user", "content": {"text": "What are their habits?"}},
    ]
    retrieved_rag = await service.retrieve(queries=queries_with_context)
    print(f"Needs retrieval: {retrieved_rag.get('needs_retrieval')}")
    print(f"Original query: {retrieved_rag.get('original_query')}")
    print(f"Rewritten query: {retrieved_rag.get('rewritten_query')}")
    print(f"Next step query: {retrieved_rag.get('next_step_query')}")
    print(f"Results: {len(retrieved_rag.get('categories', []))} categories, "
          f"{len(retrieved_rag.get('items', []))} items")

    # Test 2: Single query without context (no rewriting)
    print("\n[Test 2] Single query without context")
    queries_no_context = [
        {"role": "user", "content": {"text": "What are their habits?"}}
    ]
    retrieved_single = await service.retrieve(queries=queries_no_context)
    print(f"Needs retrieval: {retrieved_single.get('needs_retrieval')}")
    print(f"Original query: {retrieved_single.get('original_query')}")
    print(f"Rewritten query: {retrieved_single.get('rewritten_query')}")
    print(f"Next step query: {retrieved_single.get('next_step_query')}")
    print(f"Results: {len(retrieved_single.get('categories', []))} categories, "
          f"{len(retrieved_single.get('items', []))} items")


if __name__ == "__main__":
    import asyncio
    asyncio.run(test_memory_service())
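Note: this script assumes a valid OpenAI API key and the example conversation file shipped in the repository at tests/example/example_conversation.json; point resource_url at your own data to memorize something else.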

Understanding Retrieval Methods

MemU provides two distinct retrieval approaches, each optimized for different scenarios:

Query Structure

Queries are passed as a list of message objects in the format:

[
    {"role": "user", "content": {"text": "Tell me about the user's preferences"}},
    {"role": "assistant", "content": {"text": "I can help you with that."}},
    {"role": "user", "content": {"text": "What are their habits?"}}
]
  • Roles can be user, assistant, or other custom roles
  • The last query in the list is the current query
  • Previous queries (with their roles) provide context for automatic query rewriting
  • If only one query is provided, no rewriting occurs
  • The system returns a next_step_query to suggest the next retrieval step
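As an illustration, a small helper like the following can build this format from a plain chat history. The helper is hypothetical, not part of the memu API:

from typing import Dict, List, Tuple


def build_queries(history: List[Tuple[str, str]]) -> List[Dict]:
    """Turn (role, text) pairs into the message-object format above.
    The last entry is the current query; earlier ones are rewriting context."""
    return [{"role": role, "content": {"text": text}} for role, text in history]


queries = build_queries([
    ("user", "Tell me about the user's preferences"),
    ("assistant", "I can help you with that."),
    ("user", "What are their habits?"),   # current query; "their" gets resolved
])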

1. RAG-based Retrieval (method="rag")

Fast embedding-based vector search using cosine similarity. Ideal for:

  • Large-scale datasets
  • Real-time performance requirements
  • Cost-effective retrieval at scale

The system progressively searches through three layers:

  1. Category Layer: Searches category summaries
  2. Item Layer: Searches memory items within relevant categories
  3. Resource Layer: Tracks back to original multimodal resources (conversations, documents, videos, etc.)

At each layer, the system judges whether sufficient information has been found and dynamically rewrites the query with context for deeper search.
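The sketch below illustrates this progressive, early-stopping search in simplified form. It is an assumption-level illustration, not MemU's internal implementation: the categories structure, field names, and threshold are invented for the example, and the per-layer query rewriting is omitted.

import numpy as np


def cosine_sim(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def progressive_search(query_vec, categories, threshold=0.75, top_k=3):
    # 1. Category Layer: rank category summaries by similarity.
    ranked = sorted(categories,
                    key=lambda c: cosine_sim(query_vec, c["embedding"]),
                    reverse=True)[:top_k]
    hits = [c for c in ranked
            if cosine_sim(query_vec, c["embedding"]) >= threshold]
    if hits:
        # Sufficient information at the top layer: stop early.
        return {"layer": "category", "results": hits}

    # 2. Item Layer: search memory items inside the most relevant categories.
    items = [i for c in ranked for i in c["items"]]
    ranked_items = sorted(items,
                          key=lambda i: cosine_sim(query_vec, i["embedding"]),
                          reverse=True)[:top_k]
    good = [i for i in ranked_items
            if cosine_sim(query_vec, i["embedding"]) >= threshold]
    if good:
        return {"layer": "item", "results": good}

    # 3. Resource Layer: track back to the original multimodal resources.
    return {"layer": "resource",
            "results": [i["resource_id"] for i in ranked_items]}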

2. LLM-based Retrieval (method="llm")

Direct file reading through natural language understanding. Ideal for:

  • Complex semantic queries requiring nuanced understanding
  • Deep contextual reasoning
  • Scenarios where interpretability is critical

This method uses the LLM to:

  • Read and comprehend natural language memory files directly
  • Rank results based on semantic relevance
  • Provide reasoning for each ranked result
  • Track step-by-step from categories → items → original resources without relying on embeddings
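A rough sketch of the idea, assuming memory files are Markdown documents on disk and call_llm is any chat-completion function you supply; the prompt and result shape are illustrative assumptions, not memu internals:

import json
from pathlib import Path


def llm_retrieve(query, memory_dir, call_llm):
    # Read every memory file directly -- no embeddings involved.
    files = {p.name: p.read_text() for p in Path(memory_dir).glob("*.md")}
    prompt = (
        "You are a memory retriever. Given the query and the memory files "
        "below, return a JSON list of {file, relevance, reasoning} objects, "
        "most relevant first.\n\n"
        f"Query: {query}\n\n"
        + "\n\n".join(f"## {name}\n{text}" for name, text in files.items())
    )
    # call_llm takes a prompt string and returns the model's text response.
    return json.loads(call_llm(prompt))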

Both methods support:

  • Full traceability: Each retrieved item includes its resource_id, allowing you to trace back to the original source
  • Context-aware rewriting: Automatically resolves pronouns and references using previous queries as context
  • Pre-retrieval decision: Intelligently determines if memory retrieval is needed for the query
  • Progressive search: Stops early if sufficient information is found at higher layers
  • Next step suggestion: Returns next_step_query for iterative multi-turn retrieval
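For example, these fields can be chained into an iterative deep-retrieval loop. The field names (needs_retrieval, next_step_query) come from the basic usage example above, but the loop itself is a sketch, not a memu API:

async def deep_retrieve(service, first_query: str, max_steps: int = 3):
    queries = [{"role": "user", "content": {"text": first_query}}]
    results = []
    for _ in range(max_steps):
        out = await service.retrieve(queries=queries)
        if not out.get("needs_retrieval"):   # pre-retrieval decision says stop
            break
        results.append(out)
        follow_up = out.get("next_step_query")
        if not follow_up:                    # nothing deeper to ask
            break
        # Feed the suggested follow-up back in as the next current query.
        queries.append({"role": "user", "content": {"text": follow_up}})
    return results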

📄 License

MemU is released under the Apache License 2.0. By contributing to MemU, you agree that your contributions will be licensed under the same license.


🌍 Community

For more information, please contact info@nevamind.ai.

  • GitHub Issues: Report bugs, request features, and track development. Submit an issue

  • Discord: Get real-time support, chat with the community, and stay updated. Join us

  • X (Twitter): Follow for updates, AI insights, and key announcements. Follow us


🤝 Ecosystem

We're proud to work with amazing organizations:

Development Tools

Ten · OpenAgents · xRoute · jazz · buddie · bytebase · LazyLLM


Interested in partnering with MemU? Contact us at contact@nevamind.ai