A collection of pre-built wrappers over common RAG systems like ChromaDB, Weaviate, Pinecone, and others!
Updated Oct 27, 2025 - Python
LangGraph Mastery Playbook: guided, code-first lessons for building memory-aware LLM agents and workflows with LangGraph, TrustCall, and LangChain.
🧠 AI Second Brain — 100% Local Knowledge Management. Private, self-hosted second brain. Store, search, and synthesize documents, images, and ideas with local LLMs (LLaVA, CLIP) and PostgreSQL + pgvector. Features multimodal search, knowledge graphs, gDrive streaming, and real-time analysis — all offline, no API keys, no cloud, no limits.
MCP Persistent memory systems for LLMs - CASCADE 6-layer memory + Faiss GPU search (<2ms). Give any AI persistent memory across conversations. Open source, MIT license.
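The fast Faiss lookup the entry above advertises comes down to nearest-neighbor search over stored memory embeddings. A minimal sketch of that idea in plain NumPy, with invented memories and random stand-in embeddings (none of the names or data here come from the repo):

```python
import numpy as np

# Hypothetical memory store: each past conversation fact, embedded once.
memories = [
    "user prefers dark mode",
    "project deadline is Friday",
    "user's dog is named Rex",
]
rng = np.random.default_rng(0)
# Stand-in vectors; a real system would use a sentence-encoder model.
embeddings = rng.normal(size=(len(memories), 64)).astype("float32")
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

def recall(query_vec: np.ndarray, k: int = 2) -> list[str]:
    """Return the k memories whose embeddings are closest (cosine) to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    scores = embeddings @ q        # cosine similarity (vectors are unit-length)
    top = np.argsort(-scores)[:k]  # indices sorted by highest similarity
    return [memories[i] for i in top]

# Querying with a stored vector returns that memory first.
print(recall(embeddings[1])[0])  # → "project deadline is Friday"
```

Faiss accelerates exactly this dot-product-and-top-k step on GPU over millions of vectors; the brute-force version above is only meant to show what is being computed.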
Contextual Memory Intelligence for AI Systems - Persistent memory, cognitive tools, and adaptive reasoning capabilities for LLMs (evolved from Clay-CXD)
🧠 Synapse - Supercharge your AI coding assistants with memory, context, MCP tools, and intelligent routing. Works with Cline, Roo, Cursor, and any OpenAI-compatible tool. Just change the API endpoint and get superpowers! ✨
A lightweight, pluggable memory backend for agent-based simulations. Supports temporal data, experience replay, and persistent state logging
Token-ranked neuro-symbolic transformer with SQL working-memory, causal graph reasoning, and adaptive belief consolidation for self-explaining cognition.
A self-reflective LLM agent with tools, memory, and reasoning built using LangChain + ReAct + Reflexion. Modular FastAPI backend + Streamlit UI.
Contextual Memory Intelligence for AI Systems - Persistent memory, cognitive tools, and adaptive reasoning capabilities for LLMs. Experimental memory system for LLMs (see MemMimic for the optimized version).
Store millions of text chunks inside ultra-compact MP4 files, index them with local embeddings, and retrieve answers instantly for fully offline RAG with any LLM.
Sophisticated memory system for AI assistants with hybrid vector-LLM retrieval, importance-weighted retention scoring, and intelligent deduplication. Built for Open WebUI.
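Importance-weighted retention, as the last entry describes it, can be pictured as scoring each memory by a blend of query relevance, time-decayed recency, and an importance value assigned at write time. The weights, half-life, and data below are invented for illustration and are not taken from the project:

```python
import math
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    similarity: float   # relevance to the current query, in [0, 1]
    importance: float   # assigned when the memory was stored, in [0, 1]
    age_hours: float    # hours since the memory was written

def retention_score(m: Memory, half_life_hours: float = 24.0) -> float:
    """Blend relevance, exponentially decayed recency, and importance.

    The 0.5 / 0.3 / 0.2 weights are illustrative, not from the repo.
    """
    recency = math.exp(-math.log(2) * m.age_hours / half_life_hours)
    return 0.5 * m.similarity + 0.3 * recency + 0.2 * m.importance

mems = [
    Memory("old but vital fact", similarity=0.4, importance=0.9, age_hours=72),
    Memory("fresh chit-chat",    similarity=0.4, importance=0.1, age_hours=1),
]
ranked = sorted(mems, key=retention_score, reverse=True)
```

A real retention policy would also prune low-scoring entries and deduplicate near-identical ones; this sketch only shows how a single composite score can trade recency off against importance.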