Your Memory, Any LLM - Auto-saving conversational memory with MCP support.
```
┌──────────────────────────────────────────────────────┐
│                      EASYMEMORY                      │
├──────────────────────────────────────────────────────┤
│                                                      │
│  ┌─────────┐  ┌─────────┐  ┌─────────┐  ┌─────────┐  │
│  │ Claude  │  │   GPT   │  │ Gemini  │  │  Local  │  │
│  └────┬────┘  └────┬────┘  └────┬────┘  └────┬────┘  │
│       │            │            │            │       │
│       └────────────┴─────┬──────┴────────────┘       │
│                          │                           │
│                   ┌──────▼──────┐                    │
│                   │ MCP Server  │                    │
│                   └──────┬──────┘                    │
│                          │                           │
│                   ┌──────▼──────┐                    │
│                   │   Memory    │                    │
│                   │    Store    │                    │
│                   └─────────────┘                    │
│                                                      │
└──────────────────────────────────────────────────────┘
```
- 🔄 Auto-Save: Every conversation automatically saved
- 🔍 Smart Search: Semantic search across all memories
- 🧩 Hybrid Retrieval+: Graph + Vector + Keyword + Built-in Local Knowledge Index (no external libs)
- 📄 Document Support: PDF, DOCX, TXT, Markdown
- 🔌 MCP Server: Works with Claude, GPT, any MCP-compatible LLM
- 💾 100% Local: Your data stays on your machine
- 🏢 Enterprise Security: OAuth2 (Client Credentials), API keys, rate limiting, audit logging
- 🔗 Integrations: Slack JSON import + Notion/GDrive folder indexing
- 🚀 Easy Setup: One command to start
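To illustrate what "Hybrid Retrieval+" means in practice, here is a minimal sketch of fusing ranked result lists from several retrieval signals via reciprocal-rank fusion. This is a generic illustration, not EasyMemory's actual algorithm; the function name, the `k=60` constant, and the example doc IDs are all hypothetical.

```python
# Sketch: merge ranked doc-id lists from several signals into one ranking
# using reciprocal-rank fusion (RRF). Illustrative only, not EasyMemory's
# actual implementation.

def fuse_rankings(rankings, k=60):
    """Score each doc by the sum of 1/(k + rank + 1) over all rankings."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

keyword = ["note-7", "note-2", "note-9"]   # BM25-style keyword hits
vector  = ["note-2", "note-4", "note-7"]   # embedding nearest neighbours
graph   = ["note-2", "note-9"]             # entity-graph expansion

print(fuse_rankings([keyword, vector, graph]))
# → ['note-2', 'note-7', 'note-9', 'note-4']
```

Documents that appear high in several rankings (like `note-2` above) dominate, which is the point of combining keyword, vector, and graph signals.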
```bash
# Clone the repo
git clone https://github.com/yourusername/easymemory.git
cd easymemory

# Install in development mode
pip install -e .

# Start the MCP server
easymemory-server --port 8100
```

Then configure your LLM client to connect to `http://localhost:8100/mcp`.
Health checks:

- `http://localhost:8100/healthz`
- `http://localhost:8100/readyz`
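The two endpoints above can be probed from a script. A minimal sketch using only the standard library (the endpoint paths come from this README; the function name and timeout are arbitrary choices):

```python
import urllib.error
import urllib.request

def is_up(url, timeout=2.0):
    """Return True if the endpoint answers with HTTP 200, False otherwise."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    print("live: ", is_up("http://localhost:8100/healthz"))
    print("ready:", is_up("http://localhost:8100/readyz"))
```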
```bash
# With Ollama
easymemory-agent --provider ollama --model llama3.1:8b

# With OpenAI
easymemory-agent --provider openai --model gpt-4
```

Or use it as a library:

```python
import asyncio

from easymemory.agent import EasyMemoryAgent

async def main():
    async with EasyMemoryAgent(
        llm_provider="ollama",
        model="llama3.1:8b"
    ) as agent:
        # Chat - automatically saves everything!
        response = await agent.chat("Hello! Remember that I love Python.")
        print(response)

        # Later...
        response = await agent.chat("What do I love?")
        print(response)  # "You mentioned that you love Python!"

asyncio.run(main())
```

When running as an MCP server, these tools are available:
| Tool | Description |
|---|---|
| `memory_add` | Save a note/fact/idea |
| `memory_search` | Search memories (all, conversations, documents, notes, knowledge, hybrid) |
| `memory_add_file` | Import PDF, DOCX, TXT, MD |
| `memory_index_path` | Index a local Markdown/TXT vault into the internal knowledge index |
| `memory_list` | List saved memories |
| `memory_delete` | Delete a specific memory |
| `memory_stats` | Show statistics |
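For instance, a `memory_add` call in the same shape as the vault-indexing example below might look like this (the `content` and `tags` field names are illustrative guesses; check the tool schema the server actually exposes):

```json
{"tool": "memory_add", "content": "Prefer Python for scripting tasks", "tags": ["preferences"]}
```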
Example for local vault indexing (no external tools):

```json
{"tool": "memory_index_path", "path": "/path/to/vault", "recursive": true, "pattern": "*.md"}
```

All data is stored locally in:

- Linux/Mac: `~/.easymemory/data/`
- Windows: `C:\Users\<you>\.easymemory\data\`
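The data directory can be resolved programmatically. A small sketch, assuming the `EASYMEMORY_DATA_DIR` variable (documented under configuration below) overrides the default; the helper name is hypothetical:

```python
import os
from pathlib import Path

def data_dir() -> Path:
    """Resolve the EasyMemory data directory, honouring the env override."""
    override = os.environ.get("EASYMEMORY_DATA_DIR")
    if override:
        return Path(override)
    # Default: ~/.easymemory/data (expands per-platform via Path.home())
    return Path.home() / ".easymemory" / "data"

print(data_dir())
```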
Add to `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "easymemory": {
      "url": "http://localhost:8100/mcp"
    }
  }
}
```

```bash
# Optional: Custom data directory
export EASYMEMORY_DATA_DIR=/path/to/data

# Optional: Embedding model (default: BAAI/bge-m3)
export EASYMEMORY_EMBED_MODEL=BAAI/bge-m3

# Optional: MCP server config
export EASYMEMORY_HOST=0.0.0.0
export EASYMEMORY_PORT=8100
export EASYMEMORY_LOG_LEVEL=info

# Optional: LLM resilience
export EASYMEMORY_LLM_TIMEOUT=120
export EASYMEMORY_LLM_MAX_RETRIES=2
export EASYMEMORY_EXTRACT_TIMEOUT=60
export EASYMEMORY_EXTRACT_MAX_RETRIES=1

# Enterprise auth/security
export EASYMEMORY_OAUTH_SECRET=change-me-in-prod
export EASYMEMORY_OAUTH_CLIENTS='{"app-prod":{"secret":"supersecret","tenant_id":"team-prod","roles":["reader","writer"]}}'
export EASYMEMORY_ADMIN_TOKEN=another-secret
export EASYMEMORY_RATE_LIMIT_PER_MIN=180
export EASYMEMORY_IMPORT_ROOTS=/srv/knowledge,/home/user/vault
```
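These variables can be consumed with plain `os.environ` lookups. A minimal sketch whose defaults mirror the values listed above; the `ServerConfig` dataclass itself is hypothetical, not EasyMemory's actual config object:

```python
import os
from dataclasses import dataclass

@dataclass
class ServerConfig:
    """Server settings assembled from EASYMEMORY_* environment variables."""
    host: str = os.environ.get("EASYMEMORY_HOST", "0.0.0.0")
    port: int = int(os.environ.get("EASYMEMORY_PORT", "8100"))
    log_level: str = os.environ.get("EASYMEMORY_LOG_LEVEL", "info")
    llm_timeout: float = float(os.environ.get("EASYMEMORY_LLM_TIMEOUT", "120"))
    llm_max_retries: int = int(os.environ.get("EASYMEMORY_LLM_MAX_RETRIES", "2"))
    rate_limit_per_min: int = int(os.environ.get("EASYMEMORY_RATE_LIMIT_PER_MIN", "180"))

print(ServerConfig())
```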
```bash
# 1) Install package + deps
pip install -e .

# 2) Run quick sanity checks
python3 -m compileall src
PYTHONPATH=src python3 -m unittest discover -s tests -v

# 3) Start MCP server
easymemory-server --host 0.0.0.0 --port 8100 --log-level info

# 4) Index your local knowledge vault (optional)
easymemory index --path /path/to/vault --pattern "*.md"
# (alias)
easymemory-index --path /path/to/vault --pattern "*.md"

# 5) Run LoCoMo-style benchmark
easymemory-locomo --provider ollama --model gpt-oss:120b-cloud

# 6) Run "proof" benchmark (single-hop, multi-hop, adversarial)
easymemory prove --profiles 80 --seed 7
# (alias)
easymemory-prove --profiles 80 --seed 7
```

Recommended deployment checks:

- Use `/healthz` for liveness probes
- Use `/readyz` for readiness probes
- Persist `~/.easymemory/data` on a durable volume
- Set the provider/model via environment variables for reproducible runs
EasyMemory uses OAuth2 Client Credentials.
```bash
# 1) Get a token
curl -X POST http://localhost:8100/oauth/token \
  -d "grant_type=client_credentials" \
  -d "client_id=app-prod" \
  -d "client_secret=supersecret" \
  -d "scope=memory:read memory:write"

# 2) Query the enterprise API
curl -X POST http://localhost:8100/v1/search \
  -H "Authorization: Bearer <ACCESS_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{"query":"project notes","n_results":10,"search_type":"hybrid"}'
```

Enterprise endpoints:

- `POST /v1/notes`
- `POST /v1/search`
- `POST /v1/index`
- `GET /v1/stats`
- `POST /v1/integrations/slack/import`
- `POST /v1/integrations/notion/import`
- `POST /v1/integrations/gdrive/import`
- `POST /admin/api-keys` (header `X-Admin-Token`)
- `GET /admin/api-keys` (header `X-Admin-Token`)
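The same two-step flow can be scripted with only the standard library. A sketch that mirrors the `curl` calls above; it assumes the token response carries an `access_token` field (standard OAuth2), and the helper names are hypothetical:

```python
import json
import urllib.parse
import urllib.request

def token_request(base_url, client_id, client_secret, scope):
    """Build the OAuth2 client-credentials token request (form-encoded POST)."""
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    }).encode()
    return urllib.request.Request(f"{base_url}/oauth/token", data=body, method="POST")

def search_request(base_url, access_token, query, n_results=10, search_type="hybrid"):
    """Build the authenticated /v1/search request."""
    payload = json.dumps(
        {"query": query, "n_results": n_results, "search_type": search_type}
    ).encode()
    return urllib.request.Request(
        f"{base_url}/v1/search",
        data=payload,
        method="POST",
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
    )

# Against a live server, the two steps would be:
#   token = json.load(urllib.request.urlopen(
#       token_request("http://localhost:8100", "app-prod", "supersecret",
#                     "memory:read memory:write")))["access_token"]
#   hits = json.load(urllib.request.urlopen(
#       search_request("http://localhost:8100", token, "project notes")))
```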
- CI workflow included: `.github/workflows/ci.yml` (Python 3.10/3.11/3.12)
- License included: `LICENSE` (MIT)
- Clean repo defaults: `.gitignore` for caches/build/local data
```
easymemory/
├── src/easymemory/
│   ├── core/
│   │   ├── memory_store.py   # ChromaDB + embeddings
│   │   └── ingestion.py      # Document processing
│   ├── server.py             # MCP server facade
│   ├── web_ui.py             # Gradio Web UI
│   ├── integration.py        # MCP tools implementation
│   ├── agent.py              # Orchestrator agent
│   └── main.py               # CLI entry points
├── pyproject.toml
└── README.md
```
MIT License - Use it however you want!
Made with 🧠 by the EasyMemory Team
