
Turbo Quant Memory for AI Agents

Turbo Quant Memory runs locally as an MCP server and requires Python 3.11+.

Other languages: Russian | Ukrainian

Turbo Quant Memory is the memory layer that makes AI agents feel like long-term teammates instead of short-term chat sessions.

If you use Claude Code, Codex, Cursor, OpenCode, Gemini CLI, or any MCP client, this is how you keep your institutional knowledge alive between tasks.

Why It Matters

Most agent workflows fail in the same place: memory.

  • Great insights disappear in chat history.
  • Every new task restarts from zero.
  • Teams re-explain the same architecture again and again.

Turbo Quant Memory fixes this by making your project knowledge persistent, searchable, and reusable.

Why Teams Choose Turbo Quant Memory

| Typical AI workflow | With Turbo Quant Memory |
| --- | --- |
| Agents forget context between sessions | Agents continue from saved project knowledge |
| Decisions stay buried in old threads | Decisions become reusable notes |
| Team knowledge stays inside one person's head | Knowledge becomes shared, searchable, and portable |
| Token budget is wasted on repeated reading | Context is loaded smarter, so more budget goes to reasoning |

The Core Promise

Your agents stop behaving like temporary assistants and start behaving like members of the team.

What Makes It Different

  • Local-first by design: your memory stays under your control.
  • One memory layer for many clients: same knowledge, same standards, same outcomes.
  • Built for real delivery: capture decisions, patterns, and handoffs that compound over time.
  • Transparent and auditable: memory is explicit, structured, and easy to inspect.

Quick Start

Install and start the server:

uv tool install git+https://github.com/Lexus2016/turbo_quant_memory@v0.2.4
turbo-memory-mcp serve

In your client, register an MCP server named tqmemory that runs the command turbo-memory-mcp serve.

Client-specific setup files: CLIENT_INTEGRATIONS.md
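Many MCP clients take a JSON-style server registration. A minimal sketch, assuming your client uses the common mcpServers map (the exact file location and schema vary by client; CLIENT_INTEGRATIONS.md has the per-client details):

```json
{
  "mcpServers": {
    "tqmemory": {
      "command": "turbo-memory-mcp",
      "args": ["serve"]
    }
  }
}
```

The server name tqmemory matches the name used above; splitting the executable into command plus args is the usual MCP convention.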

Who This Is For

  • AI-first engineering teams
  • Solo builders running multiple agents
  • Product teams that want consistent AI execution quality
  • Anyone tired of repeating context every day

Why Pick This

Choose Turbo Quant Memory if you want:

  • faster onboarding for every new task
  • fewer repeated mistakes
  • stronger continuity across sessions
  • higher ROI from every agent run

About

Local-first MCP memory server for AI coding agents with compact retrieval and project/global scopes.
