The memory-mcp platform is currently in development. The Self-hosted Community Edition is available now!

Memory MCP

Memory MCP is a server implementation of the Model Context Protocol (MCP) that provides persistent, shared memory for AI assistants.

It allows AI systems to store memories, retrieve relevant context using semantic search, and maintain continuity across conversations - even across different sessions, tools, or AI platforms.

Why Memory MCP exists

Most AI assistants are effectively stateless. Once a conversation ends, important context is lost.

Memory MCP acts as a long-lived memory backend that AI clients can connect to using MCP. This enables:

  • Long-term conversational memory
  • Knowledge persistence beyond a single chat
  • Shared memory between multiple AI systems
  • Consistent context across platforms and sessions

Multiple AI clients - different models, different tools, even different vendors - can connect to the same Memory MCP server and share one memory space, allowing them to collaborate using a common source of truth.
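As a concrete illustration, the sketch below shows an MCP client connecting to a self-hosted Memory MCP server over HTTP using the official MCP TypeScript SDK and listing the tools it exposes. The endpoint URL and port are assumptions; adjust them to match your deployment.

    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

    // Assumed endpoint: a self-hosted Memory MCP server listening locally.
    const transport = new StreamableHTTPClientTransport(new URL("http://localhost:8080/mcp"));

    // Identify this client to the server, then connect over the transport.
    const client = new Client({ name: "my-assistant", version: "1.0.0" });
    await client.connect(transport);

    // Discover which memory tools the server exposes.
    const { tools } = await client.listTools();
    console.log(tools.map((tool) => tool.name));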

What Memory MCP provides

At a high level, Memory MCP offers:

  • Persistent storage for conversational memory
  • Semantic retrieval using vector embeddings
  • Namespaces to isolate memory contexts
  • Secure access via tokens or OAuth
  • A standard MCP interface compatible with multiple AI clients

Memory is stored independently of any single AI provider, helping avoid vendor lock-in and enabling flexible multi-AI workflows.
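Concretely, these capabilities are surfaced to clients as MCP tools. The sketch below is hypothetical: the tool names (store_memory, search_memory), the namespace and limit arguments, and the bearer-token header are placeholders for illustration, not the server's actual interface - see the Community Edition section for the real tool names and authentication options.

    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

    // Hypothetical: pass an access token and scope every call to a namespace.
    const transport = new StreamableHTTPClientTransport(
      new URL("http://localhost:8080/mcp"),
      { requestInit: { headers: { Authorization: `Bearer ${process.env.MEMORY_MCP_TOKEN ?? ""}` } } },
    );

    const client = new Client({ name: "my-assistant", version: "1.0.0" });
    await client.connect(transport);

    // Hypothetical tool name and arguments; the real server's schema may differ.
    await client.callTool({
      name: "store_memory",
      arguments: {
        namespace: "project-alpha",
        content: "The user prefers concise answers with TypeScript examples.",
      },
    });

    // Semantic retrieval: the query is matched against stored memories by embedding similarity.
    const results = await client.callTool({
      name: "search_memory",
      arguments: { namespace: "project-alpha", query: "user preferences", limit: 5 },
    });
    console.log(results.content);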

How to use this documentation

This documentation covers both the core Memory MCP concepts and the Community Edition server implementation.

  • Start here to understand what Memory MCP is and how it works
  • See the Community Edition section for self-hosting, configuration, and available tools
  • Additional sections cover authentication, namespaces, and usage patterns

Community Edition

Self-hosted persistent memory for AI conversations via MCP. Full control, no limits, and complete ownership of your data.
