The memory-mcp platform is currently in development. The Self-hosted Community Edition is available now!

Configuration

“CHANGEME is a perfectly secure password - said no one ever, except maybe in a .env.example file.”
~ beep

This page is a configuration reference for Memory-MCP-CE (Community Edition).

It assumes you already have the service running locally or in Docker.
If not, see the Overview and Quickstart first.


Production hardening checklist #

Before exposing Memory-MCP-CE to the internet (or to people you don’t fully trust), review the following:

  • POSTGRES_PASSWORD
    Generate with: openssl rand -base64 32

  • ENCRYPTION_KEY
    Strongly recommended for production (see Security & Encryption)

  • OAUTH_CLIENT_SECRET, OAUTH_USERNAME, OAUTH_PASSWORD
    Generate fresh secrets and use real credentials

  • SERVER_URL
    Set to your public domain (required for OAuth callbacks)

The default credentials are not production-safe. If you see CHANGEME, change it. Seriously.
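As a convenience, the checklist's secrets can all be generated in one go. The sketch below is illustrative: the variable names match this page, and secrets.token_urlsafe is one reasonable generator among several (openssl rand works equally well).

```python
# Sketch: generate replacements for every CHANGEME placeholder above.
# Variable names match this page; the generation method is a suggestion.
import secrets

env = {
    "POSTGRES_PASSWORD": secrets.token_urlsafe(32),
    "ENCRYPTION_KEY": secrets.token_urlsafe(32),
    "OAUTH_CLIENT_SECRET": secrets.token_urlsafe(32),
    "OAUTH_PASSWORD": secrets.token_urlsafe(32),
}

# Print in .env format, ready to paste into your environment file.
for key, value in env.items():
    print(f"{key}={value}")
```

Paste the output into your .env file, replacing the placeholder lines.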


PostgreSQL configuration #

POSTGRES_HOST=postgres
POSTGRES_PORT=5432
POSTGRES_USER=memory
POSTGRES_PASSWORD=CHANGEME
POSTGRES_DB=memory

These variables configure both:

  • the PostgreSQL container (if using docker-compose), and
  • the MCP-CE application itself.

Notes

  • POSTGRES_HOST=postgres works because it matches the docker-compose service name
  • External databases are supported: point POSTGRES_HOST at your own PostgreSQL instance
  • Database and user are created automatically on first run
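A quick way to confirm the host/port pair is actually reachable before starting the app is a plain TCP check. This is a standard-library sketch, not part of MCP-CE; the env var names match this page and the defaults are illustrative.

```python
# Sketch: check that the configured database host accepts TCP connections.
# Standard library only; env var names match this page.
import os
import socket

def postgres_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # DNS failure, refused, or timed out
        return False

host = os.environ.get("POSTGRES_HOST", "postgres")
port = int(os.environ.get("POSTGRES_PORT", "5432"))
print(postgres_reachable(host, port, timeout=1.0))
```

Note this only proves the port is open; authentication failures (a wrong POSTGRES_PASSWORD) surface later, in the application logs.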

Embedding model configuration #

Memory-MCP-CE supports any OpenAI-compatible embedding API.

EMBEDDING_URL=http://ollama:11434/v1
EMBEDDING_MODEL=granite-embedding:30m
EMBEDDING_API_KEY=

Common providers #

Ollama (local, recommended)

EMBEDDING_URL=http://ollama:11434/v1
EMBEDDING_MODEL=granite-embedding:30m
EMBEDDING_API_KEY=

OpenAI

EMBEDDING_URL=https://api.openai.com/v1
EMBEDDING_MODEL=text-embedding-3-small
EMBEDDING_API_KEY=sk-...

LM Studio

EMBEDDING_URL=http://localhost:1234/v1
EMBEDDING_MODEL=your-loaded-model
EMBEDDING_API_KEY=

Different embedding models produce different vector sizes (e.g. 384, 768, 1536).

Memories are stored in dimension-specific tables such as memory_384 and memory_768.

Mixing dimensions in a single table is not supported.
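The table a memory lands in follows directly from the length of the vector the embedding API returns. A small sketch, assuming the standard OpenAI-style /v1/embeddings response shape; the helper name is ours, only the memory_<dim> naming comes from this page.

```python
# Sketch: given a response from an OpenAI-compatible /v1/embeddings endpoint,
# determine which dimension-specific table a memory would land in.

def table_for_embedding(response: dict) -> str:
    """Map an embeddings API response to its memory_<dim> table name."""
    vector = response["data"][0]["embedding"]
    return f"memory_{len(vector)}"

# Example with a fake 384-dimensional response, the size produced by
# small embedding models such as granite-embedding:30m:
fake_response = {"data": [{"embedding": [0.0] * 384}]}
print(table_for_embedding(fake_response))  # memory_384
```

This is why switching EMBEDDING_MODEL mid-deployment effectively starts a fresh memory store: a model with a different output dimension reads and writes a different table.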


Memory isolation (Namespaces) #

NAMESPACE=

Namespaces provide logical isolation within the same database and embedding model.

Behavior:

  • Empty (default): queries all memories in the table
  • Set (e.g. personal, helpdesk, user_123): scopes reads and writes to that namespace

Common use cases:

  • multi-tenant deployments (one namespace per user)
  • separating dev / staging / production
  • isolating different AI agents or projects

Namespaces do not move or rewrite existing data — they only affect retrieval scope.
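Conceptually, a namespace is just an extra filter on every read and write. The sketch below illustrates that idea only; the table and column names (memory_384, namespace) are assumptions based on this page, not the actual MCP-CE schema.

```python
# Sketch: namespace scoping expressed as an optional SQL filter.
# Table/column names are illustrative, not the real MCP-CE schema.
from typing import Optional

def scoped_query(namespace: Optional[str]) -> tuple:
    """Build a parameterized search query, optionally scoped to a namespace."""
    base = "SELECT id, content FROM memory_384"
    if namespace:  # NAMESPACE set: restrict reads to that namespace
        return base + " WHERE namespace = %s", (namespace,)
    return base, ()  # NAMESPACE empty: query all memories in the table

print(scoped_query("personal"))
print(scoped_query(None))
```

Because the filter is applied at query time, clearing NAMESPACE immediately makes all rows visible again; nothing is migrated.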


Security & encryption #

Content encryption (optional) #

ENCRYPTION_KEY=

When enabled, sensitive memory content is encrypted at rest.

  • Encrypted: content
  • Not encrypted: labels, embeddings, source
  • Algorithm: AES-256-GCM
  • Key derivation: Argon2id
  • Salt: random per-memory

Generate a key:

python -c "import secrets; print(secrets.token_urlsafe(32))"

Changing ENCRYPTION_KEY will make previously encrypted memories unreadable unless you implement key rotation.
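The per-memory salt means each memory gets its own derived AES-256 key from the single configured ENCRYPTION_KEY. The sketch below shows that derivation step only. This page specifies Argon2id, which is not in the Python standard library, so the sketch substitutes hashlib.scrypt purely for illustration; the cost parameters are also illustrative.

```python
# Sketch of the key-derivation step: a random per-memory salt plus the
# configured ENCRYPTION_KEY yields a unique 32-byte key per memory.
# MCP-CE uses Argon2id; scrypt stands in here because it is in the stdlib.
import hashlib
import os

def derive_memory_key(encryption_key: str, salt: bytes) -> bytes:
    """Derive a 32-byte (AES-256) key from the master key and a salt."""
    return hashlib.scrypt(
        encryption_key.encode(), salt=salt, n=2**14, r=8, p=1, dklen=32
    )

salt = os.urandom(16)  # random per-memory salt, stored alongside the record
key = derive_memory_key("CHANGEME-not-a-real-key", salt)
assert len(key) == 32  # exactly the AES-256 key size
```

The salt is stored with the ciphertext, so decryption only needs the original ENCRYPTION_KEY; this is also why changing the key orphans existing ciphertexts.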


Bearer token authentication (optional) #

BEARER_TOKEN=

Enables simple API-to-API authentication:

  • Leave empty for local or trusted deployments
  • Set when exposing MCP-CE publicly

Generate a token:

python -c "import secrets; print(secrets.token_urlsafe(32))"
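Clients then present the token in a standard HTTP bearer Authorization header. A standard-library sketch; the URL and token value are placeholders.

```python
# Sketch: calling the server with a bearer token. URL and token are
# placeholders; the header format is standard HTTP bearer auth.
import urllib.request

token = "my-generated-token"
req = urllib.request.Request(
    "http://localhost:8000/mcp",
    headers={"Authorization": f"Bearer {token}"},
)
print(req.get_header("Authorization"))  # Bearer my-generated-token
# urllib.request.urlopen(req) would then send the authenticated request.
```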

OAuth configuration #

Bundled OAuth (single-user) #

Memory-MCP-CE includes a built-in OAuth provider intended for personal or single-user deployments. It is disabled by default; set OAUTH_BUNDLED=true to enable it.

OAUTH_BUNDLED=false
OAUTH_CLIENT_ID=memory-mcp-ce
OAUTH_CLIENT_SECRET=CHANGEME
OAUTH_USERNAME=CHANGEME
OAUTH_PASSWORD=CHANGEME

This is not a multi-tenant OAuth system. For SaaS or multi-user setups, place MCP-CE behind your own auth proxy.


Token lifetimes #

OAUTH_ACCESS_TOKEN_EXPIRY=3600
OAUTH_REFRESH_TOKEN_EXPIRY=604800
OAUTH_AUTH_CODE_EXPIRY=300

Defaults follow MCP OAuth best practices:

  • Access tokens: short-lived
  • Refresh tokens: longer-lived
  • Auth codes: very short-lived

Redirect URIs #

OAUTH_REDIRECT_URIS=https://claude.ai/api/mcp/auth_callback,https://chatgpt.com/connector_platform_oauth_redirect
Platform     Redirect URI
Claude.ai    https://claude.ai/api/mcp/auth_callback
ChatGPT      https://chatgpt.com/connector_platform_oauth_redirect
Local dev    http://localhost/callback

Multiple URIs may be specified as a comma-separated list.
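Parsing the comma-separated list is straightforward; the sketch below also adds a sanity check. The "https required except on localhost" rule is a common OAuth convention, not necessarily what MCP-CE itself enforces.

```python
# Sketch: parsing and sanity-checking an OAUTH_REDIRECT_URIS value.
# The https-except-localhost rule is a common OAuth convention, shown
# here for illustration.
from urllib.parse import urlparse

def parse_redirect_uris(raw: str) -> list:
    """Split the comma-separated list and reject obviously unsafe entries."""
    uris = [u.strip() for u in raw.split(",") if u.strip()]
    for uri in uris:
        parts = urlparse(uri)
        if parts.scheme != "https" and parts.hostname not in ("localhost", "127.0.0.1"):
            raise ValueError(f"insecure redirect URI: {uri}")
    return uris

print(parse_redirect_uris(
    "https://claude.ai/api/mcp/auth_callback,http://localhost/callback"
))
```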


Deployment settings #

Public server URL #

SERVER_URL=

Required for:

  • OAuth callbacks
  • CORS configuration
  • absolute URL generation

Leave empty for local deployment. Set to your public domain for internet-facing deployments.


Ollama-specific settings #

OLLAMA_KEEP_ALIVE=-1
OLLAMA_MODELS=/models

  • OLLAMA_KEEP_ALIVE=-1 keeps models resident in memory
  • This trades more RAM for lower latency on repeated embedding calls

Custom themes & templates #

You can customize the bundled OAuth login UI using volume mounts:

volumes:
  - ./custom-theme:/mnt/templates
  - ./custom-static:/mnt/static

Behavior:

  • First run: defaults are copied in
  • Subsequent runs: updated defaults appear as .example files

“If you want microscopic login forms that require a magnifying glass, that’s between you and your users.”
~ beep


Example configurations #

Minimal local development #

POSTGRES_PASSWORD=CHANGEME
EMBEDDING_URL=http://ollama:11434/v1
EMBEDDING_MODEL=granite-embedding:30m
OAUTH_BUNDLED=false

Production-style deployment #

POSTGRES_PASSWORD=<secure>
EMBEDDING_URL=http://ollama:11434/v1
EMBEDDING_MODEL=granite-embedding:30m
ENCRYPTION_KEY=<secure>
OAUTH_BUNDLED=true
OAUTH_CLIENT_SECRET=<secure>
OAUTH_USERNAME=admin
OAUTH_PASSWORD=<secure>
SERVER_URL=https://memory-mcp.yourdomain.com

Next steps #
