MCP Server

MemoryMesh includes a built-in MCP (Model Context Protocol) server that lets AI assistants use your memory directly as a tool. No API keys required for the default setup.

Quick Setup

The fastest way to configure everything:

memorymesh init

This auto-detects your installed AI tools and configures all of them. Or set up manually below.

Try Online (No Install)

Connect to the hosted MemoryMesh server via Smithery -- no local installation needed:

npx -y @smithery/cli install @sparkvibe-io/memorymesh --client claude

Supports 20+ MCP clients. For production use, we recommend the local installation below.

Setup by Tool

Claude Code

Add to ~/.claude/settings.json:

{
  "mcpServers": {
    "memorymesh": {
      "command": "memorymesh-mcp"
    }
  }
}

Claude Desktop

Add to claude_desktop_config.json:

{
  "mcpServers": {
    "memorymesh": {
      "command": "memorymesh-mcp"
    }
  }
}

Gemini CLI

Add to your Gemini CLI MCP settings (~/.gemini/settings.json or project-level config):

{
  "mcpServers": {
    "memorymesh": {
      "command": "memorymesh-mcp"
    }
  }
}

Codex CLI

Add to your Codex CLI MCP settings (~/.codex/config.json or project-level config):

{
  "mcpServers": {
    "memorymesh": {
      "command": "memorymesh-mcp"
    }
  }
}

Cursor

Add to .cursor/mcp.json in your project root:

{
  "mcpServers": {
    "memorymesh": {
      "command": "memorymesh-mcp"
    }
  }
}

Windsurf

Add to your Windsurf MCP settings:

{
  "mcpServers": {
    "memorymesh": {
      "command": "memorymesh-mcp"
    }
  }
}

GitHub Copilot

GitHub Copilot reads AGENTS.md, CLAUDE.md, and GEMINI.md files. Use MemoryMesh sync to keep these files updated:

memorymesh sync --format all

Copilot doesn't support MCP directly, but it benefits from the synced memory files.

Other MCP Clients

Any tool that supports the MCP protocol can connect to MemoryMesh. The MCP server uses JSON-RPC over stdin/stdout:

{
  "mcpServers": {
    "memorymesh": {
      "command": "memorymesh-mcp"
    }
  }
}
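For illustration, the frames a client writes to the server's stdin can be sketched as newline-delimited JSON-RPC messages. This is a minimal sketch of the MCP stdio transport; the initialize parameters and the `recall` arguments shown here are assumptions, not MemoryMesh's documented wire format:

```python
import json

# An MCP stdio client sends one JSON-RPC object per line on the server's stdin.
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1"},
    },
}

# A hypothetical call to the server's `recall` tool.
recall_call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "recall", "arguments": {"query": "testing"}},
}

# Newline-delimited frames, ready to write to the server process's stdin.
frames = "\n".join(json.dumps(m) for m in (initialize, recall_call)) + "\n"
print(frames)
```

In practice your MCP client library builds these frames for you; the point is only that any stdio-capable MCP client can drive `memorymesh-mcp` this way.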

Recommended: Enable Semantic Search with Ollama

By default, the MCP server uses keyword matching. For significantly better recall accuracy, set up Ollama for semantic search. With Ollama, recall("testing") finds memories about "pytest", "unit tests", and "CI pipeline" -- not just exact word matches. See Enabling Semantic Search below.

Environment Variables

Variable                  Default                   Description
MEMORYMESH_PATH           Auto-detected             Path to the project SQLite database
MEMORYMESH_GLOBAL_PATH    ~/.memorymesh/global.db   Path to the global SQLite database
MEMORYMESH_PROJECT_ROOT   Auto-detected             Project root directory
MEMORYMESH_EMBEDDING      none                      Embedding provider (none, local, ollama, openai)
MEMORYMESH_OLLAMA_MODEL   nomic-embed-text          Ollama model name
OPENAI_API_KEY            --                        Required only when using openai embeddings
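To make the defaults concrete, here is a hypothetical sketch of how a server might resolve these variables at startup. The function name and structure are illustrative, not MemoryMesh's actual code:

```python
import os
from pathlib import Path

def resolve_config(env=os.environ):
    """Illustrative resolution of MemoryMesh settings with their defaults."""
    return {
        "global_path": env.get(
            "MEMORYMESH_GLOBAL_PATH",
            str(Path.home() / ".memorymesh" / "global.db"),
        ),
        "embedding": env.get("MEMORYMESH_EMBEDDING", "none"),
        "ollama_model": env.get("MEMORYMESH_OLLAMA_MODEL", "nomic-embed-text"),
    }

cfg = resolve_config({})      # no overrides set: every default applies
print(cfg["embedding"])       # prints "none"
```

Set any of the variables in your MCP config's `env` block (as shown in the Ollama examples below) to override a default.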

Enabling Semantic Search (Ollama)

By default, the MCP server uses keyword matching (MEMORYMESH_EMBEDDING=none). We strongly recommend enabling Ollama for semantic search -- it dramatically improves recall quality. With Ollama, searching for "testing" finds memories about "pytest", "unit tests", and "CI pipeline", not just exact keyword matches.
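The difference between the two modes can be shown with a toy example. The 3-dimensional vectors below are fabricated for illustration; a real model like nomic-embed-text produces hundreds of dimensions:

```python
import math

# Keyword matching: "testing" never literally appears in this memory.
memory = "Use pytest fixtures for database setup"
print("testing" in memory.lower())   # False -- keyword match misses it

# Semantic matching: compare embedding vectors by cosine similarity.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

query_vec = [0.9, 0.1, 0.2]      # made-up embedding of "testing"
memory_vec = [0.8, 0.2, 0.3]     # made-up embedding of the pytest memory
unrelated_vec = [0.1, 0.9, 0.1]  # made-up embedding of an unrelated memory

# The semantically related memory scores far higher than the unrelated one.
print(cosine(query_vec, memory_vec) > cosine(query_vec, unrelated_vec))  # True
```

Ranking by similarity instead of substring presence is what lets `recall("testing")` surface the pytest memory.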

Step 1: Install and start Ollama (one-time setup):

# macOS
brew install ollama && ollama pull nomic-embed-text

# Linux
curl -fsSL https://ollama.com/install.sh | sh && ollama pull nomic-embed-text

Step 2: Update your MCP config to enable Ollama embeddings:

Update ~/.claude/settings.json:

{
  "mcpServers": {
    "memorymesh": {
      "command": "memorymesh-mcp",
      "env": {
        "MEMORYMESH_EMBEDDING": "ollama",
        "MEMORYMESH_OLLAMA_MODEL": "nomic-embed-text"
      }
    }
  }
}

Update claude_desktop_config.json:

{
  "mcpServers": {
    "memorymesh": {
      "command": "memorymesh-mcp",
      "env": {
        "MEMORYMESH_EMBEDDING": "ollama",
        "MEMORYMESH_OLLAMA_MODEL": "nomic-embed-text"
      }
    }
  }
}

Update your Gemini CLI MCP settings:

{
  "mcpServers": {
    "memorymesh": {
      "command": "memorymesh-mcp",
      "env": {
        "MEMORYMESH_EMBEDDING": "ollama",
        "MEMORYMESH_OLLAMA_MODEL": "nomic-embed-text"
      }
    }
  }
}

Update your Codex CLI MCP settings:

{
  "mcpServers": {
    "memorymesh": {
      "command": "memorymesh-mcp",
      "env": {
        "MEMORYMESH_EMBEDDING": "ollama",
        "MEMORYMESH_OLLAMA_MODEL": "nomic-embed-text"
      }
    }
  }
}

Update .cursor/mcp.json or equivalent:

{
  "mcpServers": {
    "memorymesh": {
      "command": "memorymesh-mcp",
      "env": {
        "MEMORYMESH_EMBEDDING": "ollama",
        "MEMORYMESH_OLLAMA_MODEL": "nomic-embed-text"
      }
    }
  }
}

Ollama runs locally -- no API keys, no cloud, no cost. See Configuration > Using Ollama for full setup details and troubleshooting.
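If you want to confirm the model responds outside of MemoryMesh, Ollama exposes an embeddings endpoint on its default local port. A minimal probe (the helper names are mine; the endpoint and payload shape are Ollama's):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/embeddings"  # Ollama's default address

def embed_request(text, model="nomic-embed-text"):
    """Build the JSON body Ollama's embeddings endpoint expects."""
    return json.dumps({"model": model, "prompt": text}).encode()

def embed(text):
    req = urllib.request.Request(
        OLLAMA_URL,
        data=embed_request(text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.loads(resp.read())["embedding"]

# Uncomment once Ollama is running locally:
# vec = embed("hello world")
# print(len(vec))   # nomic-embed-text returns 768-dimensional vectors
```

A connection error here means Ollama isn't running (`ollama serve`) or the model hasn't been pulled yet.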

Verify Your Setup

After configuring the MCP server, verify it's working:

memorymesh status

You should see output showing:

  • Project store: Path to your project database (or "not configured" if no project root detected)
  • Global store: Path to ~/.memorymesh/global.db
  • Embedding provider: The configured provider (e.g., "none", "ollama")
  • Version: The current MemoryMesh version

If the project store shows "not configured", make sure you're running from within a project directory (one containing .git, pyproject.toml, or similar markers).
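Marker-based root detection works by walking upward from the current directory. A hypothetical sketch (the real marker list may differ):

```python
from pathlib import Path

MARKERS = {".git", "pyproject.toml"}  # assumed marker set

def find_project_root(start):
    """Walk upward from `start` until a directory contains a known marker."""
    current = Path(start).resolve()
    for candidate in (current, *current.parents):
        if any((candidate / m).exists() for m in MARKERS):
            return candidate
    return None  # corresponds to the "not configured" status
```

Running `memorymesh status` from a subdirectory of a git repository should therefore still find the repository root.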

Teaching Your AI to Use MemoryMesh

Installing the MCP server gives your AI assistant the ability to use memory. But LLMs do not use tools proactively unless you tell them to. MemoryMesh works alongside your AI tool's existing memory -- it adds structured, searchable, cross-tool persistence on top of what's already there.

The fastest way to set everything up is:

memorymesh init

This auto-detects which AI tools you have installed and adds MemoryMesh instructions to each one. You can also target a single tool:

memorymesh init --only claude
memorymesh init --only codex
memorymesh init --only gemini

Below is what each tool needs and the exact text to add if you prefer to do it manually.

Claude Code

Add a ## Memory (MemoryMesh) section to your project's CLAUDE.md:

## Memory (MemoryMesh)

MemoryMesh is configured as an MCP tool in this project. It adds persistent,
structured, cross-tool memory on top of your existing memory system. Use it
alongside your default memory -- it enhances, not replaces.

### At the start of every conversation

Call `mcp__memorymesh__recall` with a summary of the user's request to load
prior context, decisions, and patterns. If `session_start` is available,
call it to load user profile, guardrails, and project context.

### When to `recall`

- **Start of every conversation**: Check for relevant prior context.
- **Before making decisions**: Check if this was decided before.
- **When debugging**: Check if this problem was encountered previously.

### When to `remember`

- **When the user says "remember this"**: Store it with a category.
- **After completing a task**: Store key decisions and patterns.
  Use `category` to classify: `"decision"`, `"pattern"`, `"context"`.
- **When the user teaches you something**: Use `category: "preference"`
  or `category: "guardrail"` -- these auto-route to global scope.
- **After fixing a non-trivial bug**: Use `category: "mistake"`.

### Scope guidance

Categories auto-route scope. If not using categories:
- Use `scope: "project"` for project-specific decisions.
- Use `scope: "global"` for user preferences and identity.

OpenAI Codex CLI

Add a ## Memory (MemoryMesh) section to your project's AGENTS.md:

## Memory (MemoryMesh)

MemoryMesh adds persistent, structured memory on top of your existing system.
It enhances your default memory with semantic search, categories, and
cross-tool sync.

- At the start of every task, call `recall` with a summary to load prior
  context. If `session_start` is available, call it for full user profile.
- Call `recall` before making decisions to check for prior context.
- After completing work, call `remember` with a `category` to store key
  decisions (`"decision"`), patterns (`"pattern"`), or context (`"context"`).
- When the user says "remember this", store it with `category: "preference"`
  or `category: "guardrail"` for user-level facts.

Google Gemini CLI

Add a ## Memory (MemoryMesh) section to your project's GEMINI.md:

## Memory (MemoryMesh)

MemoryMesh adds persistent, structured memory on top of your existing system.
It enhances your default memory with semantic search, categories, and
cross-tool sync.

- At the start of every task, call `recall` with a summary to load prior
  context. If `session_start` is available, call it for full user profile.
- Call `recall` before making decisions to check for prior context.
- After completing work, call `remember` with a `category` to store key
  decisions (`"decision"`), patterns (`"pattern"`), or context (`"context"`).
- When the user says "remember this", store it with `category: "preference"`
  or `category: "guardrail"` for user-level facts.

Generic / Other MCP-Compatible Tools

For any tool that supports MCP:

  1. Add the MCP server config (see Setup by Tool above).
  2. Add instructions to the tool's system prompt telling it to call recall at the start of conversations and remember after completing work. MemoryMesh works alongside existing memory -- no need to disable anything.

Hybrid Memory Architecture

The MCP server uses a hybrid dual-store architecture that separates project-specific and global memories:

~/.memorymesh/
  global.db                    <- user preferences, identity, cross-project facts

<project-root>/.memorymesh/
  memories.db                  <- project-specific memories, decisions, patterns

The project root is automatically detected from MCP client roots, the MEMORYMESH_PROJECT_ROOT environment variable, or the current working directory (if it contains .git or pyproject.toml).
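That precedence chain can be sketched as follows. This is a hypothetical model of the order described above, not the server's actual code:

```python
import os

def resolve_project_root(client_roots, cwd_has_marker, env=os.environ):
    """Illustrative precedence: MCP client roots, then the env var, then cwd."""
    if client_roots:                              # 1. roots supplied by the MCP client
        return client_roots[0]
    if "MEMORYMESH_PROJECT_ROOT" in env:          # 2. explicit override
        return env["MEMORYMESH_PROJECT_ROOT"]
    if cwd_has_marker:                            # 3. cwd containing .git etc.
        return os.getcwd()
    return None                                   # project store "not configured"
```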

All tools accept an optional scope parameter ("project" or "global"):

  • remember(scope="project") -- stores in the project database (default)
  • remember(scope="global") -- stores in the user-wide database
  • recall() -- searches both databases by default
  • forget_all(scope="project") -- only clears project memories (default; global is protected)
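The scope behavior amounts to two stores with a project-first default. A toy model (not the real implementation):

```python
class DualStore:
    """Toy model of the project/global split and the scope parameter."""

    def __init__(self):
        self.stores = {"project": [], "global": []}

    def remember(self, text, scope="project"):
        self.stores[scope].append(text)

    def recall(self, query, scope=None):
        # By default search both stores; a scope restricts the search.
        targets = [scope] if scope else ["project", "global"]
        return [t for s in targets for t in self.stores[s]
                if query.lower() in t.lower()]

    def forget_all(self, scope="project"):
        # Defaults to project; global is only cleared when asked explicitly.
        self.stores[scope].clear()

mem = DualStore()
mem.remember("Use ruff for linting")                          # project (default)
mem.remember("User prefers concise answers", scope="global")  # user-wide
print(mem.recall("prefers"))   # found via the global store
mem.forget_all()               # clears project memories only
print(mem.stores["global"])    # global memories survive
```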

Available Tools

Once connected, your AI assistant gains these tools:

  • remember -- Store facts, preferences, and decisions. Supports scope, category (auto-routes scope), and auto_categorize (detect category from text).
  • recall -- Search memories by natural language query (supports scope)
  • forget -- Delete a specific memory by ID (searches both stores)
  • forget_all -- Delete all memories in a scope (defaults to project)
  • memory_stats -- View memory count and timestamps (supports scope)
  • session_start -- Retrieve structured context for the start of a new session. Returns user profile, guardrails, common mistakes, and project context. Call this at the beginning of every conversation.
  • update_memory -- Update an existing memory's text, importance, metadata, or scope. Supports cross-scope migration.
  • review_memories -- Audit memories for quality issues. Returns issues list with severity ratings and an overall quality score (0-100).
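The category-to-scope auto-routing mentioned above can be sketched like this. The mapping is an assumption pieced together from the guidance on this page (preference and guardrail are user-level; decision, pattern, context, and mistake are project-level), and whether an explicit scope overrides the category is also assumed:

```python
GLOBAL_CATEGORIES = {"preference", "guardrail"}  # user-level facts

def route_scope(category=None, scope=None):
    """Pick a scope: explicit scope wins, then category routing, then project."""
    if scope:
        return scope
    if category in GLOBAL_CATEGORIES:
        return "global"
    return "project"
```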

No API keys are needed for the default setup. The MCP server uses keyword matching out of the box. Add an embedding provider for semantic search.

