MCP-Mem0: Long-Term Memory for AI Agents

A template implementation of the Model Context Protocol (MCP) server integrated with Mem0 for providing AI agents with persistent memory capabilities.

Use this as a reference point to build your own MCP servers, or give it to an AI coding assistant as an example to follow for structure and code correctness!

Overview

This project demonstrates how to build an MCP server that enables AI agents to store, retrieve, and search memories using semantic search. It serves as a practical template for creating your own MCP servers, with Mem0 supplying the memory layer.

The implementation follows the best practices laid out by Anthropic for building MCP servers, allowing seamless integration with any MCP-compatible client.

Features

The server provides three essential memory management tools (a minimal sketch of their definitions follows this list):

  1. save_memory: Store any information in long-term memory with semantic indexing
  2. get_all_memories: Retrieve all stored memories for comprehensive context
  3. search_memories: Find relevant memories using semantic search
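
For orientation, the sketch below shows how tools like these can be declared with the MCP Python SDK's FastMCP API. It is a hypothetical stand-in, not the repository's exact code: an in-memory list replaces Mem0, and a substring match replaces real semantic search.

    # Hypothetical sketch of the three tools using the MCP Python SDK (FastMCP).
    # An in-memory list stands in for Mem0; the real server stores memories in
    # Postgres and searches them semantically.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("mem0")
    _memories: list[str] = []  # stand-in store, for illustration only

    @mcp.tool()
    async def save_memory(text: str) -> str:
        """Store any information in long-term memory."""
        _memories.append(text)
        return f"Successfully saved memory: {text}"

    @mcp.tool()
    async def get_all_memories() -> str:
        """Retrieve all stored memories for comprehensive context."""
        return "\n".join(_memories)

    @mcp.tool()
    async def search_memories(query: str, limit: int = 3) -> str:
        """Find relevant memories (naive substring match in this sketch)."""
        hits = [m for m in _memories if query.lower() in m.lower()]
        return "\n".join(hits[:limit])

    if __name__ == "__main__":
        mcp.run(transport="stdio")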

Prerequisites

  • Python 3.12+
  • Supabase or any PostgreSQL database (for vector storage of memories)
  • API keys for your chosen LLM provider (OpenAI, OpenRouter, or Ollama)
  • Docker if running the MCP server as a container (recommended)

Installation

Using uv

  1. Install uv if you don't have it:

    pip install uv
  2. Clone this repository:

    git clone https://github.com/coleam00/mcp-mem0.git
    cd mcp-mem0
  3. Install dependencies:

    uv pip install -e .
  4. Create a .env file based on .env.example:

    cp .env.example .env
  5. Configure your environment variables in the .env file (see Configuration section)

Using Docker (Recommended)

  1. Build the Docker image:

    docker build -t mcp/mem0 --build-arg PORT=8050 .
  2. Create a .env file based on .env.example and configure your environment variables

Configuration

The following environment variables can be configured in your .env file:

Variable                  Description                                     Example
TRANSPORT                 Transport protocol (sse or stdio)               sse
HOST                      Host to bind to when using SSE transport        0.0.0.0
PORT                      Port to listen on when using SSE transport      8050
LLM_PROVIDER              LLM provider (openai, openrouter, or ollama)    openai
LLM_BASE_URL              Base URL for the LLM API                        https://api.openai.com/v1
LLM_API_KEY               API key for the LLM provider                    sk-...
LLM_CHOICE                LLM model to use                                gpt-4o-mini
EMBEDDING_MODEL_CHOICE    Embedding model to use                          text-embedding-3-small
DATABASE_URL              PostgreSQL connection string                    postgresql://user:pass@host:port/db
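
Putting these together, a complete .env for the default OpenAI setup might look like the following (the API key and connection string are placeholders to replace with your own):

    TRANSPORT=sse
    HOST=0.0.0.0
    PORT=8050
    LLM_PROVIDER=openai
    LLM_BASE_URL=https://api.openai.com/v1
    LLM_API_KEY=sk-...
    LLM_CHOICE=gpt-4o-mini
    EMBEDDING_MODEL_CHOICE=text-embedding-3-small
    DATABASE_URL=postgresql://user:pass@host:port/db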

Running the Server

Using uv

SSE Transport

# Set TRANSPORT=sse in .env, then:
uv run src/main.py

The MCP server then runs as an API endpoint that you can connect to using the configuration shown below.

Stdio Transport

With stdio, the MCP client itself can spin up the MCP server, so there is nothing to run at this point.

Using Docker

SSE Transport

docker run --env-file .env -p 8050:8050 mcp/mem0

The MCP server then runs as an API endpoint within the container that you can connect to using the configuration shown below.

Stdio Transport

With stdio, the MCP client itself can spin up the MCP server container, so there is nothing to run at this point.

Integration with MCP Clients

SSE Configuration

Once you have the server running with SSE transport, you can connect to it using this configuration:

{ "mcpServers": { "mem0": { "transport": "sse", "url": "http://localhost:8050/sse" } } }

Note for Windsurf users: Use serverUrl instead of url in your configuration:

{ "mcpServers": { "mem0": { "transport": "sse", "serverUrl": "http://localhost:8050/sse" } } }

Note for n8n users: Use host.docker.internal instead of localhost, since n8n has to reach outside of its own container to the host machine.

So the full URL in the MCP node would be: http://host.docker.internal:8050/sse

Make sure to update the port if you are using a value other than the default 8050.
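
Before wiring the server into a full client, you can sanity-check an SSE deployment with a short script. This is a minimal sketch assuming the official MCP Python SDK (pip install mcp) and the default port 8050:

    # Connectivity check against the SSE endpoint using the official MCP
    # Python SDK. Adjust the URL if you changed HOST or PORT.
    import asyncio

    from mcp import ClientSession
    from mcp.client.sse import sse_client

    async def main() -> None:
        async with sse_client("http://localhost:8050/sse") as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                tools = await session.list_tools()
                print([t.name for t in tools.tools])
                # Expect: save_memory, get_all_memories, search_memories

    asyncio.run(main())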

Python with Stdio Configuration

Add this server to your MCP configuration for Claude Desktop, Windsurf, or any other MCP client:

{ "mcpServers": { "mem0": { "command": "your/path/to/mcp-mem0/.venv/Scripts/python.exe", "args": ["your/path/to/mcp-mem0/src/main.py"], "env": { "TRANSPORT": "stdio", "LLM_PROVIDER": "openai", "LLM_BASE_URL": "https://api.openai.com/v1", "LLM_API_KEY": "YOUR-API-KEY", "LLM_CHOICE": "gpt-4o-mini", "EMBEDDING_MODEL_CHOICE": "text-embedding-3-small", "DATABASE_URL": "YOUR-DATABASE-URL" } } } }

Docker with Stdio Configuration

{ "mcpServers": { "mem0": { "command": "docker", "args": ["run", "--rm", "-i", "-e", "TRANSPORT", "-e", "LLM_PROVIDER", "-e", "LLM_BASE_URL", "-e", "LLM_API_KEY", "-e", "LLM_CHOICE", "-e", "EMBEDDING_MODEL_CHOICE", "-e", "DATABASE_URL", "mcp/mem0"], "env": { "TRANSPORT": "stdio", "LLM_PROVIDER": "openai", "LLM_BASE_URL": "https://api.openai.com/v1", "LLM_API_KEY": "YOUR-API-KEY", "LLM_CHOICE": "gpt-4o-mini", "EMBEDDING_MODEL_CHOICE": "text-embedding-3-small", "DATABASE_URL": "YOUR-DATABASE-URL" } } } }

Building Your Own Server

This template provides a foundation for building more complex MCP servers. To build your own (a sketch tying these steps together follows the list):

  1. Add your own tools by creating methods with the @mcp.tool() decorator
  2. Create your own lifespan function to add your own dependencies (clients, database connections, etc.)
  3. Modify the utils.py file for any helper functions you need for your MCP server
  4. Feel free to add prompts and resources as well with @mcp.resource() and @mcp.prompt()
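
As a starting point, here is a hedged skeleton of those steps with the MCP Python SDK: a typed lifespan that owns a dependency, a tool that reads it from the request context, and a resource. FakeClient and the resource URI are illustrative names, not part of this repository.

    # Hypothetical skeleton for a custom MCP server: a lifespan-managed
    # dependency, one tool, and one resource. FakeClient is a stand-in for a
    # real client or database connection.
    from collections.abc import AsyncIterator
    from contextlib import asynccontextmanager
    from dataclasses import dataclass

    from mcp.server.fastmcp import Context, FastMCP

    class FakeClient:
        async def fetch(self, key: str) -> str:
            return f"value for {key}"  # stand-in for real I/O

    @dataclass
    class AppContext:
        client: FakeClient

    @asynccontextmanager
    async def app_lifespan(server: FastMCP) -> AsyncIterator[AppContext]:
        client = FakeClient()  # create connections on startup
        try:
            yield AppContext(client=client)
        finally:
            pass  # close connections on shutdown

    mcp = FastMCP("my-server", lifespan=app_lifespan)

    @mcp.tool()
    async def fetch_value(key: str, ctx: Context) -> str:
        """Tool that reaches its dependency through the lifespan context."""
        app: AppContext = ctx.request_context.lifespan_context
        return await app.client.fetch(key)

    @mcp.resource("config://version")
    def version() -> str:
        """Static resource exposed with @mcp.resource()."""
        return "0.1.0"

    if __name__ == "__main__":
        mcp.run(transport="stdio")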