wisdomforge

hadv/wisdomforge · Public

Advanced Model Context Protocol server enabling intelligent knowledge management, efficient retrieval, and storage of diverse domain insights using Qdrant or Chroma vector databases with seamless AI IDE integration.

TypeScript · 0 tools · Added May 30, 2025 · Updated Jun 4, 2025

WisdomForge

A powerful knowledge management system that forges wisdom from experiences, insights, and best practices. Built with the Qdrant vector database for efficient knowledge storage and retrieval.

Features

  • Intelligent knowledge management and retrieval
  • Support for multiple knowledge types (best practices, lessons learned, insights, experiences)
  • Configurable database selection via environment variables
  • Uses Qdrant's built-in FastEmbed for efficient embedding generation
  • Domain knowledge storage and retrieval (see the sketch after this list)
  • Deployable to Smithery.ai platform
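
The storage and retrieval features above ultimately come down to upserting payloads into a vector collection and running similarity search over it. Below is a minimal sketch of that flow using the Qdrant JavaScript client; the collection name, payload shape, and the embed() helper (a stand-in for FastEmbed-generated vectors) are assumptions for illustration, not WisdomForge's actual implementation.

// Illustrative sketch only: collection name, payload shape, and embed() are assumptions.
import { QdrantClient } from "@qdrant/js-client-rest";

const client = new QdrantClient({
  url: process.env.QDRANT_URL,
  apiKey: process.env.QDRANT_API_KEY,
});

// Hypothetical stand-in for FastEmbed: returns a deterministic dummy vector.
async function embed(text: string): Promise<number[]> {
  return Array.from(
    { length: 384 },
    (_, i) => (text.charCodeAt(i % Math.max(text.length, 1)) || 0) / 255,
  );
}

// Store a piece of knowledge with its type attached as payload.
async function storeKnowledge(content: string, type: string): Promise<void> {
  await client.upsert("wisdom_collection", {
    wait: true,
    points: [{ id: Date.now(), vector: await embed(content), payload: { content, type } }],
  });
}

// Retrieve the most similar knowledge entries for a query.
async function retrieveKnowledge(query: string, limit = 5) {
  return client.search("wisdom_collection", {
    vector: await embed(query),
    limit,
  });
}

Whether embedding happens client-side or through Qdrant's FastEmbed integration is an implementation detail of the server; the sketch only makes the vector step explicit.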

Prerequisites

  • Node.js 20.x or later (LTS recommended)
  • npm 10.x or later
  • Qdrant or Chroma vector database

Installation

  1. Clone the repository:
git clone https://github.com/hadv/wisdomforge
cd wisdomforge
  2. Install dependencies:
npm install
  3. Create a .env file in the root directory based on the .env.example template:
cp .env.example .env
  4. Configure your environment variables in the .env file:

Required Environment Variables

Database Configuration

  • DATABASE_TYPE: Choose your vector database (qdrant or chroma)
  • COLLECTION_NAME: Name of your vector collection
  • QDRANT_URL: URL of your Qdrant instance (required if using Qdrant)
  • QDRANT_API_KEY: API key for Qdrant (required if using Qdrant)
  • CHROMA_URL: URL of your Chroma instance (required if using Chroma)

Server Configuration

  • HTTP_SERVER: Set to true to enable HTTP server mode
  • PORT: Port number for local development only (default: 3000). Not used in Smithery cloud deployment.

Example .env configuration for Qdrant:

DATABASE_TYPE=qdrant
COLLECTION_NAME=wisdom_collection
QDRANT_URL=https://your-qdrant-instance.example.com:6333
QDRANT_API_KEY=your_api_key
HTTP_SERVER=true
PORT=3000 # Only needed for local development
  5. Build the project:
npm run build
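
The README does not show how these variables are consumed, but the configuration surface suggests a load-and-validate step roughly like the sketch below. The names here (loadConfig, AppConfig) and the dotenv-style loading are illustrative assumptions, not WisdomForge internals.

// Sketch of env-driven configuration (illustrative names, not the actual WisdomForge code).
import "dotenv/config";

type DatabaseType = "qdrant" | "chroma";

interface AppConfig {
  databaseType: DatabaseType;
  collectionName: string;
  qdrantUrl?: string;
  qdrantApiKey?: string;
  chromaUrl?: string;
  httpServer: boolean;
  port: number;
}

function loadConfig(): AppConfig {
  const databaseType = (process.env.DATABASE_TYPE ?? "qdrant") as DatabaseType;

  // Fail fast if the chosen backend is missing its connection settings.
  if (databaseType === "qdrant" && !process.env.QDRANT_URL) {
    throw new Error("QDRANT_URL is required when DATABASE_TYPE=qdrant");
  }
  if (databaseType === "chroma" && !process.env.CHROMA_URL) {
    throw new Error("CHROMA_URL is required when DATABASE_TYPE=chroma");
  }

  return {
    databaseType,
    collectionName: process.env.COLLECTION_NAME ?? "wisdom_collection",
    qdrantUrl: process.env.QDRANT_URL,
    qdrantApiKey: process.env.QDRANT_API_KEY,
    chromaUrl: process.env.CHROMA_URL,
    httpServer: process.env.HTTP_SERVER === "true",
    port: Number(process.env.PORT ?? 3000), // local development only
  };
}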

AI IDE Integration

Cursor AI IDE

Add this configuration to your ~/.cursor/mcp.json or .cursor/mcp.json file:

{ "mcpServers": { "wisdomforge": { "command": "npx", "args": [ "-y", "@smithery/cli@latest", "run", "@hadv/wisdomforge", "--key", "YOUR_API_KEY", "--config", "{"database":{"type":"qdrant","collectionName":"YOUR_COLLECTION_NAME","url":"YOUR_QDRANT_URL","apiKey":"YOUR_QDRANT_API_KEY"}}", "--transport", "ws" ] } } }

Replace the following placeholders in the configuration:

  • YOUR_API_KEY: Your Smithery API key
  • YOUR_COLLECTION_NAME: Your Qdrant collection name
  • YOUR_QDRANT_URL: Your Qdrant instance URL
  • YOUR_QDRANT_API_KEY: Your Qdrant API key

Note: Make sure you have Node.js installed and npx available in your PATH. If you're using nvm, ensure you're using the correct Node.js version by running nvm use --lts before starting Cursor.

Claude Desktop

Add this configuration in Claude's settings:

{ "processes": { "knowledge_server": { "command": "/path/to/your/project/run-mcp.sh", "args": [] } }, "tools": [ { "name": "store_knowledge", "description": "Store domain-specific knowledge in a vector database", "provider": "process", "process": "knowledge_server" }, { "name": "retrieve_knowledge_context", "description": "Retrieve relevant domain knowledge from a vector database", "provider": "process", "process": "knowledge_server" } ] }