Python Codebase Analysis RAG System

Public · shervinemp/CodebaseMCP

Analyzes Python codebases using AST to extract and vectorize code elements, enabling advanced querying, semantic search, visualization, and natural language Q&A via a Model Context Protocol (MCP) server integrated with LLM-powered embeddings and background refinement.

Python · 0 tools · Added May 30, 2025 · Updated Jun 4, 2025
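The description above says code elements are extracted from Python sources via the AST and then vectorized. A minimal sketch of that extraction step using the standard-library ast module (the function name and record fields here are illustrative, not the server's actual schema):

```python
import ast

def extract_elements(source: str) -> list[dict]:
    """Walk a module's AST and collect the kinds of code elements
    (functions and classes) that a RAG index would vectorize."""
    tree = ast.parse(source)
    elements = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            elements.append({
                "kind": type(node).__name__,
                "name": node.name,
                "lineno": node.lineno,
                "docstring": ast.get_docstring(node),
            })
    return elements

sample = '''
class Greeter:
    """Says hello."""
    def greet(self, name):
        return f"hello {name}"
'''
for el in extract_elements(sample):
    print(el["kind"], el["name"])
```

Records like these (name, kind, location, docstring) are what an embedding model would then turn into vectors for semantic search.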

Supercharge Your AI with Python Codebase Analysis RAG System

MCP Server

Unlock the full potential of Python Codebase Analysis RAG System through LangDB's AI Gateway. Get enterprise-grade security, analytics, and seamless integration with zero configuration.

Unified API Access · Complete Tracing · Instant Setup

Free tier available · No credit card required · 99.9% Uptime · 10,000+ Monthly Requests
Configuration Requirements

Configure authentication and environment variables to access this MCP server. All variables are optional; without GEMINI_API_KEY, the LLM-backed features will be unavailable.

Environment Variables

  • GEMINI_API_KEY (string): Your Gemini API key
  • EMBEDDING_MODEL_NAME (string): Gemini model for embeddings. Default: models/embedding-001
  • GENERATION_MODEL_NAME (string): Gemini model for text generation. Default: models/gemini-pro
  • GENERATE_LLM_DESCRIPTIONS (string): Set to true to enable background LLM description generation and refinement. Default: true
  • LLM_CONCURRENCY (string): Max concurrent background LLM tasks (embeddings/descriptions/refinements). Default: 5
  • SEMANTIC_SEARCH_LIMIT (string): Limit for semantic search results. Default: 5
  • SEMANTIC_SEARCH_DISTANCE (string): Distance threshold for semantic search. Default: 0.7
  • WEAVIATE_HOST (string): Weaviate host address. Default: localhost
  • WEAVIATE_PORT (string): Weaviate HTTP port. Default: 8080
  • WEAVIATE_GRPC_PORT (string): Weaviate gRPC port. Default: 50051
  • WEAVIATE_BATCH_SIZE (string): Batch size for Weaviate operations. Default: 100
  • WATCHER_POLLING_INTERVAL (string): File watcher polling interval in seconds. Default: 5
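The variables above can be read with their documented defaults at startup; a minimal sketch (the helper name and dictionary keys are illustrative, not the server's actual API):

```python
import os

def load_config() -> dict:
    """Read the MCP server's settings from the environment,
    falling back to the documented defaults."""
    return {
        # No default: LLM-backed features need this key to be set.
        "gemini_api_key": os.environ.get("GEMINI_API_KEY"),
        "embedding_model": os.environ.get("EMBEDDING_MODEL_NAME", "models/embedding-001"),
        "generation_model": os.environ.get("GENERATION_MODEL_NAME", "models/gemini-pro"),
        "generate_descriptions": os.environ.get("GENERATE_LLM_DESCRIPTIONS", "true").lower() == "true",
        "llm_concurrency": int(os.environ.get("LLM_CONCURRENCY", "5")),
        "search_limit": int(os.environ.get("SEMANTIC_SEARCH_LIMIT", "5")),
        "search_distance": float(os.environ.get("SEMANTIC_SEARCH_DISTANCE", "0.7")),
        "weaviate_host": os.environ.get("WEAVIATE_HOST", "localhost"),
        "weaviate_port": int(os.environ.get("WEAVIATE_PORT", "8080")),
        "weaviate_grpc_port": int(os.environ.get("WEAVIATE_GRPC_PORT", "50051")),
        "weaviate_batch_size": int(os.environ.get("WEAVIATE_BATCH_SIZE", "100")),
        "watcher_interval": float(os.environ.get("WATCHER_POLLING_INTERVAL", "5")),
    }

config = load_config()
```

Note that the listing types every variable as a string; numeric values such as ports and concurrency limits still arrive as strings from the environment and must be converted before use, as above.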

Security Notice

Your environment variables and credentials are securely stored and encrypted. LangDB never shares these configuration values with third parties.

Related MCPs (5)

  • Powertools MCP Search Server: Model Context Protocol server enabling efficient local search of AWS Lambda Powertools documentation... (2 tools, added May 30, 2025)
  • Code Analysis MCP Server: Enables AI-driven natural language exploration and analysis of codebases via Model Context Protocol,... (4 tools, added May 30, 2025)
  • doc-lib-mcp: Model Context Protocol server enabling document ingestion, chunking, semantic search, and advanced n... (added May 30, 2025)
  • MCP Filesystem Server: Model Context Protocol server enabling secure, efficient filesystem operations with smart context ma... (added May 30, 2025)
  • GraphRAG MCP Server: Model Context Protocol server enabling hybrid semantic and graph-based document retrieval by integra... (added May 30, 2025)