Analyzes Python codebases using the AST to extract and vectorize code elements, enabling advanced querying, semantic search, visualization, and natural-language Q&A through a Model Context Protocol (MCP) server backed by LLM-powered embeddings and background refinement.
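As a rough illustration of the extraction step described above, the sketch below walks a module's AST with Python's standard `ast` module and collects functions and classes with their names, docstrings, and line spans — the kind of code elements that would later be vectorized. It is a minimal, hypothetical example (the function name `extract_code_elements` is invented here), not the server's actual implementation.

```python
import ast

def extract_code_elements(source: str, path: str = "<memory>"):
    """Walk a module's AST and collect functions and classes with their
    docstrings and line spans -- the kind of elements that get vectorized."""
    tree = ast.parse(source, filename=path)
    elements = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            elements.append({
                "kind": type(node).__name__,
                "name": node.name,
                "docstring": ast.get_docstring(node),
                "lineno": node.lineno,
                "end_lineno": node.end_lineno,
            })
    return elements

if __name__ == "__main__":
    sample = '''
class Greeter:
    """Says hello."""
    def greet(self, name: str) -> str:
        return f"Hello, {name}!"
'''
    for element in extract_code_elements(sample):
        print(element)
```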
Configuration settings (default or example values shown; a usage sketch follows the table):

| Setting | Default / Example |
|---------|-------------------|
| Gemini model for embeddings | models/embedding-001 |
| Limit for semantic search results | 5 |
| Gemini model for text generation | models/gemini-pro |
| Weaviate HTTP port | 8080 |
| Max concurrent background LLM tasks (embeddings/descriptions/refinements) | 5 |
| Batch size for Weaviate operations | 100 |
| Weaviate gRPC port | 50051 |
| Gemini API key | (user-supplied; no default) |
| Distance threshold for semantic search | 0.7 |
| Enable background LLM description generation and refinement | true |
| Weaviate host address | localhost |
| File watcher polling interval (seconds) | 5 |
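The sketch below shows how the settings in the table might be wired together: it reads values from environment variables, embeds a query with the configured Gemini embedding model, and runs a vector search against a local Weaviate instance using the configured limit and distance threshold. The environment variable names and the `CodeElement` collection name are assumptions made for illustration, not documented keys, and the code assumes the `weaviate-client` v4 and `google-generativeai` packages; it is a sketch, not the server's own code.

```python
import os

import google.generativeai as genai
import weaviate  # weaviate-client v4

# Environment variable names below are assumptions for illustration only;
# consult the server's own documentation for the exact keys it reads.
GEMINI_API_KEY = os.environ["GEMINI_API_KEY"]                        # user-supplied Gemini API key
EMBEDDING_MODEL = os.getenv("EMBEDDING_MODEL", "models/embedding-001")
GENERATION_MODEL = os.getenv("GENERATION_MODEL", "models/gemini-pro")
WEAVIATE_HOST = os.getenv("WEAVIATE_HOST", "localhost")
WEAVIATE_HTTP_PORT = int(os.getenv("WEAVIATE_HTTP_PORT", "8080"))
WEAVIATE_GRPC_PORT = int(os.getenv("WEAVIATE_GRPC_PORT", "50051"))
SEARCH_LIMIT = int(os.getenv("SEARCH_LIMIT", "5"))
SEARCH_DISTANCE = float(os.getenv("SEARCH_DISTANCE", "0.7"))

genai.configure(api_key=GEMINI_API_KEY)

# Embed a natural-language query with Gemini, then run a vector search,
# capping results by the configured limit and distance threshold.
query_vector = genai.embed_content(
    model=EMBEDDING_MODEL,
    content="function that parses Python source into an AST",
)["embedding"]

client = weaviate.connect_to_local(
    host=WEAVIATE_HOST,
    port=WEAVIATE_HTTP_PORT,
    grpc_port=WEAVIATE_GRPC_PORT,
)
try:
    collection = client.collections.get("CodeElement")  # collection name is an assumption
    results = collection.query.near_vector(
        near_vector=query_vector,
        limit=SEARCH_LIMIT,
        distance=SEARCH_DISTANCE,
    )
    for obj in results.objects:
        print(obj.properties)
finally:
    client.close()
```

The limit caps how many results come back, while the distance threshold caps how dissimilar a returned element may be; both map directly onto the defaults listed in the table.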