An MCP server that analyzes Python codebases via AST parsing, stores the extracted code elements in a vector database, and answers natural-language questions about code structure and functionality using RAG with Google's Gemini models.
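The AST analysis step can be sketched with Python's built-in `ast` module. The element schema below (name, kind, line number, docstring) is illustrative, not the server's actual schema:

```python
import ast

def extract_elements(source: str) -> list[dict]:
    """Walk a module's AST and collect functions and classes
    with their docstrings (illustrative schema, not the server's)."""
    tree = ast.parse(source)
    elements = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            elements.append({
                "name": node.name,
                "kind": type(node).__name__,
                "lineno": node.lineno,
                "docstring": ast.get_docstring(node),
            })
    return elements

sample = '''
class Greeter:
    """Says hello."""
    def greet(self, name):
        """Return a greeting."""
        return f"Hello, {name}!"
'''

for el in extract_elements(sample):
    print(el["kind"], el["name"])
```

Records like these are what would be embedded and stored in Weaviate for later semantic search.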
| Setting | Default |
| --- | --- |
| Gemini model for embeddings | `models/embedding-001` |
| Weaviate gRPC port | `50051` |
| Max concurrent background LLM tasks (embeddings/descriptions/refinements) | `5` |
| Enable background LLM description generation and refinement | `true` |
| Weaviate host address | `localhost` |
| Distance threshold for semantic search | `0.7` |
| File watcher polling interval in seconds | `5` |
| Limit for semantic search results | `5` |
| Gemini model for text generation | `models/gemini-pro` |
| Your Gemini API key | (no default) |
| Weaviate HTTP port | `8080` |
| Batch size for Weaviate operations | `100` |
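An environment file matching the defaults above might look like the following. The variable names are assumptions for illustration; the server's actual names are not listed here, so check its documentation:

```shell
# Hypothetical variable names; consult the server's docs for the real ones.
export GEMINI_API_KEY="your-api-key"                 # required, no default
export GEMINI_EMBEDDING_MODEL="models/embedding-001"
export GEMINI_GENERATION_MODEL="models/gemini-pro"
export WEAVIATE_HOST="localhost"
export WEAVIATE_HTTP_PORT="8080"
export WEAVIATE_GRPC_PORT="50051"
export WEAVIATE_BATCH_SIZE="100"
export SEMANTIC_SEARCH_LIMIT="5"
export SEMANTIC_SEARCH_DISTANCE="0.7"
export FILE_WATCHER_INTERVAL="5"
export MAX_CONCURRENT_LLM_TASKS="5"
export ENABLE_LLM_DESCRIPTIONS="true"
```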