Combines the Model Context Protocol (MCP) with Retrieval-Augmented Generation (RAG) and web search APIs to deliver an agentic AI system with efficient information retrieval, support for both local and cloud LLMs, and standardized tool invocation.
A powerful search engine that combines LangChain, Model Context Protocol (MCP), Retrieval-Augmented Generation (RAG), and Ollama to create an agentic AI system capable of searching the web, retrieving information, and providing relevant answers.
This project integrates several key components: LangChain for agent orchestration, MCP for standardized tool invocation, RAG for grounding answers in retrieved content, and Ollama for running models locally. The repository is organized as follows:
```
search-engine-with-rag-and-mcp/
├── LICENSE                 # MIT License
├── README.md               # Project documentation
├── data/                   # Data directories
├── docs/                   # Documentation
│   └── env_template.md     # Environment variables documentation
├── logs/                   # Log files directory (auto-created)
├── src/                    # Main package (source code)
│   ├── __init__.py
│   ├── core/               # Core functionality
│   │   ├── __init__.py
│   │   ├── main.py         # Main entry point
│   │   ├── search.py       # Web search module
│   │   ├── rag.py          # RAG implementation
│   │   ├── agent.py        # LangChain agent
│   │   └── mcp_server.py   # MCP server implementation
│   └── utils/              # Utility modules
│       ├── __init__.py
│       ├── env.py          # Environment variable loading
│       └── logger.py       # Logging configuration
├── pyproject.toml          # Poetry configuration
├── requirements.txt        # Project dependencies
└── tests/                  # Test directory
```
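Conceptually, the core modules compose a search-then-RAG flow: fetch web results, chunk and embed them, retrieve the most relevant chunks, and answer from that context. The sketch below illustrates this pattern with LangChain, FAISS, and Ollama; the function name, prompt, and wiring are illustrative assumptions, not the project's exact API (see `src/core/search.py` and `src/core/rag.py` for the real implementation), and it assumes `faiss-cpu` and `langchain-ollama` are installed.

```python
from langchain_community.vectorstores import FAISS
from langchain_ollama import ChatOllama, OllamaEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

def answer_with_rag(query: str, pages: list[str]) -> str:
    # Split the fetched web pages into overlapping chunks for embedding.
    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
    chunks = splitter.split_text("\n\n".join(pages))

    # Embed the chunks with a local Ollama model and index them in FAISS.
    store = FAISS.from_texts(chunks, OllamaEmbeddings(model="mistral:latest"))

    # Pull back the k chunks most similar to the query.
    docs = store.as_retriever(search_kwargs={"k": 3}).invoke(query)
    context = "\n\n".join(doc.page_content for doc in docs)

    # Ask the local LLM for an answer grounded in the retrieved context.
    llm = ChatOllama(model="mistral:latest")
    prompt = f"Answer the question using only this context:\n{context}\n\nQuestion: {query}"
    return llm.invoke(prompt).content
```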
Clone the repository:

```bash
git clone https://github.com/yourusername/search-engine-with-rag-and-mcp.git
cd search-engine-with-rag-and-mcp
```
Install the dependencies:

```bash
# Using pip
pip install -r requirements.txt

# Or using poetry
poetry install
```
Create a `.env` file (use `docs/env_template.md` as a reference).

The application has three main modes of operation:
Standard search (default), which runs the search-and-RAG pipeline sketched above:

```bash
# Using pip
python -m src.core.main "your search query"

# Or using poetry
poetry run python -m src.core.main "your search query"
```
Agent mode:

```bash
python -m src.core.main --agent "your search query"
```
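In agent mode, the LLM decides when to invoke search as a tool rather than always searching. A minimal sketch of that pattern using LangGraph's prebuilt ReAct agent follows; the tool body, names, and model choice are assumptions, not the project's actual wiring (which lives in `src/core/agent.py`):

```python
from langchain_core.tools import tool
from langchain_ollama import ChatOllama
from langgraph.prebuilt import create_react_agent

@tool
def web_search(query: str) -> str:
    """Search the web and return concatenated result snippets."""
    # Placeholder body; the real project would delegate to its search module.
    return f"(results for: {query})"

# Build a ReAct-style agent around a local, tool-calling-capable model.
agent = create_react_agent(ChatOllama(model="mistral:latest"), [web_search])
state = agent.invoke({"messages": [("user", "your search query")]})
print(state["messages"][-1].content)
```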
MCP server mode:

```bash
python -m src.core.main --server
```
You can also specify custom host and port:
```bash
python -m src.core.main --server --host 0.0.0.0 --port 8080
```
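In server mode, `src/core/mcp_server.py` exposes the search functionality to any MCP-compatible client. As a rough sketch of what such a server looks like with the official MCP Python SDK's `FastMCP` helper (the tool name, body, and transport here are illustrative assumptions, not the project's confirmed implementation):

```python
from mcp.server.fastmcp import FastMCP

# Hypothetical server: exposes one "search" tool over SSE.
mcp = FastMCP("search-engine", host="0.0.0.0", port=8080)

@mcp.tool()
def search(query: str) -> str:
    """Search the web and return relevant results."""
    # Placeholder body; the real server would call the search/RAG pipeline.
    return f"(results for: {query})"

if __name__ == "__main__":
    mcp.run(transport="sse")
```

Any MCP client can then list the server's tools and call them over the configured transport.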
To use Ollama for local embeddings and LLM capabilities:
```bash
ollama pull mistral:latest
```
Then set the following in your `.env` file:

```
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=mistral:latest
```
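At startup these variables can be read (e.g. via python-dotenv, which is one plausible role for `src/utils/env.py`) and passed to the LangChain Ollama bindings. A sketch, with defaults matching the values above:

```python
import os

from dotenv import load_dotenv
from langchain_ollama import ChatOllama, OllamaEmbeddings

load_dotenv()  # read .env from the project root

base_url = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")
model = os.getenv("OLLAMA_MODEL", "mistral:latest")

# The same local model serves both the chat and embedding roles here.
llm = ChatOllama(model=model, base_url=base_url)
embeddings = OllamaEmbeddings(model=model, base_url=base_url)
```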
This project follows standard Python best practices: a `src/` package layout, centralized environment handling (`src/utils/env.py`), and structured logging (`src/utils/logger.py`).
This project is licensed under the MIT License - see the LICENSE file for details.