A Model Context Protocol server that enhances LLM capabilities by connecting to Wikipedia, internet search (Tavily), and financial data (Yahoo Finance) tools, enabling contextual responses to user queries.
This project implements an Agentic AI system that connects a Groq-hosted LLM (qwen-qwq-32b model) with various tools through a custom Model Context Protocol (MCP) server. The system enhances the LLM's capabilities by providing contextual information from Wikipedia, internet search (via Tavily API), and financial data (via Yahoo Finance API).
The Model Context Protocol (MCP) is an open standard developed by Anthropic to standardize how applications provide context to large language models (LLMs). It facilitates seamless integration between LLM applications and external data sources and tools, allowing AI systems to interact dynamically with various services through a standardized interface.
Key Features of MCP:
- Security: incorporates host-mediated authentication and supports secure transport protocols.

By adopting MCP, developers can build AI applications that are more interoperable, secure, and capable of complex workflows.
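Under the hood, MCP messages are JSON-RPC 2.0. A tool invocation sent from client to server takes roughly the following shape; the tool name and arguments here are illustrative, not taken from this project's server:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "wikipedia_search",
    "arguments": { "query": "Model Context Protocol" }
  }
}
```

The server replies with a result message containing the tool's output, which the client then feeds back to the LLM as context.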
To add new tools to the MCP server, define and register a new tool function in server.py:
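The MCP Python SDK registers tools with a decorator (e.g. FastMCP's `@mcp.tool()`); check server.py for the exact pattern this project uses. The registration shape can be sketched with a minimal stand-in registry — `ToolRegistry` and `get_stock_price` below are hypothetical, not part of the project:

```python
# Minimal stand-in for MCP tool registration -- the real server would use
# the mcp SDK's decorator instead of this hand-rolled registry.
from typing import Callable, Dict


class ToolRegistry:
    """Maps tool names to callables, mimicking MCP tool registration."""

    def __init__(self) -> None:
        self.tools: Dict[str, Callable] = {}

    def tool(self) -> Callable:
        def register(fn: Callable) -> Callable:
            # The function's name and docstring become the tool's
            # name and description exposed to the LLM.
            self.tools[fn.__name__] = fn
            return fn
        return register


mcp = ToolRegistry()


@mcp.tool()
def get_stock_price(ticker: str) -> str:
    """Hypothetical new tool: return a price summary for a ticker."""
    # A real implementation would query Yahoo Finance here.
    return f"Price data for {ticker}"


print(sorted(mcp.tools))                         # registered tool names
print(mcp.tools["get_stock_price"]("AAPL"))
```

The key idea carries over to the real SDK: decorating a typed, docstring-annotated function is all that is needed to expose it to the LLM.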
Before you begin, ensure you have the following:
- Python installed
- The uv package manager
- API keys for Groq and Tavily
Clone the repository:

```shell
git clone https://github.com/dev484p/AgenticAI_MCP
cd AgenticAI_MCP
```
Install dependencies:

```shell
uv add "mcp[cli]"
```
Set up your API keys: add your Groq and Tavily API keys to keys.json.
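keys.json might look like the following; the exact field names are an assumption — match whatever server.py and client.py actually read:

```json
{
  "GROQ_API_KEY": "gsk-your-key-here",
  "TAVILY_API_KEY": "tvly-your-key-here"
}
```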
Optional: run the server with the MCP Inspector for development:

```shell
uv run mcp dev server.py
```
Run the client:

```shell
uv run client.py
```
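client.py presumably runs an agent loop: the Groq LLM decides which MCP tool to call, the client invokes that tool through the server, and the result is fed back to the model to compose an answer. A stubbed sketch of that loop follows — the real client would use an MCP client session and the Groq API, so every function here is a stand-in:

```python
# Stubbed agent loop: LLM picks a tool, client runs it, result goes back.
from typing import Any, Callable, Dict


def call_llm(prompt: str, tools: Dict[str, Callable]) -> Dict[str, Any]:
    # Stand-in for a Groq chat completion call; a real client would send
    # the prompt plus tool schemas and parse the model's tool-call reply.
    return {"tool": "wikipedia_search", "args": {"query": prompt}}


def wikipedia_search(query: str) -> str:
    # Stand-in for the MCP Wikipedia tool.
    return f"Summary for '{query}'"


tools: Dict[str, Callable] = {"wikipedia_search": wikipedia_search}


def answer(question: str) -> str:
    decision = call_llm(question, tools)                   # 1. LLM picks a tool
    result = tools[decision["tool"]](**decision["args"])   # 2. client invokes it
    # 3. in the real loop, result is returned to the LLM for a final answer
    return result


print(answer("Model Context Protocol"))
```

The same loop generalizes to multiple rounds: the model may chain several tool calls before producing its final response.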
The system provides three tools through the MCP server:
- Wikipedia lookup for encyclopedic context
- Internet search via the Tavily API
- Financial data via the Yahoo Finance API