A Model Context Protocol (MCP) server that enables collaborative debates between multiple AI agents, allowing them to discuss and reach consensus on user prompts.
This project implements an MCP server that facilitates multi-turn conversations between LLMs with these key features:
The server provides four main tools:
- `register-participant`: Allows an LLM to join a collaboration session with its initial response
- `submit-response`: Allows an LLM to submit follow-up responses during the debate
- `get-responses`: Allows an LLM to retrieve all responses from other LLMs in the session
- `get-session-status`: Allows an LLM to check if the registration waiting period has completed

This enables a scenario where multiple AI agents (like the "Council of Ephors") can engage in extended deliberation about a user's question, debating with each other until they reach a solid consensus.
```sh
# Install dependencies
bun install

# Build the TypeScript code
bun run build

# Start the server in development mode
bun run dev
```
The project includes support for the MCP Inspector, which is a tool for testing and debugging MCP servers.
```sh
# Run the server with MCP Inspector
bun run inspect
```
The `inspect` script uses `npx` to run the MCP Inspector, which launches a web interface in your browser for interacting with your MCP server.
This will allow you to:
The server exposes two endpoints:
- `/sse` - Server-Sent Events endpoint for MCP clients to connect
- `/messages` - HTTP endpoint for MCP clients to send messages

Register as a participant in a collaboration session:
```typescript
// Example tool call
const result = await client.callTool({
  name: 'register-participant',
  arguments: {
    name: 'Socrates',
    prompt: 'What is the meaning of life?',
    initial_response: 'The meaning of life is to seek wisdom through questioning...',
    persona_metadata: { style: 'socratic', era: 'ancient greece' } // Optional
  }
});
```
The server waits for a 3-second registration period after the last participant joins before responding. The response includes all participants' initial responses, enabling each LLM to immediately respond to other participants' views when the registration period ends.
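The registration-window behavior described above can be sketched as follows. This is an illustrative model, not the actual server implementation: the `Session` class, method names, and the use of explicit timestamps are assumptions made to keep the example self-contained.

```typescript
// Hypothetical sketch: registration stays open until 3 seconds have
// passed since the *last* participant joined (names are illustrative).
const REGISTRATION_WINDOW_MS = 3000;

interface Participant {
  name: string;
  initialResponse: string;
  registeredAt: number; // epoch milliseconds
}

class Session {
  participants: Participant[] = [];

  register(name: string, initialResponse: string, now: number): void {
    this.participants.push({ name, initialResponse, registeredAt: now });
  }

  // True once 3 s have elapsed since the most recent registration.
  isRegistrationComplete(now: number): boolean {
    if (this.participants.length === 0) return false;
    const lastJoin = Math.max(...this.participants.map((p) => p.registeredAt));
    return now - lastJoin >= REGISTRATION_WINDOW_MS;
  }
}

const session = new Session();
session.register('Socrates', 'Seek wisdom through questioning...', 0);
session.register('Plato', 'Contemplate the Forms...', 1000);
console.log(session.isRegistrationComplete(2000)); // false: only 1 s since last join
console.log(session.isRegistrationComplete(4000)); // true: 3 s since last join
```

Note that each new registration restarts the window, so a late joiner extends the wait for everyone.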
Submit a follow-up response during the debate:
```typescript
// Example tool call
const result = await client.callTool({
  name: 'submit-response',
  arguments: {
    sessionId: 'EPH4721R-Socrates', // Session ID received after registration
    prompt: 'What is the meaning of life?',
    response: 'In response to Plato, I would argue that...'
  }
});
```
Retrieve all responses from the debate session:
```typescript
// Example tool call
const result = await client.callTool({
  name: 'get-responses',
  arguments: {
    sessionId: 'EPH4721R-Socrates', // Session ID received after registration
    prompt: 'What is the meaning of life?' // Optional
  }
});
```
The response includes all participants' contributions in chronological order.
Check if the registration waiting period has elapsed:
```typescript
// Example tool call
const result = await client.callTool({
  name: 'get-session-status',
  arguments: {
    prompt: 'What is the meaning of life?'
  }
});
```
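A client will typically poll this status until the waiting period has elapsed. A minimal, generic polling helper is sketched below; `waitForRegistration` is a hypothetical name, and in practice the `check` callback would wrap a `get-session-status` tool call and inspect its result.

```typescript
// Hypothetical helper (not part of the server's API): repeatedly invoke an
// async check until it reports true or a timeout is reached.
async function waitForRegistration(
  check: () => Promise<boolean>,
  intervalMs = 500,
  timeoutMs = 10_000,
): Promise<boolean> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await check()) return true; // registration period has completed
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return false; // timed out before the registration period completed
}
```

Keeping the interval well below the 3-second registration window ensures the client notices completion promptly without hammering the server.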
MIT
This project includes Docker configuration for easy deployment to EC2 or any other server environment.
Clone the repository to your EC2 instance:
```sh
git clone <repository-url>
cd <repository-directory>
```
Make the deployment script executable:
```sh
chmod +x deploy.sh
```
Run the deployment script:
```sh
./deploy.sh
```
The script will:
If you prefer to deploy manually:
Build the Docker image:
```sh
docker-compose build
```
Start the container:
```sh
docker-compose up -d
```
Verify the container is running:
```sh
docker-compose ps
```
Once deployed, your MCP server will be accessible at:
- `http://<your-ec2-ip>:62887/sse` - SSE endpoint
- `http://<your-ec2-ip>:62887/messages` - Messages endpoint

Make sure port 62887 is open in your EC2 security group!