A Node.js server that integrates with pytest to expose Model Context Protocol (MCP) tools, enabling test execution recording and environment tracking.
We are running the published npm package (@modelcontextprotocol/mcp-pytest-server), not locally compiled source.
For reference, the Python SDK releases are available at: https://github.com/modelcontextprotocol/python-sdk/tags
To view the server output and logs:
```bash
tail -f ~/workspace/mcp-pytest-server/output.log
less ~/workspace/mcp-pytest-server/output.log
cat ~/workspace/mcp-pytest-server/output.log
```
To run the memory service under uvx, install it globally:

```bash
npm install -g uvx
```

Configure the service:

```javascript
module.exports = {
  services: {
    memory: {
      command: 'node ~/.npm/_npx/15b07286cbcc3329/node_modules/.bin/mcp-server-memory',
      autorestart: true,
      log: 'memory.log',
      env: { NODE_ENV: 'production' }
    }
  }
}
```

Then start it:

```bash
uvx start memory
```
To set up the server and run pytest with MCP recording:

```bash
cd ~/workspace/mcp-pytest-server
npm install @modelcontextprotocol/sdk
npm install
node index.js
pytest --mcp
```
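The `--mcp` flag implies a pytest plugin that registers it. A minimal, hypothetical sketch of how the flag could be declared in `conftest.py` (the actual plugin may wire this differently):

```python
# conftest.py -- hypothetical sketch; the real plugin may differ
def pytest_addoption(parser):
    parser.addoption(
        "--mcp",
        action="store_true",
        default=False,
        help="record this test run via the MCP pytest server",
    )
```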
To inspect the memory service:

1. Start the service in debug mode:

   ```bash
   npx --node-options='--inspect' @modelcontextprotocol/server-memory
   ```

2. Open Chrome DevTools at chrome://inspect.
3. Click "Open dedicated DevTools for Node".
4. Set breakpoints and inspect the service's execution.

Alternatively, use VSCode's built-in Node.js debugging:

```json
{
  "type": "node",
  "request": "launch",
  "name": "Debug Memory Service",
  "runtimeExecutable": "npx",
  "runtimeArgs": ["@modelcontextprotocol/server-memory"],
  "args": [],
  "console": "integratedTerminal"
}
```
To inspect the mcp-pytest service:

1. Start the service in debug mode:

   ```bash
   node --inspect ~/workspace/mcp-pytest-server/index.js
   ```

2. Open Chrome DevTools at chrome://inspect.
3. Click "Open dedicated DevTools for Node".
4. Set breakpoints and inspect the service's execution.

Alternatively, use VSCode's built-in Node.js debugging:

```json
{
  "type": "node",
  "request": "launch",
  "name": "Debug MCP-Pytest Service",
  "program": "${workspaceFolder}/index.js",
  "console": "integratedTerminal"
}
```
The MCP pytest integration consists of multiple components.

Implementation Status: The core functionality for all three tools (record_session_start, record_test_outcome, record_session_finish) has been implemented in index.js. The implementation includes:

## 1. record_session_start [IMPLEMENTED]

Description:
This tool is called at the beginning of a pytest session. It initializes the context for the current test run by creating or updating the "TestRun_Latest" and "Env_Current" entities in the memory
MCP server. Importantly, this tool also ensures that any data from previous test runs associated with "TestRun_Latest" is cleared to maintain a single source of truth for the last run.
**Implementation Details:**
- Input validation for the environment object (os, python_version)
- Creation of the "Env_Current" and "TestRun_Latest" entities and the "ran_on" relation
- Error handling for invalid parameters
Input Schema:

```json
{
  "environment": {
    "os": "string",
    "python_version": "string"
  }
}
```

**Example Usage:**
```bash
mcp call pytest-mcp record_session_start '{"environment": {"os": "macOS", "python_version": "3.13.1"}}'
```
Expected Behavior:
- Clear Previous Data: Deletes the "TestRun_Latest" entity, and any relations where "TestRun_Latest" is the source or target entity, from the memory MCP server. This ensures no accumulation of historical data.
- Create "Env_Current" Entity: Creates an entity named "Env_Current" with the entity type "TestEnvironment" and observations for the operating system and Python version.
- Create "TestRun_Latest" Entity: Creates an entity named "TestRun_Latest" with the entity type "TestRun" and an initial observation like "status: running".
- Create Relation: Creates a relation of type "ran_on" from "TestRun_Latest" to "Env_Current".
Example Interaction (run in cline window):
```bash
use_mcp_tool pytest-mcp record_session_start '{"environment": {"os": "macOS", "python_version": "3.13.1"}}'
```
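For context, a minimal sketch of the pytest side of this call, assuming a hypothetical `call_tool` helper that forwards tool invocations to the server (stubbed with a print here so the sketch runs standalone):

```python
import platform

def call_tool(name, arguments):
    # Hypothetical helper: forwards a tool call to the mcp-pytest server.
    # Stubbed with a print so this sketch is self-contained.
    print("->", name, arguments)

def pytest_sessionstart(session):
    # Fires once at the start of the pytest session.
    call_tool("record_session_start", {
        "environment": {
            "os": platform.system(),
            "python_version": platform.python_version(),
        }
    })
```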
## 2. record_test_outcome [IMPLEMENTED]
Description:
This tool is called after each individual test case has finished executing. It records the outcome of the test (passed, failed, skipped), its duration, and any error information if the test failed.
**Implementation Details:**
- Input validation for nodeid, outcome, duration, and optional error
- Basic response generation with test outcome details
- Error handling for invalid parameters
Input Schema:
{ "nodeid": "string", "outcome": "string (passed|failed|skipped)", "duration": "number", "error": "string (optional)" }
Expected Behavior:
- Create/Update TestCase Entity: Creates or updates an entity with the name matching the nodeid (e.g., "test_module.py::test_function"), setting its entity type to "TestCase".
- Add Outcome Observation: Adds an observation with the format "outcome: <outcome>" to the TestCase entity.
- Add Duration Observation: Adds an observation with the format "duration: <duration>" to the TestCase entity.
- Add Error Observation (if applicable): If the outcome is "failed" and the error field is provided, adds an observation with the format "error: <error message>" to the TestCase entity.
- Create Relation: Creates a relation of type "contains_test" from "TestRun_Latest" to the TestCase entity.
Example Interaction (run in cline window):
```bash
use_mcp_tool pytest-mcp record_test_outcome '{"nodeid": "test_module.py::test_example", "outcome": "passed", "duration": 0.123}'
use_mcp_tool pytest-mcp record_test_outcome '{"nodeid": "test_module.py::test_failure", "outcome": "failed", "duration": 0.05, "error": "AssertionError: ... "}'
```
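On the pytest side, these arguments map naturally onto pytest's TestReport; a minimal sketch, again using the hypothetical `call_tool` stub:

```python
def call_tool(name, arguments):
    # Hypothetical helper, as in the record_session_start sketch.
    print("->", name, arguments)

def pytest_runtest_logreport(report):
    # Only the "call" phase carries the test body's outcome; skips raised
    # during setup would need extra handling, omitted here for brevity.
    if report.when != "call":
        return
    args = {
        "nodeid": report.nodeid,
        "outcome": report.outcome,  # "passed" | "failed" | "skipped"
        "duration": report.duration,
    }
    if report.failed:
        args["error"] = report.longreprtext
    call_tool("record_test_outcome", args)
```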
## 3. record_session_finish [IMPLEMENTED]
Description:
This tool is called at the end of a pytest session. It records summary information about the entire test run, such as the total number of tests, the counts of passed, failed, and skipped tests, and the exit status of the pytest process. It also updates the status of the "TestRun_Latest" entity to "finished".
**Implementation Details:**
- Input validation for summary object
- Basic response generation with session summary
- Error handling for invalid parameters
Input Schema:
{ "summary": { "total_tests": "integer", "passed": "integer", "failed": "integer", "skipped": "integer", "exitstatus": "integer" } }
Expected Behavior:
- Update TestRun_Latest Status: Updates the "TestRun_Latest" entity's observation "status: running" to "status: finished".
- Add Summary Observations: Adds observations to the "TestRun_Latest" entity for total_tests, passed, failed, skipped, and exitstatus based on the input summary.
- Add End Time Observation: Adds an observation with the format "end_time: <timestamp>" to the "TestRun_Latest" entity.
Example Interaction (run in cline window):
```bash
use_mcp_tool pytest-mcp record_session_finish '{"summary": {"total_tests": 10, "passed": 7, "failed": 2, "skipped": 1, "exitstatus": 0}}'
```
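And the corresponding session-finish hook; the per-outcome counts would be accumulated by the plugin during the run, so fixed values stand in for them to keep the sketch self-contained:

```python
def call_tool(name, arguments):
    # Hypothetical helper, as in the record_session_start sketch.
    print("->", name, arguments)

def pytest_sessionfinish(session, exitstatus):
    # Counts are tracked elsewhere in the plugin (not shown).
    call_tool("record_session_finish", {
        "summary": {
            "total_tests": 10,
            "passed": 7,
            "failed": 2,
            "skipped": 1,
            "exitstatus": int(exitstatus),
        }
    })
```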
## Debugging the service
Run the server directly:

```bash
node ~/workspace/mcp-pytest-server/index.js
```

Check the running process and capture MCP traffic:

```bash
ps aux | grep index.js
sudo tcpdump -i any -s 0 -w mcp_traffic.pcap port
```

Then, in the cline window:

```
cline
use_pytest-mcp
```
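For a quick smoke test without any client, you can speak newline-delimited JSON-RPC 2.0 to the server over stdio. A sketch, assuming the server answers tools/list without requiring an initialize handshake first (many MCP servers do require one):

```python
import json
import os
import subprocess

server = os.path.expanduser("~/workspace/mcp-pytest-server/index.js")
proc = subprocess.Popen(["node", server],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE)

# MCP's stdio transport is newline-delimited JSON-RPC 2.0.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
proc.stdin.write((json.dumps(request) + "\n").encode())
proc.stdin.flush()

print(proc.stdout.readline().decode())  # the server's JSON-RPC response
proc.terminate()
```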
# Development
Suggested Optimizations:
## Faster JSON
Use a Faster JSON Library: Replace the built-in json module with orjson for faster parsing and serialization.
```python
import orjson as json
```
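One caveat: orjson.dumps() returns bytes rather than str, so the alias above is not fully drop-in. A minimal sketch of emitting a response with orjson:

```python
import sys

import orjson

# orjson.dumps() returns bytes, so write to the binary stdout buffer
# (or decode) rather than print()-ing the result directly.
payload = orjson.dumps({"type": "response", "id": 1})
sys.stdout.buffer.write(payload + b"\n")
sys.stdout.buffer.flush()
```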
## Dispatch mechanism
Implement a Dispatch Mechanism: Use dictionaries to map request types and tool names to handler functions.
```python
import json
import sys

def send_response(response):
    # Emit one JSON response per line on stdout.
    sys.stdout.write(json.dumps(response) + "\n")
    sys.stdout.flush()

def handle_list_tools(request):
    ...  # build and send the list of available tools

def handle_record_session_start(args):
    ...  # create/reset TestRun_Latest and Env_Current

# ... other tool handlers ...

request_handlers = {
    "list_tools": handle_list_tools,
    "call_tool": {
        "record_session_start": handle_record_session_start,
        # ... other tools ...
    },
}

def handle_request(request):
    request_type = request["type"]
    handler = request_handlers.get(request_type)
    if handler is None:
        send_response({"type": "error", "code": -32601,
                       "message": f"Unknown request type: {request_type}"})
    elif request_type == "call_tool":
        tool_name = request["name"]
        tool_handler = handler.get(tool_name)
        if tool_handler:
            tool_handler(request["arguments"])
        else:
            send_response({"type": "error", "code": -32601,
                           "message": f"Unknown tool: {tool_name}"})
    else:
        handler(request)
```
## Concurrency
Concurrency: Explore using asynchronous programming (e.g., asyncio) or threading to handle multiple requests concurrently. This would require more significant changes to the server's structure.
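A minimal sketch of what an asyncio variant could look like, assuming newline-delimited JSON requests on stdin and the dispatch table from the previous section (stubbed here):

```python
import asyncio
import json
import sys

async def handle_request(request):
    # Placeholder: dispatch to the handler table from the previous section.
    response = {"type": "response", "id": request.get("id")}
    print(json.dumps(response), flush=True)

async def main():
    loop = asyncio.get_running_loop()
    reader = asyncio.StreamReader()
    await loop.connect_read_pipe(
        lambda: asyncio.StreamReaderProtocol(reader), sys.stdin
    )
    pending = set()
    while line := await reader.readline():
        # Schedule each request as a task so one slow handler
        # does not block reading the next request.
        task = asyncio.create_task(handle_request(json.loads(line)))
        pending.add(task)
        task.add_done_callback(pending.discard)
    await asyncio.gather(*pending)

asyncio.run(main())
```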
## Python SDK Implementation Summary
### Current Status
- Python SDK package structure created at ~/workspace/mcp-pytest-server/python-sdk
- Basic package files implemented:
  - setup.py with package configuration
  - src/mcp/__init__.py with version information
- Package successfully installed in development mode using pip install -e .
- PYTHONPATH configuration verified to allow package import
- Currently running as a development installation with full source access
- Service level: Development/Testing (not production-ready)
### Service Level Details
- **Development Mode**: Running with pip install -e . allows for immediate code changes without reinstallation
- **Source Access**: Full access to source code for debugging and development
- **Dependencies**: Managed through setup.py with direct access to local development environment
- **Stability**: Suitable for testing and development, not recommended for production use
- **Performance**: May include debug logging and unoptimized code paths
### Remaining Tasks
- Implement core MCP client functionality in Python SDK
- Add pytest integration hooks
- Create proper test suite for Python SDK
- Publish package to PyPI for easier distribution
- Optimize for production deployment