This is a Model Context Protocol (MCP) server implemented in Go, providing a tool to analyze Go pprof performance profiles.
## Features

- **`analyze_pprof` Tool:**
  - Supported profile types:
    - `cpu`: Analyzes CPU time consumption during code execution to find hot spots.
    - `heap`: Analyzes the current memory usage (heap allocations) to find objects and functions with high memory consumption. Enhanced with object count, allocation site, and type information.
    - `goroutine`: Displays stack traces of all current goroutines, used for diagnosing deadlocks, leaks, or excessive goroutine usage.
    - `allocs`: Analyzes memory allocations (including freed ones) during program execution to locate code with frequent allocations. Provides detailed allocation site and object count information.
    - `mutex`: Analyzes contention on mutexes to find locks causing blocking. (Not yet implemented)
    - `block`: Analyzes operations causing goroutine blocking (e.g., channel waits, system calls). (Not yet implemented)
  - Supported output formats: `text`, `markdown`, `json` (Top N list), `flamegraph-json` (hierarchical flame graph data, default).
    - `text`, `markdown`: Human-readable text or Markdown format.
    - `json`: Outputs Top N results in structured JSON format (implemented for `cpu`, `heap`, `goroutine`, `allocs`).
    - `flamegraph-json`: Outputs hierarchical flame graph data in JSON format, compatible with d3-flame-graph (implemented for `cpu`, `heap`, `allocs`; default format). Output is compact.
  - The number of results can be limited via `top_n` (defaults to 5; effective for the `text`, `markdown`, and `json` formats).
- **`generate_flamegraph` Tool:**
  - Uses `go tool pprof` to generate a flame graph (SVG format) for the specified pprof file, saves it to the specified path, and returns the path and SVG content.
  - Supported profile types: `cpu`, `heap`, `allocs`, `goroutine`, `mutex`, `block`.
  - Requires Graphviz to be installed (see the Graphviz note below).
- **`open_interactive_pprof` Tool (macOS Only):**
  - Attempts to launch the `go tool pprof` interactive web UI in the background for the specified pprof file. Uses port `:8081` by default if `http_address` is not provided.
  - Returns the PID of the background `pprof` process upon successful launch.
  - Requires the `go` command to be available in the system's PATH.
  - Limitations: errors from the background `pprof` process are not captured by the server. Temporary files downloaded from remote URLs are not automatically cleaned up until the process is terminated (either manually via `disconnect_pprof_session` or when the MCP server exits).
- **`detect_memory_leaks` Tool:**
  - Compares two heap profile snapshots and reports allocations whose memory usage grew beyond a configurable threshold, to help surface potential leaks (see the usage example below for the `threshold` and `limit` parameters).
- **`disconnect_pprof_session` Tool:**
  - Attempts to terminate the background `pprof` process previously started by `open_interactive_pprof`, using its PID.

## Installation

You can install this package directly using `go install`:
```bash
go install github.com/ZephyrDeng/pprof-analyzer-mcp@latest
```
This will install the `pprof-analyzer-mcp` executable to your `$GOPATH/bin` or `$HOME/go/bin` directory. Ensure this directory is in your system's PATH to run the command directly.
## Building from Source

Ensure you have a Go environment installed (Go 1.18 or higher recommended).

In the project root directory (`pprof-analyzer-mcp`), run:

```bash
go build
```

This will generate an executable file named `pprof-analyzer-mcp` (or `pprof-analyzer-mcp.exe` on Windows) in the current directory.
### Using `go install` (Recommended)

You can also use `go install` to install the executable into your `$GOPATH/bin` or `$HOME/go/bin` directory. This allows you to run `pprof-analyzer-mcp` directly from the command line (if the directory is added to your system's PATH environment variable).

```bash
# Installs the executable using the module path defined in go.mod
go install .
# Or directly using the GitHub path (recommended after publishing)
# go install github.com/ZephyrDeng/pprof-analyzer-mcp@latest
```
### Running with Docker

Using Docker is a convenient way to run the server, as it bundles the necessary Graphviz dependency.

1. **Build the Docker Image:**

   In the project root directory (where the `Dockerfile` is located), run:

   ```bash
   docker build -t pprof-analyzer-mcp .
   ```
2. **Run the Docker Container:**

   ```bash
   docker run -i --rm pprof-analyzer-mcp
   ```

   The `-i` flag keeps STDIN open, which is required for the stdio transport used by this MCP server. The `--rm` flag automatically removes the container when it exits.

3. **Configure MCP Client for Docker:**

   To connect your MCP client (like Roo Cline) to the server running inside Docker, update your `.roo/mcp.json`:

   ```json
   {
     "mcpServers": {
       "pprof-analyzer-docker": {
         "command": "docker run -i --rm pprof-analyzer-mcp"
       }
     }
   }
   ```

   Make sure the `pprof-analyzer-mcp` image has been built locally before the client tries to run this command.
## Release Process

This project uses GoReleaser and GitHub Actions to automate the release process. Releases are triggered automatically when a Git tag matching the pattern `v*` (e.g., `v0.1.0`, `v1.2.3`) is pushed to the repository.
**Release Steps:**

1. Commit your changes using the Conventional Commits format (e.g., `feat: ...`, `fix: ...`). This is important for automatic changelog generation.

   ```bash
   git add .
   git commit -m "feat: Add awesome new feature"
   # or
   git commit -m "fix: Resolve issue #42"
   ```

2. Push your commits to the main branch:

   ```bash
   git push origin main
   ```

3. Create and push a version tag:

   ```bash
   # Example: Create tag v0.1.0
   git tag v0.1.0
   # Push the tag to GitHub
   git push origin v0.1.0
   ```

4. Pushing the tag triggers the `GoReleaser` GitHub Action defined in `.github/workflows/release.yml`, which builds the release artifacts and publishes the GitHub Release.
You can view the release workflow progress in the "Actions" tab of the GitHub repository.
## Configuring the MCP Client

This server uses the `stdio` transport protocol. You need to configure it in your MCP client (e.g., the Roo Cline extension for VS Code). Typically, this involves adding the following configuration to the `.roo/mcp.json` file in your project root:
```json
{
  "mcpServers": {
    "pprof-analyzer": {
      "command": "pprof-analyzer-mcp"
    }
  }
}
```
Note: Adjust the `command` value based on your build method (`go build` or `go install`) and the actual location of the executable. Ensure the MCP client can find and execute this command.

After configuration, reload or restart your MCP client, and it should automatically connect to the `PprofAnalyzer` server.
## Notes

**Graphviz:** The `generate_flamegraph` tool requires Graphviz to generate SVG flame graphs (the `go tool pprof` command calls `dot` when generating SVG). Ensure Graphviz is installed on your system and the `dot` command is available in your system's PATH environment variable.
**Installing Graphviz:**

```bash
# macOS (Homebrew)
brew install graphviz

# Debian / Ubuntu
sudo apt-get update && sudo apt-get install graphviz

# CentOS / Fedora
sudo yum install graphviz   # or: sudo dnf install graphviz

# Windows (Chocolatey)
choco install graphviz
```
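Before generating SVG flame graphs, it can help to confirm `dot` is actually on the PATH. A small, generic shell check (nothing here is specific to this server):

```shell
# Print the Graphviz version if dot is on PATH, otherwise a hint.
if command -v dot >/dev/null 2>&1; then
  echo "graphviz: $(dot -V 2>&1)"
else
  echo "graphviz: not found (install it before using generate_flamegraph)"
fi
```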
## Usage Examples

Once the server is connected, you can call the `analyze_pprof` and `generate_flamegraph` tools using `file://`, `http://`, or `https://` URIs for the profile file.
Example: Analyze CPU Profile (Text format, Top 5)
```json
{
  "tool_name": "analyze_pprof",
  "arguments": {
    "profile_uri": "file:///path/to/your/cpu.pprof",
    "profile_type": "cpu",
    "output_format": "text"
  }
}
```
Example: Analyze Heap Profile (Markdown format, Top 10)
```json
{
  "tool_name": "analyze_pprof",
  "arguments": {
    "profile_uri": "file:///path/to/your/heap.pprof",
    "profile_type": "heap",
    "top_n": 10,
    "output_format": "markdown"
  }
}
```
Example: Analyze Goroutine Profile (Text format, Top 5)
```json
{
  "tool_name": "analyze_pprof",
  "arguments": {
    "profile_uri": "file:///path/to/your/goroutine.pprof",
    "profile_type": "goroutine",
    "output_format": "text"
  }
}
```
Example: Generate Flame Graph for CPU Profile
```json
{
  "tool_name": "generate_flamegraph",
  "arguments": {
    "profile_uri": "file:///path/to/your/cpu.pprof",
    "profile_type": "cpu",
    "output_svg_path": "/path/to/save/cpu_flamegraph.svg"
  }
}
```
Example: Generate Flame Graph for Heap Profile (inuse_space)
```json
{
  "tool_name": "generate_flamegraph",
  "arguments": {
    "profile_uri": "file:///path/to/your/heap.pprof",
    "profile_type": "heap",
    "output_svg_path": "/path/to/save/heap_flamegraph.svg"
  }
}
```
Example: Analyze CPU Profile (JSON format, Top 3)
```json
{
  "tool_name": "analyze_pprof",
  "arguments": {
    "profile_uri": "file:///path/to/your/cpu.pprof",
    "profile_type": "cpu",
    "top_n": 3,
    "output_format": "json"
  }
}
```
Example: Analyze CPU Profile (Default Flame Graph JSON format)
```json
{
  "tool_name": "analyze_pprof",
  "arguments": {
    "profile_uri": "file:///path/to/your/cpu.pprof",
    "profile_type": "cpu"
    // output_format defaults to "flamegraph-json"
  }
}
```
Example: Analyze Heap Profile (Explicitly Flame Graph JSON format)
```json
{
  "tool_name": "analyze_pprof",
  "arguments": {
    "profile_uri": "file:///path/to/your/heap.pprof",
    "profile_type": "heap",
    "output_format": "flamegraph-json"
  }
}
```
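For reference, d3-flame-graph consumes nested nodes of the form `{name, value, children}`, and the `flamegraph-json` format is described above as compatible with it. An illustrative (not exact) fragment of what such data looks like — the function names and values here are invented:

```json
{
  "name": "root",
  "value": 120,
  "children": [
    { "name": "main.work", "value": 100, "children": [] },
    { "name": "runtime.mallocgc", "value": 20, "children": [] }
  ]
}
```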
Example: Analyze Remote CPU Profile (from HTTP URL)
```json
{
  "tool_name": "analyze_pprof",
  "arguments": {
    "profile_uri": "https://example.com/profiles/cpu.pprof",
    "profile_type": "cpu"
  }
}
```
Example: Analyze Online CPU Profile (from GitHub Raw URL)
```json
{
  "tool_name": "analyze_pprof",
  "arguments": {
    "profile_uri": "https://raw.githubusercontent.com/google/pprof/refs/heads/main/profile/testdata/gobench.cpu",
    "profile_type": "cpu",
    "top_n": 5
  }
}
```
Example: Generate Flame Graph for Online Heap Profile (from GitHub Raw URL)
```json
{
  "tool_name": "generate_flamegraph",
  "arguments": {
    "profile_uri": "https://raw.githubusercontent.com/google/pprof/refs/heads/main/profile/testdata/gobench.heap",
    "profile_type": "heap",
    "output_svg_path": "./online_heap_flamegraph.svg"
  }
}
```
Example: Open Interactive Pprof UI for Online CPU Profile (macOS Only)
```json
{
  "tool_name": "open_interactive_pprof",
  "arguments": {
    "profile_uri": "https://raw.githubusercontent.com/google/pprof/refs/heads/main/profile/testdata/gobench.cpu"
    // Optional: "http_address": ":8082" // Example of overriding the default port
  }
}
```
Example: Detect Memory Leaks Between Two Heap Profiles
```json
{
  "tool_name": "detect_memory_leaks",
  "arguments": {
    "old_profile_uri": "file:///path/to/your/heap_before.pprof",
    "new_profile_uri": "file:///path/to/your/heap_after.pprof",
    "threshold": 0.05, // 5% growth threshold
    "limit": 15        // Show top 15 potential leaks
  }
}
```
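To make the `threshold` semantics concrete, here is a hedged sketch of the kind of comparison a heap-diff leak check performs: flag entries whose byte counts grew by more than the threshold fraction between two snapshots. This is only an illustration of the idea, not the tool's actual implementation; the function name and sample data are invented:

```go
package main

import (
	"fmt"
	"sort"
)

// flagGrowth is an illustrative stand-in for a heap-diff leak check:
// it returns (up to limit) names whose byte counts grew by more than
// threshold (0.05 == 5%) between two heap snapshots.
func flagGrowth(before, after map[string]int64, threshold float64, limit int) []string {
	var flagged []string
	for name, b := range after {
		a := before[name]
		switch {
		case a == 0 && b > 0:
			flagged = append(flagged, name) // newly appearing allocation site
		case a > 0 && float64(b-a)/float64(a) > threshold:
			flagged = append(flagged, name)
		}
	}
	sort.Strings(flagged) // deterministic output
	if len(flagged) > limit {
		flagged = flagged[:limit]
	}
	return flagged
}

func main() {
	before := map[string]int64{"cache.Add": 1000, "parse.Run": 500}
	after := map[string]int64{"cache.Add": 1200, "parse.Run": 510}
	// cache.Add grew 20% (> 5%), parse.Run grew 2% (<= 5%)
	fmt.Println(flagGrowth(before, after, 0.05, 15)) // [cache.Add]
}
```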
Example: Disconnect a Pprof Session
```json
{
  "tool_name": "disconnect_pprof_session",
  "arguments": {
    "pid": 12345 // Replace 12345 with the actual PID returned by open_interactive_pprof
  }
}
```
## Future Improvements (TODO)

- [ ] Implement analysis for `mutex` and `block` profiles.
- [ ] Support the `json` output format for the `mutex` and `block` profile types.
- [x] Support remote profile files via `http://` and `https://` URIs.
- [x] Implement analysis for `allocs` profiles.
- [x] Support the `json` output format for the `allocs` profile type.