mercury-coder by openrouter - AI Model Details, Pricing, and Performance Metrics

inception / mercury-coder
Type: completions · Provider: openrouter

Mercury Coder is the first diffusion large language model (dLLM). Applying a breakthrough discrete diffusion approach, the model runs 5-10x faster than even speed-optimized models like Claude 3.5 Haiku and GPT-4o Mini while matching their performance. Mercury Coder's speed means that developers can stay in the flow while coding, enjoying rapid chat-based iteration and responsive code-completion suggestions. On Copilot Arena, Mercury Coder ranks 1st in speed and ties for 2nd in quality. Read more in the [blog post here](https://www.inceptionlabs.ai/introducing-mercury).

Context: 128K tokens
Input: $0.25 / 1M tokens
Output: $1 / 1M tokens
Capabilities: tools
Accepts: text
Returns: text
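At the listed rates ($0.25 per 1M input tokens, $1 per 1M output tokens), the cost of a single request can be estimated as a sketch like this; the function name is illustrative, not part of any API:

```python
def request_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of one mercury-coder request at the listed rates."""
    INPUT_RATE = 0.25 / 1_000_000   # $0.25 per 1M input tokens
    OUTPUT_RATE = 1.00 / 1_000_000  # $1.00 per 1M output tokens
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 2,000-token prompt with a 500-token completion
# costs 2000 * 0.25/1e6 + 500 * 1/1e6 = $0.001
```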

Access mercury-coder through LangDB AI Gateway


Integrate with inception's mercury-coder and 250+ other models through a unified API. Monitor usage, control costs, and enhance security.

Unified API
Cost Optimization
Enterprise Security

Free tier available • No credit card required

Instant Setup
99.9% Uptime
10,000+ Monthly Requests

Code Examples

Integration samples and API usage
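Models listed here are typically reachable through an OpenAI-compatible chat-completions endpoint. The sketch below builds such a request for mercury-coder using only the Python standard library; the base URL, model identifier, and environment-variable name are assumptions to be checked against your gateway's documentation, not confirmed values:

```python
import json
import os
import urllib.request

# Assumed OpenAI-compatible endpoint and model id; verify against your gateway docs.
BASE_URL = "https://openrouter.ai/api/v1/chat/completions"
MODEL_ID = "inception/mercury-coder"


def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build a chat-completions HTTP request (payload shape per the OpenAI convention)."""
    payload = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        BASE_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )


if __name__ == "__main__":
    # Assumes an API key in the OPENROUTER_API_KEY environment variable.
    req = build_request("Write a binary search in Python.", os.environ["OPENROUTER_API_KEY"])
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

Separating request construction from sending makes the payload easy to inspect or unit-test before any network call is made.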