openrouter / mistral-medium-3 (completions)

Mistral Medium 3 is a high-performance, enterprise-grade language model designed to deliver frontier-level capabilities at significantly reduced operational cost. It combines state-of-the-art reasoning and multimodal performance with 8× lower cost than traditional large models, making it suitable for scalable deployments across professional and industrial use cases. The model excels in domains such as coding, STEM reasoning, and enterprise adaptation. It supports hybrid, on-prem, and in-VPC deployments and is optimized for integration into custom workflows. Mistral Medium 3 offers competitive accuracy relative to larger models like Claude Sonnet 3.5/3.7, Llama 4 Maverick, and Command R+, while maintaining broad compatibility across cloud environments.

Input: $0.4 / 1M tokens
Output: $2 / 1M tokens
Context: 131072 tokens
Capabilities: tools · Modalities: text, image → text

Access mistral-medium-3 through LangDB AI Gateway

Integrate with mistralai's mistral-medium-3 and 250+ other models through a unified API. Monitor usage, control costs, and enhance security.

Configuration
  • Base URL
  • API Keys
  • Headers: Project ID in header, X-Run-Id, X-Thread-Id

Code Example
11 available
frequency_penalty
-202
max_tokens
presence_penalty
-201.999
response_format
seed
stop
structured_outputs
temperature
012
tool_choice
tools
top_p
011
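
As a sketch of how these ranges map onto a request, under the same OpenAI-compatible assumption as above (the base URL and all parameter values below are illustrative, not recommendations):

```python
# Sketch: the table's sampling parameters map one-to-one onto request
# fields in the OpenAI-compatible schema. Values here are illustrative.
from openai import OpenAI

client = OpenAI(base_url="https://api.langdb.ai/v1", api_key="YOUR_LANGDB_API_KEY")

response = client.chat.completions.create(
    model="openrouter/mistral-medium-3",
    messages=[{"role": "user", "content": "List three MoE routing strategies as JSON."}],
    temperature=0.7,         # range 0 to 2, default 1
    top_p=0.9,               # range 0 to 1, default 1
    frequency_penalty=0.2,   # range -2 to 2, default 0
    presence_penalty=0.0,    # range -2 to 1.999, default 0
    max_tokens=512,          # cap on generated tokens
    seed=42,                 # best-effort reproducible sampling
    stop=["\n\n"],           # stop sequence(s)
    response_format={"type": "json_object"},  # structured/JSON output
)
print(response.choices[0].message.content)
```
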
Additional Configuration
  • Tools
  • Guards
  • User metadata: Id, Name, Tags
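
For the Tools entry above, here is a hedged sketch of attaching a function definition through the `tools` and `tool_choice` parameters from the table; the `get_weather` function is hypothetical, purely for illustration:

```python
# Sketch: wiring a (hypothetical) function tool into a request using the
# `tools` and `tool_choice` parameters listed in the table above.
from openai import OpenAI

client = OpenAI(base_url="https://api.langdb.ai/v1", api_key="YOUR_LANGDB_API_KEY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="openrouter/mistral-medium-3",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    tool_choice="auto",  # let the model decide whether to call the tool
)

# If the model chose to call the tool, the call arrives as structured data:
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```
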
Popular Models (10)
  • deepseek
    DeepSeek V3, a 685B-parameter mixture-of-experts model, is the latest iteration of the flagship chat model family from the DeepSeek team. It succeeds the [DeepSeek V3](/deepseek/deepseek-chat-v3) model and performs well across a variety of tasks.
    Input: $0.25 / 1M tokens
    Output: $0.85 / 1M tokens
    Context: 163840 tokens
    Modalities: text → text
  • deepseek
    May 28th update to the [original DeepSeek R1](/deepseek/deepseek-r1). Performance on par with [OpenAI o1](/openai/o1), but open-sourced and with fully open reasoning tokens. It's 671B parameters in size, with 37B active in an inference pass.
    Input: $0.27 / 1M tokens
    Output: $0.27 / 1M tokens
    Context: 163840 tokens
    Modalities: text → text
  • gemini
    Google's most capable multimodal model, with strong performance across all tasks and a 1 million token context window, built for the era of Agents.
    Input: $0.1 / 1M tokens
    Output: $0.4 / 1M tokens
    Context: 1M tokens
    Capabilities: tools · Modalities: text, image, audio, video → text
  • deepseek
    DeepSeek R1 is here: Performance on par with [OpenAI o1](/openai/o1), but open-sourced and with fully open reasoning tokens. It's 671B parameters in size, with 37B active in an inference pass. Fully open-source model & [technical report](https://api-docs.deepseek.com/news/news250120). MIT licensed: Distill & commercialize freely!
    Input: $0.4 / 1M tokens
    Output: $2 / 1M tokens
    Context: 163840 tokens
    Modalities: text → text
  • deepseek
    DeepSeek-Reasoner is an advanced AI model designed to enhance logical reasoning and problem-solving capabilities, leveraging deep learning techniques to provide accurate and contextually relevant insights across various domains.
    Input: $0.55 / 1M tokens
    Output: $2.19 / 1M tokens
    Context: 64K tokens
    Modalities: text → text
  • deepseek
    DeepSeek-V3 is the latest model from the DeepSeek team, building upon the instruction following and coding abilities of the previous versions. Pre-trained on nearly 15 trillion tokens, it outperforms other open-source models and rivals leading closed-source models in reported evaluations. For model details, see [the DeepSeek-V3 repo](https://github.com/deepseek-ai/DeepSeek-V3) or the [launch announcement](https://api-docs.deepseek.com/news/news1226).
    Input: $0.27 / 1M tokens
    Output: $0.27 / 1M tokens
    Context: 163840 tokens
    Modalities: text → text
  • qwen
    Qwen3-235B-A22B-Instruct-2507 is a multilingual, instruction-tuned mixture-of-experts language model based on the Qwen3-235B architecture, with 22B active parameters per forward pass. It is optimized for general-purpose text generation, including instruction following, logical reasoning, math, code, and tool usage. The model supports a native 262K context length and does not implement "thinking mode" (<think> blocks). Compared to its base variant, this version delivers significant gains in knowledge coverage, long-context reasoning, coding benchmarks, and alignment with open-ended tasks. It is particularly strong on multilingual understanding, math reasoning (e.g., AIME, HMMT), and alignment evaluations like Arena-Hard and WritingBench.
    Input: $0.12 / 1M tokens
    Output: $0.12 / 1M tokens
    Context: 262144 tokens
    Modalities: text → text
  • deepseek
    DeepSeek-Chat is an advanced conversational AI model designed to provide intelligent, context-aware responses.
    Input: $0.14 / 1M tokens
    Output: $0.28 / 1M tokens
    Context: 64K tokens
    Capabilities: tools · Modalities: text → text
  • anthropic
    Intelligent model with visible step‑by‑step reasoning
    Input: $3 / 1M tokens
    Output: $15 / 1M tokens
    Context: 200K tokens
    Capabilities: tools · Modalities: text, image → text
  • anthropic
    Our high-performance model with exceptional reasoning and efficiency
    Input: $3 / 1M tokens
    Output: $15 / 1M tokens
    Context: 200K tokens
    Capabilities: tools · Modalities: text, image → text