magistral-medium-2506
Provider: openrouter · Type: completions

Magistral is Mistral's first reasoning model. It is ideal for general-purpose use cases that require longer thought processing and better accuracy than non-reasoning LLMs. From legal research and financial forecasting to software development and creative storytelling, this model solves multi-step challenges where transparency and precision are critical.

Input: $2 / 1M tokens
Output: $5 / 1M tokens
Context: 40,960 tokens
Capabilities: tools, text input, text output

Access magistral-medium-2506 through LangDB AI Gateway


Integrate with mistralai's magistral-medium-2506 and 250+ other models through a unified API. Monitor usage, control costs, and enhance security.

Unified API
Cost Optimization
Enterprise Security
Get Started Now

Free tier available • No credit card required

Instant Setup
99.9% Uptime
10,000+ Monthly Requests
Code Example
Configuration
  • Base URL
  • API Keys
  • Headers: Project ID in header, X-Run-Id, X-Thread-Id
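A minimal sketch of assembling a request to the gateway with these headers. The base URL and the exact name of the project-ID header are assumptions (copy the real values from your dashboard); `X-Run-Id` and `X-Thread-Id` are the tracing headers listed above.

```python
# Build the URL, headers, and body for a chat-completions call to the gateway.
# BASE_URL and the "x-project-id" header name are assumptions for illustration.
import json

BASE_URL = "https://api.langdb.ai/v1"  # assumed; use the Base URL from your dashboard


def build_chat_request(api_key, project_id, run_id=None, thread_id=None):
    """Assemble the URL, headers, and JSON body for a completion request."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
        "x-project-id": project_id,  # header name is an assumption
    }
    if run_id:
        headers["X-Run-Id"] = run_id        # groups related calls into one run
    if thread_id:
        headers["X-Thread-Id"] = thread_id  # ties calls to a conversation thread
    body = {
        "model": "mistralai/magistral-medium-2506",
        "messages": [{"role": "user", "content": "Summarize this contract clause."}],
    }
    return f"{BASE_URL}/chat/completions", headers, json.dumps(body)


url, headers, body = build_chat_request("sk-...", "my-project", run_id="run-1")
# To send: requests.post(url, headers=headers, data=body)
```

The actual HTTP call is left commented out; any HTTP client works since the endpoint is plain JSON over POST.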
Model Parameters (12 available)
  • frequency_penalty (min -2, default 0, max 2)
  • include_reasoning
  • max_tokens
  • presence_penalty (min -2, default 0, max 1.999)
  • response_format
  • seed
  • stop
  • structured_outputs
  • temperature (min 0, default 1, max 2)
  • tool_choice
  • tools
  • top_p (min 0, default 1, max 1)
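These parameters map directly onto the JSON body of a completion request. A minimal sketch with illustrative values chosen inside the documented ranges (the prompt and values are placeholders, not recommendations):

```python
# Example request body exercising the sampling parameters listed above.
# Values are illustrative; the range comments follow the documented limits.
payload = {
    "model": "mistralai/magistral-medium-2506",
    "messages": [{"role": "user", "content": "Walk through this proof step by step."}],
    "temperature": 0.7,        # 0 to 2, default 1
    "top_p": 0.95,             # 0 to 1, default 1
    "max_tokens": 2048,        # leave room inside the 40,960-token context
    "frequency_penalty": 0.2,  # -2 to 2, default 0
    "presence_penalty": 0.0,   # -2 to 1.999, default 0
    "seed": 42,                # best-effort reproducibility
    "stop": ["###"],           # optional stop sequences
    "include_reasoning": True, # also return the model's reasoning tokens
}


def validate_sampling(p):
    """Check that the sampling parameters fall inside the documented ranges."""
    assert 0 <= p["temperature"] <= 2
    assert 0 <= p["top_p"] <= 1
    assert -2 <= p["frequency_penalty"] <= 2
    assert -2 <= p["presence_penalty"] <= 1.999
    return True


validate_sampling(payload)
```

`response_format`, `structured_outputs`, `tools`, and `tool_choice` take JSON objects rather than scalars, so they are omitted from this sketch.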
Additional Configuration
Tools
Guards
Publicly Shared Threads: 0

Discover shared experiences

Shared threads will appear here, showcasing real-world applications and insights from the community. Check back soon for updates!

Share your threads to help others
Popular Models (10)
  • deepseek
    DeepSeek V3, a 685B-parameter mixture-of-experts model, is the latest iteration of the flagship chat model family from the DeepSeek team. It succeeds the [DeepSeek V3](/deepseek/deepseek-chat-v3) model and performs well across a variety of tasks.
    Input: $0.25 / 1M tokens
    Output: $0.85 / 1M tokens
    Context: 163,840 tokens
    Capabilities: text input, text output
  • deepseek
    DeepSeek-Chat is an advanced conversational AI model designed to provide intelligent responses.
    Input: $0.14 / 1M tokens
    Output: $0.28 / 1M tokens
    Context: 64K tokens
    Capabilities: tools, text input, text output
  • deepseek
    May 28th update to the [original DeepSeek R1](/deepseek/deepseek-r1). Performance on par with [OpenAI o1](/openai/o1), but open-sourced and with fully open reasoning tokens. It's 671B parameters in size, with 37B active in an inference pass. Fully open-source model.
    Input: $0.27 / 1M tokens
    Output: $0.27 / 1M tokens
    Context: 163,840 tokens
    Capabilities: text input, text output
  • z-ai
    GLM-4.5 is our latest flagship foundation model, purpose-built for agent-based applications. It leverages a Mixture-of-Experts (MoE) architecture and supports a context length of up to 128k tokens. GLM-4.5 delivers significantly enhanced capabilities in reasoning, code generation, and agent alignment. It supports a hybrid inference mode with two options: a "thinking mode" designed for complex reasoning and tool use, and a "non-thinking mode" optimized for instant responses. Users can control the reasoning behaviour with the `reasoning.enabled` boolean. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config)
    Input: $0.20 / 1M tokens
    Output: $0.20 / 1M tokens
    Context: 131,072 tokens
    Capabilities: tools, text input, text output
  • deepseek
    DeepSeek R1 is here: Performance on par with [OpenAI o1](/openai/o1), but open-sourced and with fully open reasoning tokens. It's 671B parameters in size, with 37B active in an inference pass. Fully open-source model & [technical report](https://api-docs.deepseek.com/news/news250120). MIT licensed: Distill & commercialize freely!
    Input: $0.40 / 1M tokens
    Output: $2 / 1M tokens
    Context: 163,840 tokens
    Capabilities: text input, text output
  • openai
    GPT-4.1 mini provides a balance between intelligence, speed, and cost that makes it an attractive model for many use cases.
    Input: $0.40 / 1M tokens
    Output: $1.60 / 1M tokens
    Context: 1,047,576 tokens
    Capabilities: tools, text and image input, text output
  • openai
    GPT-4.1 is OpenAI's flagship model for complex tasks. It is well suited for problem solving across domains.
    Input: $2 / 1M tokens
    Output: $8 / 1M tokens
    Context: 1,047,576 tokens
    Capabilities: tools, text and image input, text output
  • anthropic
    Intelligent model with visible step-by-step reasoning.
    Input: $3 / 1M tokens
    Output: $15 / 1M tokens
    Context: 200K tokens
    Capabilities: tools, text and image input, text output
  • qwen
    Qwen3-235B-A22B-Instruct-2507 is a multilingual, instruction-tuned mixture-of-experts language model based on the Qwen3-235B architecture, with 22B active parameters per forward pass. It is optimized for general-purpose text generation, including instruction following, logical reasoning, math, code, and tool usage. The model supports a native 262K context length and does not implement "thinking mode" (<think> blocks). Compared to its base variant, this version delivers significant gains in knowledge coverage, long-context reasoning, coding benchmarks, and alignment with open-ended tasks. It is particularly strong on multilingual understanding, math reasoning (e.g., AIME, HMMT), and alignment evaluations like Arena-Hard and WritingBench.
    Input: $0.12 / 1M tokens
    Output: $0.12 / 1M tokens
    Context: 262,144 tokens
    Capabilities: text input, text output
  • openai
    GPT-4.1 nano is the fastest, most cost-effective GPT-4.1 model.
    Input: $0.10 / 1M tokens
    Output: $0.40 / 1M tokens
    Context: 1,047,576 tokens
    Capabilities: tools, text and image input, text output