openrouter / qwen-2.5-72b-instruct (completions)

Qwen2.5 72B is the latest series of Qwen large language models. Qwen2.5 brings the following improvements over Qwen2:
- Significantly more knowledge and greatly improved capabilities in coding and mathematics, thanks to specialized expert models in these domains.
- Significant improvements in instruction following, generating long texts (over 8K tokens), understanding structured data (e.g., tables), and generating structured outputs, especially JSON. More resilient to diverse system prompts, improving role-play implementation and condition-setting for chatbots.
- Long-context support up to 128K tokens, with generation of up to 8K tokens.
- Multilingual support for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.

Usage of this model is subject to the [Tongyi Qianwen LICENSE AGREEMENT](https://huggingface.co/Qwen/Qwen1.5-110B-Chat/blob/main/LICENSE).
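Given the emphasis on structured JSON output, here is a minimal sketch of requesting JSON from this model through an OpenAI-compatible endpoint. The base URL and API key are placeholders, and `response_format` is honored only where the serving provider supports it, so the prompt also spells out the expected shape:

```python
from openai import OpenAI

# Placeholder endpoint and key -- substitute your gateway's actual values.
client = OpenAI(base_url="https://api.langdb.ai/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="qwen/qwen-2.5-72b-instruct",
    messages=[{
        "role": "user",
        "content": 'List three Qwen2.5 improvements as JSON: {"improvements": ["..."]}',
    }],
    # Not all providers honor response_format; the prompt above is the fallback.
    response_format={"type": "json_object"},
)
print(response.choices[0].message.content)
```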

Input: $0.1 / 1M tokens
Output: $0.1 / 1M tokens
Context: 32,768 tokens
Modalities: text → text

Access qwen-2.5-72b-instruct through LangDB AI Gateway

Recommended

Integrate with Qwen's qwen-2.5-72b-instruct and 250+ other models through a unified API. Monitor usage, control costs, and enhance security.

Unified API
Cost Optimization
Enterprise Security
Get Started Now

Free tier available • No credit card required

Instant Setup
99.9% Uptime
10,000+ Monthly Requests
Code Example
Configuration: Base URL, API Keys, and Headers (Project ID in header, X-Run-Id, X-Thread-Id).
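A minimal sketch of what this configuration might look like with the OpenAI-compatible Python SDK. The base URL and header values below mirror the field labels above (Base URL, Project ID in header, X-Run-Id, X-Thread-Id) but are placeholders; take the real values from your LangDB project settings:

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.langdb.ai/v1",    # placeholder -- use the Base URL shown above
    api_key="YOUR_LANGDB_API_KEY",
    default_headers={
        "x-project-id": "YOUR_PROJECT_ID",  # "Project ID in header"
        "X-Run-Id": "run-001",              # optional run/thread tracing headers
        "X-Thread-Id": "thread-001",
    },
)

response = client.chat.completions.create(
    model="qwen/qwen-2.5-72b-instruct",
    messages=[{"role": "user", "content": "Say hello in five languages."}],
)
print(response.choices[0].message.content)
```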
Model Parameters (13 available)
frequency_penalty (min -2, default 0, max 2)
logit_bias
logprobs
max_tokens
min_p (min 0, default 0, max 1)
presence_penalty (min -2, default 0, max 1.999)
repetition_penalty (min 0, default 1, max 2)
seed
stop
temperature (min 0, default 1, max 2)
top_k
top_logprobs
top_p (min 0, default 1, max 1)
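As an illustration, a request overriding a few of the sampling parameters above might look like this; the parameter names match the list, the values are arbitrary examples, and the endpoint is again a placeholder:

```python
from openai import OpenAI

client = OpenAI(base_url="https://api.langdb.ai/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="qwen/qwen-2.5-72b-instruct",
    messages=[{"role": "user", "content": "Write a haiku about gateways."}],
    temperature=0.7,        # range 0-2, default 1
    top_p=0.9,              # range 0-1, default 1
    max_tokens=256,
    frequency_penalty=0.5,  # range -2 to 2, default 0
    seed=42,                # reproducible sampling where supported
)
print(response.choices[0].message.content)
```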
Additional Configuration
Tools
Guards
Publicly Shared Threads (5)
  • openrouter
    Explains how Andrew has N+1 sisters by including Alice among the N sisters and considering family from Andrew's viewpoint.
    counting siblings
    family relationships
    sisters and brothers math
    perspective on siblings
  • openrouter
    Trains A and B meet at 11:17 AM, 137 km east of Station X, accounting for Train A's one-hour head start.
    train meeting time calculation
    relative speed problem
    train head start distance
    train collision point calculation
  • openrouter
    Python code implementing the A* algorithm with a Manhattan-distance heuristic to find the shortest path on a grid, returning the path or an empty list if none is found (see the sketch after this list).
    a* pathfinding python
    manhattan distance heuristic
    grid shortest path
    pathfinding algorithm example
  • openrouter
    Detailed 3-step weighing plan to find the single lighter counterfeit coin among 12 using 3 weighings, narrowing down groups each step.
    counterfeit coin problem
    balance scale puzzle
    three weighings strategy
    identify lighter fake coin
  • openrouter
    Solved 3x + 5y = 100 by finding one solution (200, -100), deriving general form x=200+5k, y=-100-3k, then listing integer pairs with non-negative x,y.
    diophantine equation solutions
    3x plus 5y equals 100
    integer solutions to linear equations
    general solution diophantine
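For the pathfinding thread above, a minimal sketch of the kind of A* implementation it describes (Manhattan-distance heuristic, empty list when no path exists). The grid encoding (0 = free, 1 = wall) and 4-way movement are assumptions, not details taken from the thread:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2D grid (0 = free, 1 = wall) with a Manhattan-distance heuristic.
    Returns the path from start to goal as a list of (row, col), or [] if none."""
    def h(p):  # Manhattan distance to the goal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    open_heap = [(h(start), 0, start)]  # entries are (f = g + h, g, node)
    came_from = {}
    best_g = {start: 0}

    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:                 # reconstruct path by walking back
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    came_from[nxt] = node
                    heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return []  # no path exists

# Example: 3x3 grid with a wall across the middle row
print(astar([[0, 0, 0], [1, 1, 0], [0, 0, 0]], (0, 0), (2, 0)))
```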
Popular Models (10)
  • deepseek
    DeepSeek V3, a 685B-parameter mixture-of-experts model, is the latest iteration of the flagship chat model family from the DeepSeek team. It succeeds the [DeepSeek V3](/deepseek/deepseek-chat-v3) model and performs well across a variety of tasks.
    Input: $0.25 / 1M tokens
    Output: $0.85 / 1M tokens
    Context: 163,840 tokens
    Modalities: text → text
  • deepseek
    May 28th update to the [original DeepSeek R1](/deepseek/deepseek-r1). Performance is on par with [OpenAI o1](/openai/o1), but open-sourced and with fully open reasoning tokens. It's 671B parameters in size, with 37B active per inference pass. Fully open-source model.
    Input: $0.27 / 1M tokens
    Output: $0.27 / 1M tokens
    Context: 163,840 tokens
    Modalities: text → text
  • deepseek
    DeepSeek-V3 is the latest model from the DeepSeek team, building upon the instruction-following and coding abilities of previous versions. Pre-trained on nearly 15 trillion tokens, the model outperforms other open-source models in reported evaluations and rivals leading closed-source models. For model details, visit [the DeepSeek-V3 repo](https://github.com/deepseek-ai/DeepSeek-V3) or see the [launch announcement](https://api-docs.deepseek.com/news/news1226).
    Input: $0.27 / 1M tokens
    Output: $0.27 / 1M tokens
    Context: 163,840 tokens
    Modalities: text → text
  • meta-llama
    Llama Guard 4 is a Llama 4 Scout-derived multimodal pretrained model, fine-tuned for content safety classification. Like previous versions, it can classify content in both LLM inputs (prompt classification) and LLM responses (response classification). It acts as an LLM, generating text that indicates whether a given prompt or response is safe or unsafe and, if unsafe, listing the content categories violated. Llama Guard 4 was aligned to safeguard against the standardized MLCommons hazards taxonomy and designed to support multimodal Llama 4 capabilities. Specifically, it combines features from previous Llama Guard models, providing content moderation for English and multiple supported languages, along with enhanced capabilities to handle mixed text-and-image prompts, including multiple images. Additionally, Llama Guard 4 is integrated into the Llama Moderations API, extending robust safety classification to text and images.
    Input: $0.05 / 1M tokens
    Output: $0.05 / 1M tokens
    Context: 163,840 tokens
    Modalities: text, image → text
  • qwen
    Qwen3-235B-A22B-Instruct-2507 is a multilingual, instruction-tuned mixture-of-experts language model based on the Qwen3-235B architecture, with 22B active parameters per forward pass. It is optimized for general-purpose text generation, including instruction following, logical reasoning, math, code, and tool usage. The model supports a native 262K context length and does not implement "thinking mode" (<think> blocks). Compared to its base variant, this version delivers significant gains in knowledge coverage, long-context reasoning, coding benchmarks, and alignment with open-ended tasks. It is particularly strong on multilingual understanding, math reasoning (e.g., AIME, HMMT), and alignment evaluations like Arena-Hard and WritingBench.
    Input: $0.12 / 1M tokens
    Output: $0.12 / 1M tokens
    Context: 262,144 tokens
    Modalities: text → text
  • anthropic
    Intelligent model with visible step-by-step reasoning.
    Input: $3 / 1M tokens
    Output: $15 / 1M tokens
    Context: 200K tokens
    Modalities: text, image → text (tool calling supported)
  • openai
    GPT-4o mini (o for "omni") is a fast, affordable small model for focused tasks. It accepts both text and image inputs and produces text outputs (including Structured Outputs). It is ideal for fine-tuning, and outputs from a larger model like GPT-4o can be distilled to GPT-4o mini to produce similar results at lower cost and latency. The knowledge cutoff for GPT-4o mini models is October 2023.
    Input: $0.15 / 1M tokens
    Output: $0.6 / 1M tokens
    Context: 128K tokens
    Modalities: text, image → text (tool calling supported)
  • deepseek
    DeepSeek R1 is here: performance on par with [OpenAI o1](/openai/o1), but open-sourced and with fully open reasoning tokens. It's 671B parameters in size, with 37B active per inference pass. Fully open-source model and [technical report](https://api-docs.deepseek.com/news/news250120). MIT licensed: distill and commercialize freely!
    Input: $0.4 / 1M tokens
    Output: $2 / 1M tokens
    Context: 163,840 tokens
    Modalities: text → text
  • deepseek
    DeepSeek-Reasoner is an advanced AI model designed to enhance logical reasoning and problem-solving capabilities, leveraging deep learning techniques to provide accurate and contextually relevant insights across various domains.
    Input: $0.55 / 1M tokens
    Output: $2.19 / 1M tokens
    Context: 64K tokens
    Modalities: text → text
  • anthropic
    Our high-performance model with exceptional reasoning and efficiency.
    Input: $3 / 1M tokens
    Output: $15 / 1M tokens
    Context: 200K tokens
    Modalities: text, image → text (tool calling supported)