deepseek-r1-distill-qwen-32b by openrouter - AI Model Details, Pricing, and Performance Metrics

deepseek / deepseek-r1-distill-qwen-32b · completions · by openrouter

DeepSeek R1 Distill Qwen 32B is a distilled large language model based on [Qwen 2.5 32B](https://huggingface.co/Qwen/Qwen2.5-32B), fine-tuned on outputs from [DeepSeek R1](/deepseek/deepseek-r1). It outperforms OpenAI's o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.

Benchmark results include:

- AIME 2024 pass@1: 72.6
- MATH-500 pass@1: 94.3
- CodeForces rating: 1691

Because it is fine-tuned on DeepSeek R1's outputs, the model achieves performance comparable to larger frontier models.

Released: Jan 20, 2025
Knowledge cutoff: Jul 24, 2024
License: MIT
Context: 131,072 tokens
Input: $0.29 / 1M tokens
Output: $1.78 / 1M tokens
Accepts: text
Returns: text
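
At these rates, per-request cost can be estimated directly from token counts. The sketch below is a minimal Python illustration; the token counts used in the example are hypothetical.

```python
# Back-of-the-envelope cost estimate at the listed rates:
# $0.29 per 1M input tokens, $1.78 per 1M output tokens.
INPUT_PRICE_PER_M = 0.29
OUTPUT_PRICE_PER_M = 1.78

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the approximate request cost in USD."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 2,000-token prompt with a 1,500-token reasoning-heavy reply.
print(f"${estimate_cost(2_000, 1_500):.6f}")  # ≈ $0.003250
```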

Access deepseek-r1-distill-qwen-32b through LangDB AI Gateway


Integrate with deepseek's deepseek-r1-distill-qwen-32b and 250+ other models through a unified API (see the sketch below). Monitor usage, control costs, and enhance security.

Unified API
Cost Optimization
Enterprise Security
Get Started Now

Free tier available • No credit card required

Instant Setup
99.9% Uptime
10,000+ Monthly Requests
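
As a sketch of the unified-API approach described above, the snippet below points the OpenAI Python client at an OpenAI-compatible gateway endpoint. The base URL and environment-variable names are assumptions to confirm against the LangDB documentation; only the model name comes from this page.

```python
# Minimal sketch: call the model through an OpenAI-compatible gateway endpoint.
# LANGDB_BASE_URL and LANGDB_API_KEY are assumed variable names; substitute the
# values from your own gateway project settings.
import os
from openai import OpenAI

client = OpenAI(
    base_url=os.environ["LANGDB_BASE_URL"],  # gateway endpoint (assumed)
    api_key=os.environ["LANGDB_API_KEY"],    # gateway API key (assumed)
)

response = client.chat.completions.create(
    model="deepseek/deepseek-r1-distill-qwen-32b",  # model as listed on this page
    messages=[
        {"role": "user", "content": "Prove that the square root of 2 is irrational."}
    ],
)
print(response.choices[0].message.content)
```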

Category Scores

Benchmark Tests

| Benchmark | Score | Category |
| --- | --- | --- |
| AIME | 68.7 | Mathematics |
| AA Coding Index | 32.3 | Programming |
| AAII | 32.7 | General |
| AA Math Index | 63.0 | Mathematics |
| GPQA | 61.8 | STEM (Physics, Chemistry, Biology) |
| HLE | 5.5 | General Knowledge |
| LiveCodeBench | 27.0 | Programming |
| MATH-500 | 94.1 | Mathematics |
| MMLU-Pro | 73.9 | General Knowledge |
| SciCode | 37.6 | Scientific |

Code Examples

Integration samples and API usage
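
A minimal sketch of direct API usage follows, targeting OpenRouter's OpenAI-compatible chat-completions endpoint with the `requests` library. It assumes an API key in the `OPENROUTER_API_KEY` environment variable; the model slug is taken from this page's listing and should be verified against the provider catalog.

```python
# Sketch: POST a chat-completion request to OpenRouter's OpenAI-compatible API.
import os
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "deepseek/deepseek-r1-distill-qwen-32b",  # slug from this page (verify)
        "messages": [
            {"role": "user", "content": "How many positive divisors does 2024 have?"}
        ],
        "max_tokens": 1024,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```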