deepseek-r1-distill-qwen-32b by openrouter - AI Model Details, Pricing, and Performance Metrics
deepseek-r1-distill-qwen-32b
DeepSeek R1 Distill Qwen 32B is a distilled large language model based on [Qwen 2.5 32B](https://huggingface.co/Qwen/Qwen2.5-32B), fine-tuned on outputs from [DeepSeek R1](/deepseek/deepseek-r1). It outperforms OpenAI's o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.

Other benchmark results include:

- AIME 2024 pass@1: 72.6
- MATH-500 pass@1: 94.3
- CodeForces rating: 1691

The model leverages fine-tuning from DeepSeek R1's outputs, enabling competitive performance comparable to larger frontier models.
Access deepseek-r1-distill-qwen-32b through LangDB AI Gateway
Integrate with DeepSeek's deepseek-r1-distill-qwen-32b and 250+ other models through a unified API. Monitor usage, control costs, and enhance security.
Free tier available • No credit card required
Category Scores
Benchmark Tests
| Metric | AIME | AA Coding Index | AAII | AA Math Index | GPQA | HLE | LiveCodeBench | MATH-500 | MMLU-Pro | SciCode |
|---|---|---|---|---|---|---|---|---|---|---|
| Score | 68.7 | 32.3 | 32.7 | 63.0 | 61.8 | 5.5 | 27.0 | 94.1 | 73.9 | 37.6 |
Compare with Similar Models
Code Examples
Integration samples and API usage
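A minimal sketch of calling the model through an OpenAI-compatible chat-completions endpoint. The base URL, auth header, and `LANGDB_API_KEY` environment variable name below are assumptions for illustration; check your gateway project settings for the actual values.

```python
# Sketch: call deepseek-r1-distill-qwen-32b via an OpenAI-compatible
# chat-completions endpoint, using only the Python standard library.
import json
import os
import urllib.request

BASE_URL = "https://api.langdb.ai/v1"  # assumed gateway base URL


def build_request(prompt: str) -> dict:
    """Build an OpenAI-style chat-completions payload for the model."""
    return {
        "model": "deepseek-r1-distill-qwen-32b",
        "messages": [{"role": "user", "content": prompt}],
    }


def complete(prompt: str, api_key: str) -> str:
    """Send the payload and return the first choice's message content."""
    payload = build_request(prompt)
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    key = os.environ.get("LANGDB_API_KEY")  # assumed env var name
    if key:
        print(complete("Solve: what is 17 * 24?", key))
    else:
        # No key set: just show the payload that would be sent.
        print(json.dumps(build_request("hello"), indent=2))
```

The same payload shape works with any OpenAI-compatible client library by pointing its base URL at the gateway.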
Related Models
Similar models from OpenRouter