DeepSeek-R1-Distill-Qwen-14B by togetherai - AI Model Details, Pricing, and Performance Metrics
DeepSeek-R1-Distill-Qwen-14B
DeepSeek-R1-Distill models are fine-tuned from open-source base models using samples generated by DeepSeek-R1.
Provider | Input | Output |
---|---|---|
togetherai | $1.60 / 1M tokens | $1.60 / 1M tokens |
openrouter | $0.88 / 1M tokens | $0.88 / 1M tokens |
Released: Jan 20, 2025
Knowledge Cutoff: Jul 24, 2024
Context: 131,072 tokens
Input: $1.60 / 1M tokens
Output: $1.60 / 1M tokens
Accepts: text
Returns: text
Provider | Context | Input Price | Output Price | Input Formats | Output Formats | License |
---|---|---|---|---|---|---|
togetherai | 131,072 tokens | $1.60 / 1M tokens | $1.60 / 1M tokens | text | text | MIT |
openrouter | 64K tokens | $0.88 / 1M tokens | $0.88 / 1M tokens | text | text | MIT |
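The per-1M-token prices in the table above translate directly into a request-cost formula. A minimal sketch, assuming the two provider names and prices listed above (the dictionary keys are illustrative labels, not official API identifiers):

```python
# Rough cost estimator for DeepSeek-R1-Distill-Qwen-14B usage.
# Prices are the per-1M-token rates from the provider table above;
# the provider-name keys are assumptions for illustration.

PRICES = {  # provider -> (input $/1M tokens, output $/1M tokens)
    "togetherai": (1.60, 1.60),
    "openrouter": (0.88, 0.88),
}

def estimate_cost(provider: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost of one request in USD."""
    in_price, out_price = PRICES[provider]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Example: a 2,000-token prompt with a 500-token completion on togetherai
print(round(estimate_cost("togetherai", 2_000, 500), 6))  # → 0.004
```

Because both providers here charge the same rate for input and output tokens, the cost reduces to total tokens times the per-token rate; the split is kept in the sketch so it still works for providers with asymmetric pricing.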
Access DeepSeek-R1-Distill-Qwen-14B through LangDB AI Gateway
Integrate with DeepSeek's DeepSeek-R1-Distill-Qwen-14B and 250+ other models through a unified API. Monitor usage, control costs, and enhance security.
- Unified API
- Cost Optimization
- Enterprise Security
Available from 2 providers
Benchmark Tests

Benchmark | Score | Category |
---|---|---|
AIME | 66.7 | Mathematics |
AAII | 29.7 | General |
AA Math Index | 55.7 | Mathematics |
GPQA | 59.1 | STEM (Physics, Chemistry, Biology) |
HLE | 4.4 | General Knowledge |
LiveCodeBench | 37.6 | Programming |
MATH-500 | 94.9 | Mathematics |
MMLU-Pro | 74.0 | General Knowledge |
SciCode | 23.9 | Scientific |
Compare with Similar Models
Code Examples
Integration samples and API usage
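As a starting point, the model can typically be reached through an OpenAI-compatible chat-completions endpoint. A minimal sketch using only the Python standard library; the base URL, model identifier, and API key below are placeholders, not confirmed values for any specific provider:

```python
import json
import urllib.request

# Sketch of calling DeepSeek-R1-Distill-Qwen-14B through an
# OpenAI-compatible chat-completions endpoint. BASE_URL and MODEL_ID
# are assumptions -- substitute your provider's documented values.

BASE_URL = "https://api.example.com/v1"                 # hypothetical endpoint
MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-14B"   # assumed model id

def build_chat_request(prompt: str, max_tokens: int = 512) -> dict:
    """Build an OpenAI-style chat-completions request payload."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("Prove that sqrt(2) is irrational.")

# Sending the request (requires a valid API key; not executed here):
# req = urllib.request.Request(
#     f"{BASE_URL}/chat/completions",
#     data=json.dumps(payload).encode(),
#     headers={"Authorization": "Bearer YOUR_API_KEY",
#              "Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])

print(payload["model"])
```

The same payload shape works with OpenAI-compatible client libraries by pointing their base URL at the provider's gateway; only the authorization header and model identifier change between providers.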
Related Models
Similar models from togetherai