Model Comparison
Compare performance, benchmarks, and characteristics
DeepSeek-R1-Distill-Qwen-1.5B
Provider: togetherai
Context: 131,072 tokens
Input Price: $0.18 / 1M tokens
Output Price: $0.18 / 1M tokens
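For reference, the Together AI deployment listed above can be reached through the provider's OpenAI-compatible endpoint. The snippet below is a minimal sketch, assuming the base URL `https://api.together.xyz/v1` and the model identifier `deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B`; verify the exact model string against the provider's catalog before use.

```python
# Minimal sketch: querying the listed model through Together's
# OpenAI-compatible endpoint. The model identifier is an assumption;
# check the provider's model catalog for the exact string.
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["TOGETHER_API_KEY"],   # Together AI API key
    base_url="https://api.together.xyz/v1",   # OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",  # assumed model ID
    messages=[{"role": "user", "content": "What is 12 * 13?"}],
    max_tokens=512,
)

print(response.choices[0].message.content)
```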
| Metric | DeepSeek-R1-Distill-Qwen-1.5B | Comparison model |
|---|---|---|
| **Pricing** | | |
| Input Price | $0.18 / 1M tokens | $2 / 1M tokens |
| Output Price | $0.18 / 1M tokens | $8 / 1M tokens |
| **Capabilities** | | |
| Context Window | 131,072 tokens | 1,047,576 tokens |
| Capabilities | tools | |
| Input Type | text | text, image |
| **Category Scores** | | |
| Overall Average | 21.1 | 56.8 |
| Academia | 21.2 | 54.9 |
| Finance | 25.9 | 55.5 |
| Marketing | 6.8 | 54.4 |
| Maths | 43.2 | 67.5 |
| Programming | 6.8 | 41.9 |
| Science | 33.8 | 66.5 |
| Writing | 10.1 | 39.2 |
| Vision | N/A | 74.8 |
| **Benchmark Tests** | | |
| AIME | 17.7 | 43.7 |
| AA Coding Index | 6.8 | 41.9 |
| AAII | 8.6 | 43.4 |
| AA Math Index | 43.2 | 67.5 |
| GPQA | 33.8 | 66.5 |
| HLE | 3.3 | 4.6 |
| LiveCodeBench | 7.0 | 45.7 |
| MATH-500 | 68.7 | 91.3 |
| MMLU | N/A | 90.2 |
| MMLU-Pro | 26.9 | 80.6 |
| MMMU | N/A | 74.8 |
| SciCode | 6.6 | 38.1 |
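As a worked example of what the per-1M-token rates above mean in practice, the sketch below estimates the cost of a single request under each model's listed pricing. The token counts are hypothetical, chosen only for illustration.

```python
# Sketch: estimating per-request cost from the listed per-1M-token rates.
# The token counts are hypothetical and only illustrate the price spread.

def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Return the USD cost of one request given per-1M-token prices."""
    return (input_tokens * input_price_per_m +
            output_tokens * output_price_per_m) / 1_000_000

# Rates from the comparison table above.
deepseek = {"input": 0.18, "output": 0.18}   # DeepSeek-R1-Distill-Qwen-1.5B
other = {"input": 2.00, "output": 8.00}      # comparison model

# Hypothetical request: 2,000 prompt tokens, 8,000 completion tokens
# (reasoning-style models tend to produce long outputs).
prompt, completion = 2_000, 8_000

print(f"DeepSeek distill:  ${request_cost(prompt, completion, deepseek['input'], deepseek['output']):.4f}")   # ≈ $0.0018
print(f"Comparison model:  ${request_cost(prompt, completion, other['input'], other['output']):.4f}")         # ≈ $0.0680
```

For this hypothetical request the distilled model comes out roughly 38x cheaper, while the category scores above show the corresponding quality gap.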