Model Comparison
Compare performance, benchmarks, and characteristics
llama-4-scout
deepinfra
Context: 327,680 tokens
Input Price: $0.08 / 1M tokens
Output Price: $0.30 / 1M tokens
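
As a rough illustration of what these per-token rates mean in practice, the sketch below estimates the cost of a single request at llama-4-scout's listed deepinfra prices. The `estimate_cost` helper and the example token counts are hypothetical; actual billing depends on the provider's own token accounting.

```python
# Rough cost estimate at the listed deepinfra rates for llama-4-scout:
# $0.08 per 1M input tokens, $0.30 per 1M output tokens.
# The helper name and the example token counts are illustrative only.

INPUT_PRICE_PER_M = 0.08   # USD per 1,000,000 input tokens
OUTPUT_PRICE_PER_M = 0.30  # USD per 1,000,000 output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 20,000-token prompt that produces a 1,500-token reply.
print(f"${estimate_cost(20_000, 1_500):.6f}")  # ≈ $0.002050
```
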
| Metric | llama-4-scout | |
|---|---|---|
| Pricing | | |
| Input Price | $0.08 / 1M tokens | $2.00 / 1M tokens |
| Output Price | $0.30 / 1M tokens | $8.00 / 1M tokens |
| Capabilities | | |
| Context Window | 327,680 tokens | 1,047,576 tokens |
| Capabilities | tools | |
| Input Type | text, image | text, image |
| Category Scores | | |
| Overall Average | 46.3 | 56.8 |
| Academia | 43.0 | 54.9 |
| Finance | 42.3 | 55.5 |
| Marketing | 43.9 | 54.4 |
| Maths | 56.4 | 67.5 |
| Programming | 23.5 | 41.9 |
| Science | 57.9 | 66.5 |
| Vision | 69.4 | 74.8 |
| Writing | 33.6 | 39.2 |
| Benchmark Tests | | |
| AIME | 28.3 | 43.7 |
| AA Coding Index | 23.5 | 41.9 |
| AAII | 28.1 | 43.4 |
| AA Math Index | 56.4 | 67.5 |
| GPQA | 57.9 | 66.5 |
| HLE | 4.3 | 4.6 |
| LiveCodeBench | 29.9 | 45.7 |
| MATH-500 | 84.4 | 91.3 |
| MMLU | 79.6 | 90.2 |
| MMLU-Pro | 74.8 | 80.6 |
| MMMU | 69.4 | 74.8 |
| SciCode | 17.0 | 38.1 |
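
Since the two context windows differ by roughly 3x (327,680 vs. 1,047,576 tokens), a simple pre-flight check like the one sketched below can tell whether a prompt of a given size will fit while leaving headroom for the reply. The function, the placeholder name for the second model, and the headroom figure are illustrative assumptions; real token counts depend on each model's tokenizer.

```python
# Pre-flight check: will a prompt of `prompt_tokens` fit in a model's context
# window while still leaving room for `max_output_tokens` of generation?
# The context sizes come from the table above; everything else is illustrative.

CONTEXT_WINDOWS = {
    "llama-4-scout": 327_680,
    "comparison-model": 1_047_576,  # second column in the table (name not given)
}

def fits_in_context(model: str, prompt_tokens: int, max_output_tokens: int = 4_096) -> bool:
    """True if prompt plus reserved output tokens fit within the model's window."""
    return prompt_tokens + max_output_tokens <= CONTEXT_WINDOWS[model]

# Example: a 400,000-token prompt overflows llama-4-scout but fits the larger window.
print(fits_in_context("llama-4-scout", 400_000))     # False
print(fits_in_context("comparison-model", 400_000))  # True
```
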