Model Comparison
Compare performance, benchmarks, and characteristics
llama-4-scout (deepinfra)
Context: 327,680 tokens
Input Price: $0.08 / 1M tokens
Output Price: $0.30 / 1M tokens
| Metric | llama-4-scout | |
|---|---|---|
| **Pricing** | | |
| Input Price | $0.08 / 1M tokens | $3 / 1M tokens |
| Output Price | $0.30 / 1M tokens | $15 / 1M tokens |
| **Capabilities** | | |
| Context Window | 327,680 tokens | 256K tokens |
| Capabilities | tools | |
| Input Type | text, image | text |
| **Category Scores** | | |
| Overall Average | 46.3 | 71.3 |
| Academia | 43.0 | 76.4 |
| Finance | 42.3 | 81.0 |
| Marketing | 43.9 | 51.5 |
| Maths | 56.4 | 96.7 |
| Programming | 23.5 | 63.8 |
| Science | 57.9 | 87.6 |
| Vision | 69.4 | N/A |
| Writing | 33.6 | 42.2 |
| **Benchmark Tests** | | |
| AIME | 28.3 | 94.3 |
| AA Coding Index | 23.5 | 63.8 |
| AAII | 28.1 | 65.3 |
| AA Math Index | 56.4 | 96.7 |
| GPQA | 57.9 | 87.6 |
| HLE | 4.3 | 23.9 |
| LiveCodeBench | 29.9 | 81.9 |
| MATH-500 | 84.4 | 99.0 |
| MMLU | 79.6 | N/A |
| MMLU-Pro | 74.8 | 86.6 |
| MMMU | 69.4 | N/A |
| SciCode | 17.0 | 45.7 |
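The per-million-token prices in the table scale linearly with usage. A minimal sketch of that cost arithmetic, using the llama-4-scout rates shown above (the constants and helper name here are illustrative, not part of any provider SDK):

```python
# Estimate the USD cost of one request from per-1M-token prices.
# Rates below are the llama-4-scout column from the comparison table.
INPUT_PRICE_PER_M = 0.08   # USD per 1M input (prompt) tokens
OUTPUT_PRICE_PER_M = 0.30  # USD per 1M output (completion) tokens

def request_cost(input_tokens: int, output_tokens: int,
                 in_price: float = INPUT_PRICE_PER_M,
                 out_price: float = OUTPUT_PRICE_PER_M) -> float:
    """Return the cost in USD of a single request."""
    return (input_tokens / 1_000_000) * in_price \
         + (output_tokens / 1_000_000) * out_price

# Example: a 10K-token prompt with a 2K-token completion.
print(round(request_cost(10_000, 2_000), 6))  # 0.0014
```

Note how heavily the output rate dominates for the second model ($15 vs $3 per 1M): swapping its rates into the same helper makes a long completion roughly five times as expensive per token as the prompt that produced it.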