# Model Comparison

Compare performance, benchmarks, and characteristics.
## llama-3.1-8b-instruct (deepinfra)

- Context: 131,072 tokens
- Input Price: $0.01 / 1M tokens
- Output Price: $0.02 / 1M tokens
| Metric | llama-3.1-8b-instruct | (unnamed model) |
|---|---|---|
| **Pricing** | | |
| Input Price | $0.01 / 1M tokens | $2 / 1M tokens |
| Output Price | $0.02 / 1M tokens | $8 / 1M tokens |
| **Capabilities** | | |
| Context Window | 131,072 tokens | 1,047,576 tokens |
| Capabilities | tools | tools |
| Input type | text | text, image |
| **Category Scores** | | |
| Overall Average | 19.3 | 52.2 |
| Academia | 22.5 | 54.9 |
| Finance | 10.6 | 39.0 |
| Marketing | 28.4 | 59.0 |
| Maths | 4.3 | 34.7 |
| Programming | 8.5 | 32.2 |
| Science | 33.3 | 66.1 |
| Writing | 27.8 | 56.6 |
| Vision | N/A | 74.8 |
| **Benchmark Tests** | | |
| AIME | 7.7 | 43.7 |
| AA Coding Index | 8.5 | 32.2 |
| AAII | 16.9 | 43.4 |
| AA Math Index | 4.3 | 34.7 |
| DROP | 59.5 | N/A |
| GPQA | 28.1 | 66.5 |
| HLE | 5.1 | 4.6 |
| HumanEval | 72.6 | N/A |
| LiveCodeBench | 11.6 | 45.7 |
| MATH-500 | 51.9 | 91.3 |
| MMLU | 69.4 | 90.2 |
| MMLU-Pro | 48.0 | 80.6 |
| MMMU | N/A | 74.8 |
| SciCode | 13.2 | 38.1 |
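
The pricing rows above are quoted in USD per 1M tokens. A minimal sketch of turning them into a per-request cost estimate (the helper name and the token counts are illustrative, not part of any provider API):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Return the USD cost of one request, given per-1M-token prices."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# llama-3.1-8b-instruct on deepinfra: $0.01 input / $0.02 output per 1M tokens
cost = request_cost(4_000, 1_000, 0.01, 0.02)
print(f"${cost:.6f}")  # 4k input + 1k output tokens -> $0.000060
```

At these rates even a 4k-input, 1k-output request costs a fraction of a cent, which is why the per-1M-token unit is the standard way to quote prices.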