Model Comparison
Compare performance, benchmarks, and characteristics
**llama-3.1-405b-instruct** (deepinfra)

- Context: 32,768 tokens
- Input Price: $0.80 / 1M tokens
- Output Price: $0.80 / 1M tokens
| Metric | llama-3.1-405b-instruct | Comparison model |
|---|---|---|
| **Pricing** | | |
| Input Price | $0.80 / 1M tokens | $3.00 / 1M tokens |
| Output Price | $0.80 / 1M tokens | $15.00 / 1M tokens |
| **Capabilities** | | |
| Context Window | 32,768 tokens | 256K tokens |
| Capabilities | tools | tools |
| Input Type | text | text |
| **Category Scores** | | |
| Overall Average | 40.2 | 71.3 |
| Academia | 38.4 | 76.4 |
| Finance | 35.8 | 81.0 |
| Marketing | 46.0 | 51.5 |
| Maths | 45.8 | 96.7 |
| Programming | 30.2 | 63.8 |
| Science | 51.1 | 87.6 |
| Writing | 34.2 | 42.2 |
| **Benchmark Tests** | | |
| AIME | 21.3 | 94.3 |
| AA Coding Index | 30.2 | 63.8 |
| AAII | 25.7 | 65.3 |
| AA Math Index | 45.8 | 96.7 |
| DROP | 84.8 | n/a |
| GPQA | 51.1 | 87.6 |
| HLE | 4.2 | 23.9 |
| HumanEval | 89.0 | n/a |
| LiveCodeBench | 30.5 | 81.9 |
| MATH-500 | 70.3 | 99.0 |
| MMLU | 87.3 | n/a |
| MMLU-Pro | 73.3 | 86.6 |
| SciCode | 29.9 | 45.7 |
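The per-1M-token prices in the table translate directly into per-request cost. A minimal sketch of that arithmetic, using only the prices listed above (the `PRICES` dict, `request_cost` function, and the `"comparison-model"` label for the unnamed second column are illustrative, not part of any provider API):

```python
# Estimate the USD cost of one request from per-1M-token prices.
# Prices below are taken from the comparison table above.

PRICES = {
    # model label: (input USD per 1M tokens, output USD per 1M tokens)
    "llama-3.1-405b-instruct": (0.80, 0.80),
    "comparison-model": (3.00, 15.00),  # second table column (model unnamed)
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD: each token count scaled by its per-1M-token price."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Example: a request with 10,000 input tokens and 2,000 output tokens.
llama = request_cost("llama-3.1-405b-instruct", 10_000, 2_000)
other = request_cost("comparison-model", 10_000, 2_000)
print(f"${llama:.4f} vs ${other:.4f}")  # → $0.0096 vs $0.0600
```

Note how output-heavy workloads widen the gap: the second model's output price ($15/1M) is 5x its input price, while llama-3.1-405b-instruct charges the same rate in both directions.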