Model Comparison
Compare performance, benchmarks, and characteristics
claude-3-opus-20240229 (anthropic)
Context: 200K tokens
Input Price: $5 / 1M tokens
Output Price: $75 / 1M tokens
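The per-million-token prices above translate directly into a per-request cost. A minimal sketch of that arithmetic, using the rates listed for claude-3-opus-20240229 on this page and a made-up request size of 2,000 input and 500 output tokens:

```python
# Rough per-request cost from the per-million-token prices listed above.
# The prices are the ones shown on this page; the request size
# (2,000 input / 500 output tokens) is a hypothetical example.

INPUT_PRICE_PER_MTOK = 5.0    # $ per 1M input tokens (as listed)
OUTPUT_PRICE_PER_MTOK = 75.0  # $ per 1M output tokens (as listed)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request at the listed rates."""
    return (input_tokens * INPUT_PRICE_PER_MTOK
            + output_tokens * OUTPUT_PRICE_PER_MTOK) / 1_000_000

print(f"${request_cost(2_000, 500):.4f}")  # -> $0.0475
```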
| Metric | claude-3-opus-20240229 | Comparison model |
|---|---|---|
| **Pricing** | | |
| Input Price | $5 / 1M tokens | $3 / 1M tokens |
| Output Price | $75 / 1M tokens | $15 / 1M tokens |
| **Capabilities** | | |
| Context Window | 200K tokens | 256K tokens |
| Capabilities | tools | tools |
| Input type | text | text |
| **Category Scores** | | |
| Overall Average | 36.0 | 71.9 |
| Academia | 36.7 | 77.5 |
| Finance | 28.7 | 82.1 |
| Marketing | 44.8 | 53.0 |
| Maths | 33.7 | 96.7 |
| Programming | 25.6 | 63.8 |
| Science | 49.6 | 87.6 |
| Writing | 33.1 | 42.5 |
| **Benchmark Tests** | | |
| AIME | 3.3 | 94.3 |
| AA Coding Index | 25.6 | 63.8 |
| AAII | 23.7 | 67.5 |
| AA Math Index | 33.7 | 96.7 |
| DROP | 83.1 | n/a |
| GPQA | 49.6 | 87.6 |
| HLE | 3.1 | 23.9 |
| HumanEval | 84.9 | n/a |
| LiveCodeBench | 27.9 | 81.9 |
| MATH-500 | 64.1 | 99.0 |
| MMLU | 86.8 | n/a |
| MMLU-Pro | 69.0 | 86.6 |
| SciCode | 23.3 | 45.7 |
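The Overall Average row appears to be the unweighted mean of the seven category scores. A quick sketch to verify that against the numbers in the Category Scores section above (the second column is labeled "comparison model" here only because the page does not name it):

```python
# Sanity check: "Overall Average" matches the plain mean of the seven
# category scores copied from the table above.

category_scores = {
    "claude-3-opus-20240229": [36.7, 28.7, 44.8, 33.7, 25.6, 49.6, 33.1],
    "comparison model":       [77.5, 82.1, 53.0, 96.7, 63.8, 87.6, 42.5],
}

for name, scores in category_scores.items():
    print(f"{name}: {sum(scores) / len(scores):.1f}")
# -> 36.0 and 71.9, matching the Overall Average row
```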