Model Comparison
Compare performance, benchmarks, and characteristics
claude-opus-4 (anthropic)
Context: 200K tokens
Input Price: $15 / 1M tokens
Output Price: $75 / 1M tokens
| Metric | (unnamed model) | claude-opus-4 |
|---|---|---|
| **Pricing** | | |
| Input Price | $1 / 1M tokens | $15 / 1M tokens |
| Output Price | $3 / 1M tokens | $75 / 1M tokens |
| **Capabilities** | | |
| Context Window | 131,072 tokens (128K) | 200K tokens |
| Capabilities | tools | tools |
| Input type | text | text, image |
| **Category Scores** | | |
| Overall Average | 56.7 | 51.1 |
| Academia | 63.3 | 56.2 |
| Finance | 53.8 | 39.3 |
| Marketing | 57.6 | 53.0 |
| Maths | 57.3 | 36.3 |
| Programming | 38.1 | N/A |
| Science | 70.8 | 69.1 |
| Writing | 56.1 | 52.4 |
| **Benchmark Tests** | | |
| AIME | N/A | 56.3 |
| AA Coding Index | 38.1 | N/A |
| AAII | 50.4 | 42.3 |
| AA Math Index | 57.3 | 36.3 |
| GPQA | 76.3 | 70.1 |
| HLE | 6.3 | 5.9 |
| HumanEval | 94.5 | N/A |
| LiveCodeBench | 61.0 | 54.2 |
| MATH-500 | N/A | 94.1 |
| MMLU | 90.2 | N/A |
| MMLU-Pro | 82.2 | 86.0 |
| SciCode | 30.7 | 40.9 |
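Two numbers in the table above can be worked out directly: request cost from the listed per-million-token prices, and the Overall Average as the mean of the available category scores (skipping N/A entries). A minimal sketch, using the table's own figures; the helper names are illustrative, not part of any API:

```python
def request_cost(input_tokens, output_tokens, input_price_per_m, output_price_per_m):
    """Dollar cost of one request at the listed $X / 1M token rates."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# claude-opus-4 at $15 input / $75 output per 1M tokens:
# a 10K-token prompt with a 2K-token reply costs $0.30.
print(f"${request_cost(10_000, 2_000, 15.0, 75.0):.2f}")  # $0.30

def overall_average(scores):
    """Mean of category scores, ignoring missing (None) entries."""
    present = [s for s in scores if s is not None]
    return sum(present) / len(present)

# Category scores in table order: Academia, Finance, Marketing,
# Maths, Programming, Science, Writing (None = N/A).
model_a = [63.3, 53.8, 57.6, 57.3, 38.1, 70.8, 56.1]
opus_4  = [56.2, 39.3, 53.0, 36.3, None, 69.1, 52.4]

print(f"{overall_average(model_a):.1f}")  # 56.7
print(f"{overall_average(opus_4):.2f}")   # matches the table's 51.1 after rounding
```

This reproduces both Overall Average rows, which confirms the average is taken only over categories with a reported score rather than treating N/A as zero.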