Model Comparison
Compare performance, benchmarks, and characteristics
claude-opus-4 (anthropic)
- Context: 200K tokens
- Input Price: $15 / 1M tokens
- Output Price: $75 / 1M tokens
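As an illustration of how the per-1M-token rates above translate into per-request cost, here is a minimal sketch. The `request_cost` helper is hypothetical (not part of any API); it just applies the listed claude-opus-4 rates of $15 per 1M input tokens and $75 per 1M output tokens.

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price: float = 15.0, output_price: float = 75.0) -> float:
    """Dollar cost of one request, given prices in $ per 1M tokens."""
    return (input_tokens / 1_000_000 * input_price
            + output_tokens / 1_000_000 * output_price)

# Example: a request with 2,000 input tokens and 500 output tokens.
print(f"${request_cost(2_000, 500):.4f}")  # $0.0675
```

At these rates, output tokens dominate cost: each output token is five times the price of an input token.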
| Metric | Model 1 | claude-opus-4 |
|---|---|---|
| **Pricing** | | |
| Input Price | $15 / 1M tokens | $15 / 1M tokens |
| Output Price | $60 / 1M tokens | $75 / 1M tokens |
| **Capabilities** | | |
| Context Window | 200K tokens | 200K tokens |
| Capabilities | tools | tools |
| Input Type | text, image | text, image |
| **Category Scores** | | |
| Overall Average | 65.1 | 51.1 |
| Academia | 61.8 | 56.2 |
| Marketing | 73.9 | 53.0 |
| Programming | 38.6 | N/A |
| Science | 72.3 | 69.1 |
| Vision | 77.6 | N/A |
| Writing | 66.5 | 52.4 |
| Finance | N/A | 39.3 |
| Maths | N/A | 36.3 |
| **Benchmark Tests** | | |
| AIME | 72.3 | 56.3 |
| AA Coding Index | 38.6 | N/A |
| AAII | 47.2 | 42.3 |
| AA Math Index | N/A | 36.3 |
| GPQA | 76.4 | 70.1 |
| HLE | 7.7 | 5.9 |
| HumanEval | 88.1 | N/A |
| LiveCodeBench | 67.9 | 54.2 |
| MATH-500 | 97.0 | 94.1 |
| MMLU | 91.8 | N/A |
| MMLU-Pro | 84.1 | 86.0 |
| MMMU | 77.6 | N/A |
| SciCode | 35.8 | 40.9 |
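The Overall Average rows appear consistent with a simple mean of each model's available category scores, with N/A categories excluded. A quick check, assuming the column whose pricing matches the claude-opus-4 card ($75 / 1M output tokens) is claude-opus-4 and labeling the other column generically:

```python
# Category scores from the table, N/A entries omitted.
claude_opus_4 = [56.2, 53.0, 69.1, 52.4, 39.3, 36.3]  # Academia, Marketing, Science, Writing, Finance, Maths
other_model   = [61.8, 73.9, 38.6, 72.3, 77.6, 66.5]  # Academia, Marketing, Programming, Science, Vision, Writing

mean = lambda xs: sum(xs) / len(xs)
print(f"claude-opus-4: {mean(claude_opus_4):.2f}")  # ~51.05, reported as 51.1
print(f"other model:   {mean(other_model):.2f}")    # ~65.12, reported as 65.1
```

Both means agree with the reported Overall Average values to one decimal place, which suggests the aggregate is an unweighted mean over whichever categories a model was scored on.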