Model Comparison
Compare performance, benchmarks, and characteristics
claude-opus-4 (Anthropic)
Context: 200K tokens
Input Price: $15 / 1M tokens
Output Price: $75 / 1M tokens
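To make the per-token rates above concrete, here is a minimal sketch that estimates the dollar cost of a single request from the listed per-1M-token prices. The token counts in the example are illustrative assumptions, not measurements.

```python
# Rough cost estimate from the per-1M-token pricing on the claude-opus-4 card above.
INPUT_PRICE_PER_M = 15.0   # $ per 1M input tokens
OUTPUT_PRICE_PER_M = 75.0  # $ per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in dollars for one request."""
    return (
        (input_tokens / 1_000_000) * INPUT_PRICE_PER_M
        + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M
    )

# Example: a 20K-token prompt with a 1K-token reply (assumed sizes)
print(f"${request_cost(20_000, 1_000):.4f}")  # -> $0.3750
```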
| Metric | Comparison model | claude-opus-4 |
|---|---|---|
| Pricing | | |
| Input Price | $2 / 1M tokens | $15 / 1M tokens | 
| Output Price | $10 / 1M tokens | $75 / 1M tokens | 
| Capabilities | | |
| Context Window | 131,072 tokens | 200,000 tokens | 
| Capabilities | tools | tools | 
| Input type | text | text, image | 
| Category Scores | | |
| Overall Average | 66.3 | 51.1 | 
| Science | 64.7 | 69.1 | 
| Vision | 66.1 | N/A | 
| Writing | 68.2 | 52.4 | 
| Academia | N/A | 56.2 | 
| Finance | N/A | 39.3 | 
| Marketing | N/A | 53.0 | 
| Maths | N/A | 36.3 | 
| Benchmark Tests | | |
| AIME | N/A | 56.3 |
| AAII | N/A | 42.3 |
| AA Math Index | N/A | 36.3 |
| GPQA | 56.0 | 70.1 |
| HLE | N/A | 5.9 |
| HumanEval | 88.4 | N/A |
| LiveCodeBench | N/A | 54.2 |
| MATH-500 | N/A | 94.1 |
| MMLU | 87.5 | N/A |
| MMLU-Pro | 75.5 | 86.0 |
| MMMU | 66.1 | N/A |
| SciCode | N/A | 40.9 |
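Putting the table's pricing and context-window figures together, the sketch below compares what the same hypothetical workload would cost on each model and whether it fits within each context window. The first model is unnamed in the table, so it is labeled `model_1` here; the token counts are assumed, and treating the window as a combined input-plus-output budget is a simplifying assumption.

```python
# Compare per-request cost and context fit using the figures from the table above.
# "model_1" is the unnamed first column; token counts below are assumptions.
MODELS = {
    "model_1":       {"in": 2.0,  "out": 10.0, "context": 131_072},
    "claude-opus-4": {"in": 15.0, "out": 75.0, "context": 200_000},
}

def compare(input_tokens: int, output_tokens: int) -> None:
    total = input_tokens + output_tokens
    for name, m in MODELS.items():
        cost = (input_tokens * m["in"] + output_tokens * m["out"]) / 1_000_000
        fits = total <= m["context"]  # simplified input+output budget
        print(f"{name}: ${cost:.4f}, fits in context window: {fits}")

# Example workload: 150K-token prompt, 4K-token completion (assumed sizes)
compare(150_000, 4_000)
# model_1 exceeds its 131,072-token window at this size; claude-opus-4 does not.
```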