Model Comparison
Compare performance, benchmarks, and characteristics
claude-3-5-sonnet-20240620
Provider: anthropic
Context: 200K tokens
Input Price: $3 / 1M tokens
Output Price: $15 / 1M tokens
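Both prices are quoted per million tokens, so the cost of a single request is a linear combination of its input and output token counts. Below is a minimal sketch of that arithmetic using this model's rates; the function name and example token counts are illustrative, not part of any published API.

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Cost in USD for one request, given per-1M-token rates."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# claude-3-5-sonnet-20240620: $3 / 1M input tokens, $15 / 1M output tokens.
# 10,000 input tokens + 2,000 output tokens -> $0.03 + $0.03 = $0.06.
print(f"${request_cost(10_000, 2_000, 3.0, 15.0):.4f}")  # $0.0600
```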
| Metric | Model A | claude-3-5-sonnet-20240620 |
|---|---|---|
| Pricing | | |
| Input Price | $2 / 1M tokens | $3 / 1M tokens |
| Output Price | $8 / 1M tokens | $15 / 1M tokens |
| Capabilities | | |
| Context Window | 1,047,576 tokens | 200K tokens |
| Capabilities | tools | tools |
| Input Type | text, image | text, image |
| Category Scores | | |
| Overall Average | 52.2 | 51.1 |
| Academia | 54.9 | 44.8 |
| Finance | 39.0 | N/A |
| Marketing | 59.0 | 63.2 |
| Maths | 34.7 | N/A |
| Programming | 32.2 | 30.2 |
| Science | 66.1 | 60.3 |
| Vision | 74.8 | N/A |
| Writing | 56.6 | 57.3 |
| Benchmark Tests | | |
| AIME | 43.7 | 15.7 |
| AA Coding Index | 32.2 | 30.2 |
| AAII | 43.4 | 29.9 |
| AA Math Index | 34.7 | N/A |
| DROP | N/A | 87.1 |
| GPQA | 66.5 | 59.7 |
| HLE | 4.6 | 3.9 |
| HumanEval | N/A | 92.0 |
| LiveCodeBench | 45.7 | 38.1 |
| MATH-500 | 91.3 | 77.1 |
| MMLU-Pro | 80.6 | 77.2 |
| MMMU | 74.8 | N/A |
| SciCode | 38.1 | 36.6 |
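The Overall Average row appears to be an unweighted mean of the non-N/A category scores; this is an inference from the figures above, not a documented formula. A quick sanity check, with the unnamed first model labelled Model A as in the table header:

```python
# Category scores copied from the table above; None marks an N/A entry.
category_scores = {
    "Model A": [54.9, 39.0, 59.0, 34.7, 32.2, 66.1, 74.8, 56.6],
    "claude-3-5-sonnet-20240620": [44.8, None, 63.2, None, 30.2, 60.3, None, 57.3],
}

for model, scores in category_scores.items():
    present = [s for s in scores if s is not None]  # ignore N/A categories
    print(f"{model}: {sum(present) / len(present):.2f}")

# Model A: 52.16                      (shown as 52.2)
# claude-3-5-sonnet-20240620: 51.16   (shown as 51.1; the displayed category
#                                      scores are themselves rounded)
```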