claude-opus-4.1 by Anthropic - AI Model Details, Pricing, and Performance Metrics
claude-opus-4.1
Claude Opus 4.1 is an updated version of Anthropic’s flagship model, offering improved performance in coding, reasoning, and agentic tasks. It achieves 74.5% on SWE-bench Verified and shows notable gains in multi-file code refactoring, debugging precision, and detail-oriented reasoning. The model supports extended thinking up to 64K tokens and is optimized for tasks involving research, data analysis, and tool-assisted reasoning.
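As a rough illustration of the extended-thinking capability mentioned above, here is a minimal sketch using the Anthropic Python SDK. The model ID string and the token budgets below are placeholder assumptions, not values taken from this page; check Anthropic's documentation for the exact identifiers and limits.

```python
# Minimal sketch: extended thinking with the Anthropic Python SDK.
# Assumptions: the model ID "claude-opus-4-1" and the token budgets below
# are placeholders; consult Anthropic's docs for exact values.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-opus-4-1",        # assumed model ID for Claude Opus 4.1
    max_tokens=16000,               # must exceed the thinking budget
    thinking={
        "type": "enabled",
        "budget_tokens": 10000,     # extended thinking budget (up to 64K per the description)
    },
    messages=[
        {"role": "user", "content": "Refactor this function across files and explain the changes."}
    ],
)

# The response interleaves thinking blocks and text blocks.
for block in response.content:
    if block.type == "thinking":
        print("[thinking]", block.thinking[:200], "...")
    elif block.type == "text":
        print(block.text)
```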
Access claude-opus-4.1 through LangDB AI Gateway
Integrate with Anthropic's claude-opus-4.1 and 250+ other models through a unified API. Monitor usage, control costs, and enhance security.
Free tier available • No credit card required
Statistics
Benchmark Tests
| Metric | AIME | AA Coding Index | AAII | AA Math Index | GPQA | HLE | LiveCodeBench | MATH-500 | MMLU-Pro | SciCode |
|---|---|---|---|---|---|---|---|---|---|---|
| Score | 56.3 | 47.5 | 42.3 | 36.3 | 79.6 | 5.9 | 54.2 | 94.1 | 86.0 | 40.9 |
Code Examples
Integration samples and API usage
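No samples were included on this page, so the following is a minimal sketch assuming the gateway exposes an OpenAI-compatible chat completions endpoint; the base URL, API key, and headers are placeholders and should be replaced with values from LangDB's own documentation.

```python
# Minimal sketch: calling claude-opus-4.1 through a unified, OpenAI-compatible gateway API.
# Assumptions: the base URL below is a placeholder, and the gateway accepts the
# model slug shown on this page; consult LangDB's docs for the actual endpoint,
# authentication, and any project or tracing headers.
from openai import OpenAI

client = OpenAI(
    base_url="https://<your-langdb-gateway-host>/v1",  # placeholder gateway endpoint
    api_key="<LANGDB_API_KEY>",                         # gateway-issued key, not a provider key
)

completion = client.chat.completions.create(
    model="claude-opus-4.1",  # model slug as listed on this page
    messages=[
        {"role": "system", "content": "You are a precise coding assistant."},
        {"role": "user", "content": "Summarize the trade-offs of multi-file refactoring."},
    ],
)

print(completion.choices[0].message.content)
```

Because the gateway presents one unified API, switching among the 250+ supported models is typically just a change to the `model` string; the client configuration stays the same.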