anthropic / claude-opus-4.1 (completions)

Claude Opus 4.1 is an updated version of Anthropic’s flagship model, offering improved performance in coding, reasoning, and agentic tasks. It achieves 74.5% on SWE-bench Verified and shows notable gains in multi-file code refactoring, debugging precision, and detail-oriented reasoning. The model supports extended thinking up to 64K tokens and is optimized for tasks involving research, data analysis, and tool-assisted reasoning.
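The 64K extended-thinking budget noted above is enabled per request through the Messages API. A minimal sketch with the Anthropic Python SDK follows; the `claude-opus-4-1` model alias and the prompt are assumptions, and `max_tokens` must exceed the thinking budget.

```python
# Minimal sketch: extended thinking with the Anthropic Python SDK.
# Assumptions: the `claude-opus-4-1` alias resolves to Claude Opus 4.1
# and ANTHROPIC_API_KEY is set in the environment.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-opus-4-1",
    max_tokens=20_000,  # must exceed the thinking budget below
    # Budgets can be raised toward the 64K ceiling mentioned above.
    thinking={"type": "enabled", "budget_tokens": 16_000},
    messages=[{"role": "user", "content": "Plan a multi-file refactor of a parser module."}],
)

# The response interleaves thinking blocks with the final text blocks.
for block in response.content:
    if block.type == "thinking":
        print("[thinking]", block.thinking[:200], "...")
    elif block.type == "text":
        print(block.text)
```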

Input: $15 / 1M tokens ($1.50 / 1M tokens cached read; $18.75 / 1M tokens cached write)
Output: $75 / 1M tokens
Context: 200K tokens
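For concreteness, a small sketch of what these rates imply for a single 100K-token prompt (the token count is illustrative):

```python
# Cost of a 100K-token prompt under the rates above (USD per 1M tokens).
PROMPT_TOKENS = 100_000
RATE_INPUT, RATE_CACHED_READ, RATE_CACHED_WRITE = 15.0, 1.50, 18.75

def cost(rate_per_million: float, tokens: int) -> float:
    return rate_per_million * tokens / 1_000_000

print(f"uncached input: ${cost(RATE_INPUT, PROMPT_TOKENS):.3f}")        # $1.500
print(f"cache write:    ${cost(RATE_CACHED_WRITE, PROMPT_TOKENS):.4f}") # $1.8750
print(f"cached read:    ${cost(RATE_CACHED_READ, PROMPT_TOKENS):.3f}")  # $0.150
```

Writing a prompt to the cache costs 25% more than sending it once uncached, but each subsequent cached read of that prompt costs a tenth of the uncached rate.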
Capabilities: tools, text, image
Category Rankings
  • academia: #15
  • finance: #19
  • programming: #13
  • science: #16

Access claude-opus-4.1 through LangDB AI Gateway


Integrate with Anthropic's claude-opus-4.1 and 250+ other models through a unified API. Monitor usage, control costs, and enhance security.

Unified API · Cost Optimization · Enterprise Security

Free tier available • No credit card required

Instant Setup · 99.9% Uptime · 10,000+ Monthly Requests
Code Example
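Below is a minimal request sketch against an OpenAI-compatible gateway endpoint, using the headers and parameters listed under Configuration. The base URL, API key, model slug, and the `X-Project-Id` header name are placeholders (the page only says the project ID goes in a header); copy real values from your dashboard.

```python
# Minimal sketch: calling claude-opus-4.1 through an OpenAI-compatible gateway.
from openai import OpenAI

client = OpenAI(
    base_url="https://your-gateway-host/v1",  # placeholder: see the Base URL field below
    api_key="YOUR_LANGDB_API_KEY",            # placeholder: see API Keys below
)

response = client.chat.completions.create(
    model="anthropic/claude-opus-4.1",        # assumed model slug
    messages=[{"role": "user", "content": "Explain cached read vs. cached write pricing."}],
    # Parameters from the Model Parameters list:
    max_tokens=1024,
    temperature=0.7,                          # accepted range 0-2
    stop=["\n\nUser:"],                       # illustrative stop sequence
    extra_headers={
        "X-Project-Id": "YOUR_PROJECT_ID",    # assumed name for the project-ID header
        "X-Run-Id": "run-123",                # run-tracing header from the Headers list
        "X-Thread-Id": "thread-456",          # thread-grouping header from the Headers list
    },
    extra_body={"include_reasoning": True},   # listed parameter; exact semantics are gateway-specific
)
print(response.choices[0].message.content)
```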
Configuration
Base URL
API Keys
Headers
Project ID in header
X-Run-Id
X-Thread-Id
Model Parameters (6 available)
include_reasoning
max_tokens
stop
temperature (range 0–2)
tool_choice
tools
Additional Configuration
Tools
Guards
Publicly Shared Threads: 0

Popular Models (10)
  • deepseek
    DeepSeek V3, a 685B-parameter mixture-of-experts model, is the latest iteration of the flagship chat model family from the DeepSeek team. It succeeds the earlier [DeepSeek V3](/deepseek/deepseek-chat-v3) release and performs well across a wide variety of tasks.
    Input: $0.25 / 1M tokens
    Output: $0.85 / 1M tokens
    Context: 163,840 tokens
    Capabilities: text
  • qwen
    Qwen3-235B-A22B-Instruct-2507 is a multilingual, instruction-tuned mixture-of-experts language model based on the Qwen3-235B architecture, with 22B active parameters per forward pass. It is optimized for general-purpose text generation, including instruction following, logical reasoning, math, code, and tool usage. The model supports a native 262K context length and does not implement "thinking mode" (<think> blocks). Compared to its base variant, this version delivers significant gains in knowledge coverage, long-context reasoning, coding benchmarks, and alignment with open-ended tasks. It is particularly strong on multilingual understanding, math reasoning (e.g., AIME, HMMT), and alignment evaluations like Arena-Hard and WritingBench.
    Input: $0.12 / 1M tokens
    Output: $0.12 / 1M tokens
    Context: 262,144 tokens
    Capabilities: text
  • deepseek
    May 28th update to the [original DeepSeek R1](/deepseek/deepseek-r1). Performance is on par with [OpenAI o1](/openai/o1), but the model is fully open-source with open reasoning tokens. It is 671B parameters in size, with 37B active per inference pass.
    Input: $0.27 / 1M tokens
    Output: $0.27 / 1M tokens
    Context: 163,840 tokens
    Capabilities: text
  • anthropic
    Intelligent model with visible step-by-step reasoning.
    Input: $3 / 1M tokens
    Output: $15 / 1M tokens
    Context: 200K tokens
    Capabilities: tools, text, image
  • aion-labs
    Aion-RP-Llama-3.1-8B ranks highest on the character-evaluation portion of the RPBench-Auto benchmark, a roleplay-specific variant of Arena-Hard-Auto in which LLMs evaluate each other's responses. It is a fine-tuned base model rather than an instruct model, designed to produce more natural and varied writing.
    Input: $0.20 / 1M tokens
    Output: $0.20 / 1M tokens
    Context: 32,768 tokens
    Capabilities: text
  • microsoft
    mai-ds-r1
    MAI-DS-R1 is a post-trained variant of DeepSeek-R1 developed by the Microsoft AI team to improve the model’s responsiveness on previously blocked topics while enhancing its safety profile. Built on top of DeepSeek-R1’s reasoning foundation, it integrates 110k examples from the Tulu-3 SFT dataset and 350k internally curated multilingual safety-alignment samples. The model retains strong reasoning, coding, and problem-solving capabilities, while unblocking a wide range of prompts previously restricted in R1. MAI-DS-R1 demonstrates improved performance on harm mitigation benchmarks and maintains competitive results across general reasoning tasks. It surpasses R1-1776 in satisfaction metrics for blocked queries and reduces leakage in harmful content categories. The model is based on a transformer MoE architecture and is suitable for general-purpose use cases, excluding high-stakes domains such as legal, medical, or autonomous systems.
    Input: $0.30 / 1M tokens
    Output: $0.30 / 1M tokens
    Context: 163,840 tokens
    Capabilities: text
  • deepseek
    deepseek-r1
    DeepSeek R1 is here: performance on par with [OpenAI o1](/openai/o1), but open-sourced and with fully open reasoning tokens. It is 671B parameters in size, with 37B active per inference pass. Fully open-source, with a [technical report](https://api-docs.deepseek.com/news/news250120) and an MIT license: distill and commercialize freely!
    Input: $0.40 / 1M tokens
    Output: $2 / 1M tokens
    Context: 163,840 tokens
    Capabilities: text
  • openai
    gpt-4.1
    GPT-4.1 is OpenAI's flagship model for complex tasks. It is well suited for problem solving across domains.
    Input: $2 / 1M tokens
    Output: $8 / 1M tokens
    Context: 1,047,576 tokens
    Capabilities: tools, text, image
  • nousresearch
    DeepHermes 3 (Mistral 24B Preview) is an instruction-tuned language model by Nous Research based on Mistral-Small-24B, designed for chat, function calling, and advanced multi-turn reasoning. It introduces a dual-mode system that toggles between intuitive chat responses and a structured "deep reasoning" mode using special system prompts. Fine-tuned via distillation from R1, it supports structured output (JSON mode) and function-call syntax for agent-based applications.
    DeepHermes 3 supports a **reasoning toggle via system prompt**, allowing users to switch between fast, intuitive responses and deliberate, multi-step reasoning. When activated with the following system instruction, the model enters a *"deep thinking"* mode, generating extended chains of thought wrapped in `<think></think>` tags before delivering a final answer (see the sketch after this list).
    System Prompt: "You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem."
    Input: $0.14 / 1M tokens
    Output: $0.14 / 1M tokens
    Context: 32,768 tokens
    Capabilities: text
  • anthropic
    Our high-performance model with exceptional reasoning and efficiency
    Input: $3 / 1M tokens
    Output: $15 / 1M tokens
    Context: 200K tokens
    Capabilities: tools, text, image
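As referenced in the DeepHermes 3 entry above, here is a minimal sketch of its prompt-based reasoning toggle sent through an OpenAI-compatible endpoint. The base URL and model slug are placeholders; the system prompt is quoted verbatim from the model card.

```python
# Minimal sketch: activating DeepHermes 3's "deep thinking" mode via system prompt.
from openai import OpenAI

DEEP_THINKING_PROMPT = (
    "You are a deep thinking AI, you may use extremely long chains of thought "
    "to deeply consider the problem and deliberate with yourself via systematic "
    "reasoning processes to help come to a correct solution prior to answering. "
    "You should enclose your thoughts and internal monologue inside <think> "
    "</think> tags, and then provide your solution or response to the problem."
)

client = OpenAI(base_url="https://your-gateway-host/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="nousresearch/deephermes-3-mistral-24b-preview",  # placeholder slug
    messages=[
        {"role": "system", "content": DEEP_THINKING_PROMPT},  # toggles deep reasoning
        {"role": "user", "content": "A train leaves at 3 pm at 60 mph; when has it covered 150 miles?"},
    ],
)

# The reply wraps its chain of thought in <think>...</think>, then gives the answer.
text = response.choices[0].message.content
answer = text.split("</think>")[-1].strip() if "</think>" in text else text
print(answer)
```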