deepseek-chat-v3-0324 (provider: parasail, endpoint: completions)

DeepSeek V3 0324 is a 685B-parameter mixture-of-experts model and the latest iteration of the flagship chat model family from the DeepSeek team. It succeeds the [DeepSeek V3](/deepseek/deepseek-chat-v3) model and performs well on a wide variety of tasks.

Input: $0.79 / 1M tokens
Output: $1.15 / 1M tokens
Context: 163,840 tokens
Modalities: text → text
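At these rates, per-request cost is a simple linear function of the token counts. A minimal sketch, using the prices from this listing (the helper name is hypothetical):

```python
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      input_price_per_m: float = 0.79,
                      output_price_per_m: float = 1.15) -> float:
    """Estimate request cost in USD from per-million-token prices."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# e.g. a 50k-token prompt with a 2k-token reply:
print(round(estimate_cost_usd(50_000, 2_000), 4))  # → 0.0418
```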

Access deepseek-chat-v3-0324 through the LangDB AI Gateway

Integrate with DeepSeek's deepseek-chat-v3-0324 and 250+ other models through a unified API. Monitor usage, control costs, and enhance security.

  • Unified API
  • Cost Optimization
  • Enterprise Security

Free tier available • No credit card required • Instant setup • 99.9% uptime • 10,000+ monthly requests
Code Example and Configuration

Configuration covers the Base URL, API Keys, and request headers: the project ID header, X-Run-Id, and X-Thread-Id.
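As a concrete illustration of the configuration above, the sketch below assembles a chat-completions request with the headers named on this page. The base URL and the project-ID header name are assumptions (the page does not spell them out), and the request is only constructed, not sent:

```python
import json

# Assumed values -- substitute the real gateway base URL and your own keys.
BASE_URL = "https://api.example-gateway.com/v1"   # hypothetical base URL
API_KEY = "YOUR_API_KEY"
PROJECT_ID = "YOUR_PROJECT_ID"

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
    "x-project-id": PROJECT_ID,   # exact header name is an assumption
    "X-Run-Id": "run-001",        # groups related calls into one run
    "X-Thread-Id": "thread-001",  # groups calls into one conversation thread
}

payload = {
    "model": "deepseek-chat-v3-0324",
    "messages": [{"role": "user", "content": "Hello!"}],
}

url = f"{BASE_URL}/chat/completions"
body = json.dumps(payload)
```

From here, any HTTP client can POST `body` to `url` with `headers` attached.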
Model Parameters (12 available)

  • frequency_penalty: min -2, default 0, max 2
  • max_tokens
  • min_p: min 0, default 0, max 1
  • presence_penalty: min -2, default 0, max 1.999
  • repetition_penalty: min 0, default 1, max 2
  • response_format
  • seed
  • stop
  • structured_outputs
  • temperature: min 0, default 1, max 2
  • top_k
  • top_p: min 0, default 1, max 1
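The ranges above can be checked client-side before a request is sent, which surfaces bad values early instead of as API errors. A minimal sketch, with bounds mirroring this listing (the helper name is hypothetical):

```python
# (min, max) bounds for the bounded sampling parameters listed above.
PARAM_BOUNDS = {
    "frequency_penalty": (-2.0, 2.0),
    "min_p": (0.0, 1.0),
    "presence_penalty": (-2.0, 1.999),
    "repetition_penalty": (0.0, 2.0),
    "temperature": (0.0, 2.0),
    "top_p": (0.0, 1.0),
}

def validate_params(params: dict) -> dict:
    """Raise ValueError for any bounded parameter outside its range."""
    for name, value in params.items():
        if name in PARAM_BOUNDS:
            lo, hi = PARAM_BOUNDS[name]
            if not (lo <= value <= hi):
                raise ValueError(f"{name}={value} is outside [{lo}, {hi}]")
    return params

validate_params({"temperature": 0.7, "top_p": 0.95})  # passes silently
```

Unbounded or structural parameters (max_tokens, seed, stop, response_format, structured_outputs, top_k) are deliberately left unchecked here.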
Additional Configuration

  • Tools
  • Guards
Popular Models (10)
  • openai
    Most capable embedding model for both English and non-English tasks.
    Input: $0.13 / 1M tokens · Output: free · Context: 8,191 tokens
    Modalities: text → text
  • openai
    GPT-4o mini ("o" for "omni") is a fast, affordable small model for focused tasks. It accepts both text and image inputs and produces text outputs (including Structured Outputs). It is ideal for fine-tuning, and outputs from a larger model like GPT-4o can be distilled to GPT-4o mini to produce similar results at lower cost and latency. The knowledge cutoff for GPT-4o mini models is October 2023.
    Input: $0.15 / 1M tokens · Output: $0.60 / 1M tokens · Context: 128K tokens
    Tools supported · Modalities: text, image → text
  • deepseek
    DeepSeek V3 0324 is a 685B-parameter mixture-of-experts model and the latest iteration of the flagship chat model family from the DeepSeek team. It succeeds the [DeepSeek V3](/deepseek/deepseek-chat-v3) model and performs well on a wide variety of tasks.
    Input: $0.90 / 1M tokens · Output: $0.90 / 1M tokens · Context: 163,840 tokens
    Tools supported · Modalities: text → text
  • z-ai
    GLM-4.5 is our latest flagship foundation model, purpose-built for agent-based applications. It leverages a Mixture-of-Experts (MoE) architecture and supports a context length of up to 128K tokens. GLM-4.5 delivers significantly enhanced capabilities in reasoning, code generation, and agent alignment. It supports a hybrid inference mode with two options: a "thinking mode" designed for complex reasoning and tool use, and a "non-thinking mode" optimized for instant responses.
    Input: $0.60 / 1M tokens · Output: $2.20 / 1M tokens · Context: 131,072 tokens
    Tools supported · Modalities: text → text
  • deepseek
    May 28th update to the [original DeepSeek R1](/deepseek/deepseek-r1). Performance is on par with [OpenAI o1](/openai/o1), but the model is fully open-source, with fully open reasoning tokens. It has 671B parameters, 37B of which are active in an inference pass.
    Input: $3 / 1M tokens · Output: $8 / 1M tokens · Context: 163,840 tokens
    Tools supported · Modalities: text → text
  • deepseek
    May 28th update to the [original DeepSeek R1](/deepseek/deepseek-r1). Performance is on par with [OpenAI o1](/openai/o1), but the model is fully open-source, with fully open reasoning tokens. It has 671B parameters, 37B of which are active in an inference pass.
    Input: $0.79 / 1M tokens · Output: $4 / 1M tokens · Context: 163,840 tokens
    Modalities: text → text
  • deepseek
    DeepSeek-V3
    A strong Mixture-of-Experts (MoE) language model from DeepSeek, with 671B total parameters of which 37B are activated for each token.
    Input: $1.25 / 1M tokens · Output: $1.25 / 1M tokens · Context: 131,072 tokens
    Tools supported · Modalities: text → text
  • deepseek
    DeepSeek V3 0324 is a 685B-parameter mixture-of-experts model and the latest iteration of the flagship chat model family from the DeepSeek team. It succeeds the [DeepSeek V3](/deepseek/deepseek-chat-v3) model and performs well on a wide variety of tasks.
    Input: $0.79 / 1M tokens · Output: $1.15 / 1M tokens · Context: 163,840 tokens
    Modalities: text → text
  • anthropic
    Our high-performance model with exceptional reasoning and efficiency.
    Input: $3 / 1M tokens · Output: $15 / 1M tokens · Context: 200K tokens
    Tools supported · Modalities: text, image → text
  • anthropic
    claude-opus-4
    Our most capable and intelligent model yet. Claude Opus 4 sets new standards in complex reasoning and advanced coding.
    Input: $15 / 1M tokens · Output: $75 / 1M tokens · Context: 200K tokens
    Tools supported · Modalities: text, image → text