qwen3-235b-a22b (completions)
Qwen3-235B-A22B is a 235B parameter mixture-of-experts (MoE) model developed by Qwen, activating 22B parameters per forward pass. It supports seamless switching between a "thinking" mode for complex reasoning, math, and code tasks, and a "non-thinking" mode for general conversational efficiency. The model demonstrates strong reasoning ability, multilingual support (100+ languages and dialects), advanced instruction-following, and agent tool-calling capabilities. It natively handles a 32K token context window and extends up to 131K tokens using YaRN-based scaling.
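The thinking/non-thinking switch described above is controlled at inference time. A minimal sketch, assuming the Qwen3-documented soft switches "/think" and "/no_think" appended to a user turn (the Transformers route is the `enable_thinking` flag on `apply_chat_template`; exact behavior may vary by serving stack):

```python
# Hedged sketch: steer Qwen3 between thinking and non-thinking mode
# by appending the documented soft switch to the last user message.
def with_mode(messages: list[dict], thinking: bool) -> list[dict]:
    """Return a copy of `messages` with "/think" or "/no_think" appended
    to the final user turn. Does not mutate the input list."""
    switch = "/think" if thinking else "/no_think"
    out = [dict(m) for m in messages]
    out[-1]["content"] = f'{out[-1]["content"]} {switch}'
    return out

# Complex reasoning task: request thinking mode.
msgs = with_mode([{"role": "user", "content": "Prove 2 + 2 = 4."}], thinking=True)
```

For casual chat, passing `thinking=False` appends "/no_think" instead, trading reasoning depth for latency and output-token cost.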
Input: $0.13 / 1M tokens
Output: $0.60 / 1M tokens
Context: 40,960 tokens
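At these rates, per-request cost is simple arithmetic over token counts; a quick sketch using the prices listed above:

```python
# Estimate request cost from the listed per-million-token prices.
INPUT_PRICE = 0.13 / 1_000_000   # $ per input token
OUTPUT_PRICE = 0.60 / 1_000_000  # $ per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one call, given prompt and completion token counts."""
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

# e.g. a 10K-token prompt with a 2K-token reply:
cost = estimate_cost(10_000, 2_000)  # ≈ $0.0025
```

Note that in thinking mode, reasoning tokens are billed as output, so long chains of thought dominate the total.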
Capabilities: tools · text input · text output
Category Rankings
marketing: #17
Access qwen3-235b-a22b through LangDB AI Gateway
Integrate with Qwen's qwen3-235b-a22b and 250+ other models through a unified API. Monitor usage, control costs, and enhance security.
Code Example
Configuration: Base URL, API Keys
Headers: Project ID, X-Run-Id, X-Thread-Id
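A hedged sketch of the configuration above in Python: the gateway exposes an OpenAI-compatible chat-completions endpoint, so a request needs a base URL, an API key, and the project/run/thread headers. The base URL and the exact project-ID header name below are placeholders, not confirmed values; check your gateway dashboard:

```python
import json

# Placeholder base URL; substitute the one shown in your dashboard.
BASE_URL = "https://api.example-gateway.ai/v1"

headers = {
    "Authorization": "Bearer YOUR_API_KEY",  # API key from the dashboard
    "x-project-id": "YOUR_PROJECT_ID",       # assumed project-ID header name
    "X-Run-Id": "run-001",                   # groups related calls into one run
    "X-Thread-Id": "thread-001",             # ties calls to one conversation
}

payload = {
    "model": "qwen3-235b-a22b",
    "messages": [{"role": "user", "content": "Explain MoE routing briefly."}],
    "max_tokens": 512,
}

# Send with any HTTP client, e.g.:
# requests.post(f"{BASE_URL}/chat/completions", headers=headers, json=payload)
print(json.dumps(payload, indent=2))
```

Because the payload follows the OpenAI schema, existing OpenAI client libraries can usually be pointed at the gateway by overriding only the base URL and headers.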
Model Parameters (14 available)
frequency_penalty (min -2, default 0, max 2)
include_reasoning
max_tokens
min_p (min 0, default 0, max 1)
presence_penalty (min -2, default 0, max 1.999)
repetition_penalty (min 0, default 1, max 2)
response_format
seed
stop
temperature (min 0, default 1, max 2)
tool_choice
tools
top_k
top_p (min 0, default 1, max 1)
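Since several of these parameters have documented ranges, a small client-side check can catch bad values before a request is sent. A sketch: the RANGES map mirrors the listed min/max values, and the validate helper is illustrative, not part of any gateway API:

```python
# Client-side range check for the sampling parameters listed above.
RANGES = {
    "frequency_penalty": (-2, 2),
    "min_p": (0, 1),
    "presence_penalty": (-2, 1.999),
    "repetition_penalty": (0, 2),
    "temperature": (0, 2),
    "top_p": (0, 1),
}

def validate(params: dict) -> dict:
    """Raise ValueError for any out-of-range sampling parameter;
    parameters without a listed range pass through unchecked."""
    for name, value in params.items():
        if name in RANGES:
            lo, hi = RANGES[name]
            if not lo <= value <= hi:
                raise ValueError(f"{name}={value} outside [{lo}, {hi}]")
    return params

request_params = validate({"temperature": 0.7, "top_p": 0.9, "min_p": 0.05})
```

Failing fast locally is cheaper than a rejected API call, and the error message pinpoints which parameter was out of range.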
Additional Configuration
Tools
Guards
User fields: Id, Name, Tags