qwen3-235b-a22b by deepinfra - AI Model Details, Pricing, and Performance Metrics

qwen3-235b-a22b

completions
by deepinfra

Qwen3-235B-A22B is a 235B parameter mixture-of-experts (MoE) model developed by Qwen, activating 22B parameters per forward pass. It supports seamless switching between a "thinking" mode for complex reasoning, math, and code tasks, and a "non-thinking" mode for general conversational efficiency. The model demonstrates strong reasoning ability, multilingual support (100+ languages and dialects), advanced instruction-following, and agent tool-calling capabilities. It natively handles a 32K token context window and extends up to 131K tokens using YaRN-based scaling.

Released: Apr 29, 2025
Knowledge cutoff: Oct 31, 2024
License: Apache-2.0
Context: 40,960 tokens
Input: $0.13 / 1M tokens
Output: $0.60 / 1M tokens
Capabilities: tools
Accepts: text
Returns: text
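The per-million-token rates above translate directly into per-request costs. The helper below is an illustrative sketch for sanity-checking a bill, not an official SDK; the default rates mirror the pricing listed on this page.

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_m: float = 0.13,
                  output_price_per_m: float = 0.60) -> float:
    """Estimate request cost in USD from per-1M-token rates."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# Example: a full 40,960-token context of input plus a 2,000-token reply
cost = estimate_cost(40_960, 2_000)
print(f"${cost:.6f}")  # → $0.006525
```

At these rates, even a maximally long prompt costs well under a cent per request.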

Access qwen3-235b-a22b through LangDB AI Gateway

Recommended

Integrate with Qwen's qwen3-235b-a22b and 250+ other models through a unified API. Monitor usage, control costs, and enhance security.

Unified API
Cost Optimization
Enterprise Security
Get Started Now

Free tier available • No credit card required

Instant Setup
99.9% Uptime
10,000+ Monthly Requests

Benchmark Tests
Benchmark        Score  Category
HLE              4.7    General Knowledge
AIME             32.7   Mathematics
GPQA             61.3   STEM (Physics, Chemistry, Biology)
MMLU             87.8   General Knowledge
SciCode          29.9   Scientific
MATH-500         90.2   Mathematics
MMLU-Pro         76.2   General Knowledge
LiveCodeBench    34.3   Programming
AA Math Index    23.7   Mathematics
AA Coding Index  23.3   Programming
AAII             29.9   General

Code Examples

Integration samples and API usage
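Gateways like LangDB typically expose an OpenAI-compatible chat completions endpoint. The sketch below builds a request for this model under that assumption; the endpoint URL, model identifier string, and API key are placeholders, not confirmed values. The "/no_think" suffix is Qwen3's documented soft switch for disabling thinking mode in a prompt.

```python
import json
import urllib.request

API_URL = "https://api.example-gateway.com/v1/chat/completions"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"  # placeholder credential

def build_request(prompt: str, thinking: bool = True) -> dict:
    """Build an OpenAI-compatible chat payload for qwen3-235b-a22b.

    Qwen3 supports switching between thinking and non-thinking modes;
    appending "/no_think" to a prompt is one documented way to disable it.
    """
    if not thinking:
        prompt = prompt + " /no_think"
    return {
        "model": "qwen/qwen3-235b-a22b",  # assumed model id format
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,
    }

def send(payload: dict) -> dict:
    """POST the payload with bearer auth and return the parsed JSON reply."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Build a request with thinking mode disabled (no network call made here)
payload = build_request("Prove that sqrt(2) is irrational.", thinking=False)
```

A real integration would call `send(payload)` and read `choices[0].message.content` from the response, per the OpenAI-compatible schema.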