qwen3-235b-a22b by fireworksai - AI Model Details, Pricing, and Performance Metrics

Model: qwen3-235b-a22b
Developer: qwen
Endpoint: completions
Available on: fireworksai, deepinfra

Qwen3-235B-A22B is a 235B parameter mixture-of-experts (MoE) model developed by Qwen, activating 22B parameters per forward pass. It supports seamless switching between a "thinking" mode for complex reasoning, math, and code tasks, and a "non-thinking" mode for general conversational efficiency. The model demonstrates strong reasoning ability, multilingual support (100+ languages and dialects), advanced instruction-following, and agent tool-calling capabilities. It natively handles a 32K token context window and extends up to 131K tokens using YaRN-based scaling.
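The thinking/non-thinking switch is exposed through the chat template. Below is a minimal sketch using the Hugging Face transformers tokenizer; the enable_thinking flag is the hard switch documented for the Qwen3 family, and the example prompt is illustrative only.

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-235B-A22B")

    messages = [{"role": "user", "content": "Prove that sqrt(2) is irrational."}]

    # Thinking mode: the rendered prompt invites a <think>...</think> block
    # before the final answer.
    thinking_prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True, enable_thinking=True
    )

    # Non-thinking mode: skips the reasoning block for faster chat turns.
    plain_prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True, enable_thinking=False
    )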

Provider       Input                Output
fireworksai    $0.22 / 1M tokens    $0.88 / 1M tokens
deepinfra      $0.13 / 1M tokens    $0.60 / 1M tokens
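As a quick check on these rates, the sketch below estimates per-request cost in Python; the helper name and token counts are invented for illustration.

    # Hypothetical helper: per-request cost from the per-1M-token rates above.
    PRICES_PER_1M = {  # USD per 1M tokens: (input, output)
        "fireworksai": (0.22, 0.88),
        "deepinfra": (0.13, 0.60),
    }

    def estimate_cost(provider: str, input_tokens: int, output_tokens: int) -> float:
        in_rate, out_rate = PRICES_PER_1M[provider]
        return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

    # A request with 10,000 input tokens and 2,000 output tokens:
    print(f"{estimate_cost('fireworksai', 10_000, 2_000):.4f}")  # 0.0040
    print(f"{estimate_cost('deepinfra', 10_000, 2_000):.4f}")    # 0.0025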
Released: Apr 29, 2025
Knowledge cutoff: Oct 31, 2024
Context window: 131,072 tokens
Input: $0.22 / 1M tokens
Output: $0.88 / 1M tokens
Capabilities: tools
Accepts: text
Returns: text
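The tools capability means the model can emit structured function calls. Here is a sketch of one tool-call request, assuming an OpenAI-compatible chat completions endpoint; the base URL, API key, model ID, and get_weather tool are all placeholders to adapt to your provider.

    from openai import OpenAI

    # Placeholder endpoint and credentials.
    client = OpenAI(base_url="https://YOUR-PROVIDER-HOST/v1", api_key="YOUR_API_KEY")

    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool, for illustration only
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    response = client.chat.completions.create(
        model="qwen3-235b-a22b",  # exact model ID varies by provider
        messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
        tools=tools,
    )
    # If the model chose to call the tool, the structured call appears here:
    print(response.choices[0].message.tool_calls)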

Access qwen3-235b-a22b through LangDB AI Gateway

Integrate with qwen's qwen3-235b-a22b and 250+ other models through a unified API. Monitor usage, control costs, and enhance security.

Unified API
Cost Optimization
Enterprise Security

Free tier available • No credit card required

Instant Setup
99.9% Uptime
10,000+ Monthly Requests
Available from 2 providers

Benchmark Tests

Benchmark          Score   Category
AIME               32.7    Mathematics
AA Coding Index    32.1    Programming
AAII               29.9    General
AA Math Index      23.7    Mathematics
GPQA               61.3    STEM (Physics, Chemistry, Biology)
HLE                4.7     General Knowledge
LiveCodeBench      34.3    Programming
MATH-500           90.2    Mathematics
MMLU               87.8    General Knowledge
MMLU-Pro           76.2    General Knowledge
SciCode            29.9    Scientific

Code Examples

Integration samples and API usage
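A minimal chat-completion sketch, assuming an OpenAI-compatible gateway endpoint; the base URL, API key, and prompt are placeholders to replace with your own values.

    from openai import OpenAI

    client = OpenAI(
        base_url="https://YOUR-GATEWAY-HOST/v1",  # placeholder gateway URL
        api_key="YOUR_API_KEY",
    )

    response = client.chat.completions.create(
        model="qwen3-235b-a22b",  # exact model ID may vary by gateway/provider
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Explain mixture-of-experts routing in two sentences."},
        ],
        max_tokens=256,
    )
    print(response.choices[0].message.content)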