qwen3-235b-a22b by fireworksai - AI Model Details, Pricing, and Performance Metrics
Qwen3-235B-A22B is a 235B parameter mixture-of-experts (MoE) model developed by Qwen, activating 22B parameters per forward pass. It supports seamless switching between a "thinking" mode for complex reasoning, math, and code tasks, and a "non-thinking" mode for general conversational efficiency. The model demonstrates strong reasoning ability, multilingual support (100+ languages and dialects), advanced instruction-following, and agent tool-calling capabilities. It natively handles a 32K token context window and extends up to 131K tokens using YaRN-based scaling.
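The thinking/non-thinking switch can be controlled per request. A minimal sketch, assuming an OpenAI-style chat payload and Qwen3's documented `/no_think` soft switch appended to the user turn (the payload shape here is illustrative, not any specific provider's API):

```python
# Sketch: build a chat request that toggles Qwen3's thinking mode via the
# "/no_think" soft-switch suffix on the user message. Payload fields follow
# the common OpenAI-style chat schema; adapt to your provider as needed.

def build_request(prompt: str, thinking: bool) -> dict:
    """Return a chat payload; when thinking=False, append the soft switch."""
    suffix = "" if thinking else " /no_think"
    return {
        "model": "qwen3-235b-a22b",
        "messages": [{"role": "user", "content": prompt + suffix}],
    }

payload = build_request("Prove that sqrt(2) is irrational.", thinking=False)
print(payload["messages"][0]["content"])
```

With `thinking=True` the prompt is sent unchanged and the model emits its reasoning trace before the answer.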
| Provider | Input | Output |
|---|---|---|
| — | $0.22 / 1M tokens | $0.88 / 1M tokens |
| — | $0.13 / 1M tokens | $0.60 / 1M tokens |
| Provider | Context | Input Price | Output Price | Input Formats | Output Formats | License |
|---|---|---|---|---|---|---|
| — | 131,072 tokens | $0.22 / 1M tokens | $0.88 / 1M tokens | text | text | Apache-2.0 |
| — | 40,960 tokens | $0.13 / 1M tokens | $0.60 / 1M tokens | text | text | Apache-2.0 |
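Per-request cost at these rates is simple arithmetic. A quick sketch using the $0.22 / $0.88 per-million-token tier from the table above (token counts are illustrative):

```python
# Estimate request cost from per-million-token rates.
INPUT_RATE = 0.22 / 1_000_000   # $ per input token
OUTPUT_RATE = 0.88 / 1_000_000  # $ per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the rates above."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 10K-token prompt with a 2K-token completion:
print(f"${request_cost(10_000, 2_000):.4f}")  # → $0.0040
```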
Access qwen3-235b-a22b through LangDB AI Gateway
Integrate with Qwen's qwen3-235b-a22b and 250+ other models through a unified API. Monitor usage, control costs, and enhance security.
Free tier available • No credit card required
Benchmark Tests
| Benchmark | Score |
|---|---|
| AIME | 32.7 |
| AA Coding Index | 23.3 |
| AAII | 29.9 |
| AA Math Index | 23.7 |
| GPQA | 61.3 |
| HLE | 4.7 |
| LiveCodeBench | 34.3 |
| MATH-500 | 90.2 |
| MMLU | 87.8 |
| MMLU-Pro | 76.2 |
| SciCode | 29.9 |
Code Examples
Integration samples and API usage
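A hedged example of calling the model through an OpenAI-compatible gateway using only the standard library. The base URL, API-key variable, and endpoint path are placeholder assumptions; substitute your gateway's real values. The request is constructed but not sent:

```python
import json
import os
import urllib.request

# Placeholder values -- replace with your gateway's real base URL and key.
BASE_URL = "https://api.example-gateway.com/v1"
API_KEY = os.environ.get("GATEWAY_API_KEY", "sk-placeholder")

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Construct (but do not send) an OpenAI-style chat completion request."""
    body = json.dumps({
        "model": "qwen3-235b-a22b",
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("Summarize MoE routing in two sentences.")
# Sending is left to the caller, e.g. urllib.request.urlopen(req)
print(req.full_url)
```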