gpt-oss-120b by fireworksai - AI Model Details, Pricing, and Performance Metrics

gpt-oss-120b (openai) · completions
Available on: fireworksai, deepinfra, parasail

gpt-oss-120b is an open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI designed for high-reasoning, agentic, and general-purpose production use cases. It activates 5.1B parameters per forward pass and is optimized to run on a single H100 GPU with native MXFP4 quantization. The model supports configurable reasoning depth, full chain-of-thought access, and native tool use, including function calling, browsing, and structured output generation.

Provider      Input               Output
fireworksai   $0.15 / 1M tokens   $0.60 / 1M tokens
deepinfra     $0.09 / 1M tokens   $0.45 / 1M tokens
parasail      $0.15 / 1M tokens   $0.60 / 1M tokens
Released: Aug 5, 2025
Knowledge cutoff: Feb 6, 2025
Context window: 131,072 tokens
Input: $0.15 / 1M tokens
Output: $0.60 / 1M tokens
Capabilities: tools
Accepts: text
Returns: text
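At the listed rates, the cost of a single request is input_tokens × input price plus output_tokens × output price, each divided by one million. A minimal sketch, with the fireworksai rates from the table above hard-coded:

```python
# Per-request cost estimate at the listed fireworksai rates
# ($0.15 per 1M input tokens, $0.60 per 1M output tokens).
INPUT_PRICE_PER_M = 0.15
OUTPUT_PRICE_PER_M = 0.60

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# A request filling the 131,072-token context window and
# producing 2,000 output tokens:
print(f"${estimate_cost(131_072, 2_000):.4f}")  # → $0.0209
```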

Access gpt-oss-120b through LangDB AI Gateway


Integrate with OpenAI's gpt-oss-120b and 250+ other models through a unified API. Monitor usage, control costs, and enhance security.

Available from 3 providers

Benchmark Tests

Benchmark        Score  Category
AA Coding Index  50.1   Programming
AAII             57.9   General
AA Math Index    93.4   Mathematics
GPQA             79.2   STEM (Physics, Chemistry, Biology)
HLE              18.5   General Knowledge
LiveCodeBench    63.9   Programming
MMLU-Pro         80.8   General Knowledge
SciCode          36.2   Scientific

Code Examples

Integration samples and API usage
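The providers above serve gpt-oss-120b through OpenAI-compatible chat-completions endpoints, so a request body can be sketched as a plain JSON payload. This is a sketch only: the `reasoning_effort` field is an assumption based on the model's configurable reasoning depth, and exact field names and model ids vary by provider.

```python
import json

def build_request(prompt: str, effort: str = "medium") -> dict:
    """Build a chat-completions request body for gpt-oss-120b.

    `reasoning_effort` is an assumed field reflecting the model's
    configurable reasoning depth; check your provider's API docs.
    """
    return {
        "model": "gpt-oss-120b",
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_effort": effort,
        "max_tokens": 512,
    }

body = build_request("Explain MXFP4 quantization in two sentences.")
print(json.dumps(body, indent=2))
```

The resulting dict would be POSTed to the provider's `/chat/completions` route with an API key in the Authorization header, per that provider's documentation.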