gpt-oss-120b by Fireworks AI - AI Model Details, Pricing, and Performance Metrics
gpt-oss-120b
gpt-oss-120b is an open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI designed for high-reasoning, agentic, and general-purpose production use cases. It activates 5.1B parameters per forward pass and is optimized to run on a single H100 GPU with native MXFP4 quantization. The model supports configurable reasoning depth, full chain-of-thought access, and native tool use, including function calling, browsing, and structured output generation.
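The snippet below is a minimal sketch of querying the model with configurable reasoning depth through an OpenAI-compatible chat completions endpoint. The base URL, model identifier, and the `reasoning_effort` parameter are assumptions for illustration; check your provider's documentation for the exact values.

```python
# Minimal sketch: OpenAI-compatible call to gpt-oss-120b with a reasoning
# depth hint. Endpoint, model slug, and reasoning_effort are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",  # assumed provider endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/gpt-oss-120b",  # assumed model slug
    reasoning_effort="high",  # configurable depth: "low" | "medium" | "high"
    messages=[
        {"role": "user", "content": "Explain MXFP4 quantization in two sentences."}
    ],
)
print(response.choices[0].message.content)
```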
Access gpt-oss-120b through LangDB AI Gateway
Integrate with OpenAI's gpt-oss-120b and 250+ other models through a unified API. Monitor usage, control costs, and enhance security.
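As a hedged sketch of the gateway pattern, any OpenAI-compatible client can be pointed at a unified API by swapping the base URL and key; the gateway URL and model identifier below are placeholders, not LangDB's documented values.

```python
# Placeholder sketch: routing a request through an OpenAI-compatible gateway.
# The gateway URL and model identifier are illustrative, not documented values.
from openai import OpenAI

gateway = OpenAI(
    base_url="https://your-gateway.example.com/v1",  # placeholder gateway endpoint
    api_key="GATEWAY_API_KEY",
)

resp = gateway.chat.completions.create(
    model="openai/gpt-oss-120b",  # placeholder gateway-side model name
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```

Because the request shape is unchanged, usage monitoring and cost controls can live entirely in the gateway layer.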
Benchmark Tests
| Metric | AA Coding Index | AAII | GPQA | HLE | LiveCodeBench | MMLU-Pro | SciCode |
|---|---|---|---|---|---|---|---|
| Score | 50.1 | 57.9 | 79.2 | 18.5 | 63.9 | 80.8 | 36.2 |
Code Examples
Integration samples and API usage
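Since the model supports native function calling, a hedged tool-use sample fits here; the endpoint, model name, and `get_weather` tool are illustrative assumptions, and the request uses only the standard `tools` field of the chat completions API.

```python
# Hedged function-calling sketch for gpt-oss-120b via the standard "tools"
# field. Endpoint, model name, and the get_weather tool are illustrative.
import json
from openai import OpenAI

client = OpenAI(
    base_url="https://your-provider.example.com/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-oss-120b",  # placeholder model identifier
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# When the model elects to call the tool, arguments arrive as a JSON string.
message = resp.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
```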
Related Models
Similar models from Fireworks AI