gpt-oss-20b on Fireworks AI - AI Model Details, Pricing, and Performance Metrics
gpt-oss-20b
Type: completions
gpt-oss-20b is an open-weight 21B parameter model released by OpenAI under the Apache 2.0 license. It uses a Mixture-of-Experts (MoE) architecture with 3.6B active parameters per forward pass, optimized for lower-latency inference and deployability on consumer or single-GPU hardware. The model is trained in OpenAI’s Harmony response format and supports reasoning level configuration, fine-tuning, and agentic capabilities including function calling, tool use, and structured outputs.
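The agentic features above map onto a standard chat completions call. Below is a minimal sketch of setting a reasoning level, assuming an OpenAI-compatible serving endpoint; the Fireworks base URL, the model identifier, and the `reasoning_effort` parameter name are assumptions to verify against your provider's documentation.

```python
from openai import OpenAI

# Minimal sketch: querying gpt-oss-20b via an OpenAI-compatible endpoint.
# base_url, model id, and reasoning_effort are assumptions, not confirmed values.
client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",  # assumed provider endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/gpt-oss-20b",  # assumed model identifier
    messages=[{"role": "user", "content": "Explain MoE routing in two sentences."}],
    reasoning_effort="low",  # assumed mapping to the model's low/medium/high reasoning levels
)
print(response.choices[0].message.content)
```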
Access gpt-oss-20b through LangDB AI Gateway
Integrate with OpenAI's gpt-oss-20b and 250+ other models through a unified API. Monitor usage, control costs, and enhance security.
Free tier available • No credit card required
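For reference, a request routed through the gateway might look like the sketch below, assuming LangDB exposes an OpenAI-compatible API; the base URL and gateway-side model name are illustrative assumptions.

```python
from openai import OpenAI

# Minimal sketch: routing the same request through an OpenAI-compatible gateway.
# The base_url and model name are assumptions, not confirmed LangDB values.
client = OpenAI(
    base_url="https://api.us-east-1.langdb.ai/v1",  # assumed gateway endpoint
    api_key="YOUR_LANGDB_API_KEY",
)

response = client.chat.completions.create(
    model="openai/gpt-oss-20b",  # assumed gateway-side model name
    messages=[{"role": "user", "content": "Hello from the gateway!"}],
)
print(response.choices[0].message.content)
```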
Benchmark Tests
| Metric | AA Coding Index | AAII | GPQA | HLE | LiveCodeBench | MMLU-Pro | SciCode |
|---|---|---|---|---|---|---|---|
| Score | 53.7 | 44.8 | 71.5 | 8.5 | 72.1 | 73.6 | 35.4 |
Code Examples
Integration samples and API usage
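A minimal function calling sketch follows, assuming the serving endpoint supports the OpenAI tools API; the `get_weather` tool and its schema are hypothetical, defined here purely for illustration.

```python
from openai import OpenAI

# Minimal function calling sketch. The endpoint, model identifier, and the
# get_weather tool are assumptions/hypotheticals for illustration only.
client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",  # assumed endpoint
    api_key="YOUR_API_KEY",
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="accounts/fireworks/models/gpt-oss-20b",  # assumed model identifier
    messages=[{"role": "user", "content": "What's the weather in Paris right now?"}],
    tools=tools,
)

# If the model chose to call the tool, the call arrives as structured JSON:
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```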
Related Models
Similar models from Fireworks AI