mistral-small-3 by mistralai - AI Model Details, Pricing, and Performance Metrics
mistral-small-3
Mistral Small 3 is a 24B-parameter language model optimized for low-latency performance across common AI tasks. Released under the Apache 2.0 license, it features both pre-trained and instruction-tuned versions designed for efficient local deployment. The model achieves 81% accuracy on the MMLU benchmark and performs competitively with larger models like Llama 3.3 70B and Qwen 32B, while operating at three times the speed on equivalent hardware. [Read the blog post about the model here.](https://mistral.ai/news/mistral-small-3/)
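Since the weights are Apache 2.0 licensed and aimed at local deployment, the instruction-tuned checkpoint can be run with standard open-source tooling. Below is a minimal local-inference sketch using the Hugging Face `transformers` text-generation pipeline; the repository ID and generation settings are assumptions, not details taken from this page.

```python
# Minimal local-inference sketch.
# Assumptions: the Hugging Face repo ID below, and a GPU with enough memory
# to hold a 24B-parameter model in bfloat16 (quantization may be needed
# on smaller cards).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-Small-24B-Instruct-2501",  # assumed repo ID
    torch_dtype="bfloat16",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Summarize the Apache 2.0 license in one sentence."}
]

# With chat-style input, the pipeline returns the conversation including
# the generated assistant turn; print only the assistant reply.
result = generator(messages, max_new_tokens=128)
print(result[0]["generated_text"][-1]["content"])
```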
Access mistral-small-3 through LangDB AI Gateway
Integrate with mistralai's mistral-small-3 and 250+ other models through a unified API. Monitor usage, control costs, and enhance security.
Free tier available • No credit card required
Benchmark Tests
| Benchmark | Score |
|---|---|
| AIME | 6.3 |
| AAII | 13.0 |
| GPQA | 38.1 |
| HLE | 4.3 |
| LiveCodeBench | 14.1 |
| MATH-500 | 56.3 |
| MMLU-Pro | 52.9 |
| SciCode | 15.6 |
Code Examples
Integration samples and API usage
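A minimal sketch of calling the model through the gateway, assuming an OpenAI-compatible chat completions endpoint; the base URL, API key environment variable, and exact model identifier below are placeholders to adapt to your LangDB project settings.

```python
# Sketch of a chat completion through an OpenAI-compatible gateway endpoint.
# Assumptions: the base URL and environment variable name are placeholders;
# substitute the values from your LangDB project configuration.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.langdb.ai/v1",   # placeholder gateway URL
    api_key=os.environ["LANGDB_API_KEY"],  # placeholder API key variable
)

response = client.chat.completions.create(
    model="mistral-small-3",  # model slug as listed on this page
    messages=[
        {"role": "user", "content": "Give me one sentence about Mistral Small 3."}
    ],
)
print(response.choices[0].message.content)
```

The same request pattern applies to the other models available through the gateway; only the `model` value changes.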