llama-3.3-70b-instruct by deepinfra - AI Model Details, Pricing, and Performance Metrics
llama-3.3-70b-instruct
The Meta Llama 3.3 multilingual large language model (LLM) is a pretrained and instruction-tuned generative model in 70B (text in/text out). The Llama 3.3 instruction-tuned, text-only model is optimized for multilingual dialogue use cases and outperforms many of the available open-source and closed chat models on common industry benchmarks. Supported languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai. [Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_3/MODEL_CARD.md)
Provider | Input | Output |
---|---|---|
| $0.04 / 1M tokens | $0.12 / 1M tokens |
| $0.15 / 1M tokens | $0.50 / 1M tokens |
Providers | Context | Input Price | Output Price | Input Formats | Output Formats | License |
---|---|---|---|---|---|---|
| 131072 tokens | $0.04 / 1M tokens | $0.12 / 1M tokens | text | text | llama_3_3_community_license_agreement |
| 131072 tokens | $0.15 / 1M tokens | $0.50 / 1M tokens | text | text | llama_3_3_community_license_agreement |
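Since the two tiers above price input and output tokens separately, the cost of a request is a simple weighted sum. A minimal sketch, assuming the cheaper tier's listed prices ($0.04 input / $0.12 output per 1M tokens); the function name is illustrative, not part of any provider's API:

```python
def estimate_cost_usd(input_tokens, output_tokens,
                      input_price_per_m=0.04, output_price_per_m=0.12):
    """Estimate the USD cost of one request at the given per-1M-token prices."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# A 10,000-token prompt with a 2,000-token reply on the cheaper tier:
cost = estimate_cost_usd(10_000, 2_000)
print(f"${cost:.6f}")  # 10,000 * 0.04 + 2,000 * 0.12 = 640 micro-dollars
```

Pass the second tier's prices (`0.15`, `0.12` → `0.50`) as keyword arguments to compare providers on the same traffic mix.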
Access llama-3.3-70b-instruct through LangDB AI Gateway
Integrate with Meta's llama-3.3-70b-instruct and 250+ other models through a unified API. Monitor usage, control costs, and enhance security.
Free tier available • No credit card required
Category Scores
Benchmark Tests
Metric | AIME | AA Coding Index | AAII | AA Math Index | GPQA | HLE | HumanEval | LiveCodeBench | MATH-500 | MMLU | MMLU-Pro | SciCode |
---|---|---|---|---|---|---|---|---|---|---|---|---|
Score | 30.0 | 27.4 | 27.9 | 7.7 | 50.2 | 4.0 | 88.4 | 28.8 | 77.3 | 86.0 | 70.1 | 26.0 |
Compare with Similar Models
Code Examples
Integration samples and API usage
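A minimal sketch of calling the model over an OpenAI-compatible chat-completions endpoint. The endpoint URL and model slug below are assumptions based on DeepInfra's hosted API and may differ when routing through a gateway; swap them for your provider's values. The request is built with the standard library only, and the network call is kept behind the `__main__` guard:

```python
import json
import os
import urllib.request

# Assumed values: DeepInfra's OpenAI-compatible endpoint and model slug.
# Replace with your gateway's base URL and model identifier if different.
API_URL = "https://api.deepinfra.com/v1/openai/chat/completions"
MODEL = "meta-llama/Llama-3.3-70B-Instruct"

def build_chat_request(prompt, api_key, max_tokens=256):
    """Assemble the URL, headers, and JSON body for one chat completion."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return API_URL, headers, payload

if __name__ == "__main__":
    # Requires an API key in the environment; performs a real network call.
    url, headers, payload = build_chat_request(
        "Summarize Llama 3.3 in one sentence.",
        os.environ["DEEPINFRA_API_KEY"])
    req = urllib.request.Request(
        url, data=json.dumps(payload).encode(), headers=headers)
    with urllib.request.urlopen(req, timeout=60) as resp:
        body = json.load(resp)
    print(body["choices"][0]["message"]["content"])
```

Because the endpoint follows the OpenAI chat-completions shape, the same payload works with the official `openai` client by pointing its `base_url` at the provider.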
Related Models
Similar models from deepinfra