mixtral-8x7b-instruct
completions
Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture of Experts model from Mistral AI for chat and instruction-following use. It incorporates 8 experts (feed-forward networks) for a total of 47 billion parameters; the Instruct variant is fine-tuned by Mistral. #moe
Input: $0.08 / 1M tokens
Output: $0.24 / 1M tokens
Context: 32,768 tokens
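At these rates, the cost of a call is linear in the token counts. A quick sketch of the arithmetic, using the prices listed above:

```python
INPUT_PRICE_PER_M = 0.08   # USD per 1M input tokens (from the pricing above)
OUTPUT_PRICE_PER_M = 0.24  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed per-token rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# e.g. a full 32,768-token context in and 1,000 tokens out
cost = estimate_cost(32_768, 1_000)
```

So a request that fills the whole context window still costs well under a cent at these prices.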
Supports: tools · Input: text · Output: text
Access mixtral-8x7b-instruct through LangDB AI Gateway
Recommended
Integrate with mistralai's mixtral-8x7b-instruct and 250+ other models through a unified API. Monitor usage, control costs, and enhance security.
Unified API
Cost Optimization
Enterprise Security
Get Started Now
Free tier available • No credit card required
Instant Setup
99.9% Uptime
10,000+ Monthly Requests
Code Example
Configuration
Base URL
API Keys
Headers
Project ID in header
X-Run-Id
X-Thread-Id
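The headers above can be attached to an ordinary HTTPS chat-completion request. A minimal sketch, assuming an OpenAI-compatible `/v1/chat/completions` endpoint and `x-project-id` as the project-ID header name (both are assumptions; the X-Run-Id and X-Thread-Id names come from this page):

```python
import json

def build_chat_request(api_key, project_id, run_id, thread_id, messages):
    """Assemble headers and JSON body for a gateway chat-completion call.

    X-Run-Id and X-Thread-Id are the tracing headers listed above;
    "x-project-id" is an assumed name for the project-ID header.
    """
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
        "x-project-id": project_id,  # assumption: exact header name may differ
        "X-Run-Id": run_id,
        "X-Thread-Id": thread_id,
    }
    payload = {
        "model": "mixtral-8x7b-instruct",
        "messages": messages,
    }
    return headers, json.dumps(payload)

headers, body = build_chat_request(
    api_key="sk-...", project_id="proj-123",
    run_id="run-1", thread_id="thread-1",
    messages=[{"role": "user", "content": "Hello"}],
)
```

The returned headers and body can then be POSTed to the gateway's base URL with any HTTP client.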
Model Parameters (13 available)
- frequency_penalty — min -2, default 0, max 2
- max_tokens
- min_p — min 0, default 0, max 1
- presence_penalty — min -2, default 0, max 1.999
- repetition_penalty — min 0, default 1, max 2
- response_format
- seed
- stop
- temperature — min 0, default 1, max 2
- tool_choice
- tools
- top_k
- top_p — min 0, default 1, max 1
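The ranges above can be enforced client-side before a request is sent. A minimal sketch of a validator, assuming the table's concatenated numbers read as min/default/max bounds per parameter:

```python
# Assumed interpretation of the parameter table: (min, max) bounds.
PARAM_BOUNDS = {
    "frequency_penalty": (-2.0, 2.0),
    "presence_penalty": (-2.0, 1.999),
    "repetition_penalty": (0.0, 2.0),
    "temperature": (0.0, 2.0),
    "top_p": (0.0, 1.0),
    "min_p": (0.0, 1.0),
}

def validate_params(params: dict) -> dict:
    """Raise ValueError for any bounded parameter outside its documented range."""
    for name, value in params.items():
        if name in PARAM_BOUNDS:
            lo, hi = PARAM_BOUNDS[name]
            if not lo <= value <= hi:
                raise ValueError(f"{name}={value} outside [{lo}, {hi}]")
    return params
```

For example, `validate_params({"temperature": 0.7, "top_p": 0.9})` passes, while `temperature=3.0` raises before the request ever leaves the client.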
Additional Configuration
Tools
Guards
Publicly Shared Threads (0)
Discover shared experiences
Shared threads will appear here, showcasing real-world applications and insights from the community. Check back soon for updates!
Share your threads to help others