llama-4-maverick:free
Provider: openrouter · Endpoint: completions

Llama 4 Maverick 17B Instruct (128E) is a high-capacity multimodal language model from Meta, built on a mixture-of-experts (MoE) architecture with 128 experts and 17 billion active parameters per forward pass (400B total). It supports multilingual text and image input, and produces multilingual text and code output across 12 supported languages. Optimized for vision-language tasks, Maverick is instruction-tuned for assistant-like behavior, image reasoning, and general-purpose multimodal interaction. Maverick features early fusion for native multimodality and a native context window of up to 1 million tokens (the free endpoint listed here exposes 256K tokens). It was trained on a curated mixture of public, licensed, and Meta-platform data covering ~22 trillion tokens, with a knowledge cutoff of August 2024. Released on April 5, 2025 under the Llama 4 Community License, Maverick is suited for research and commercial applications that require advanced multimodal understanding and high model throughput.

Input: Free
Output: Free
Context: 256K tokens
Input modalities: text, image · Output modality: text (see the example request below)
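
The text-plus-image input and text output map onto a standard multimodal chat message. The Python sketch below assumes the LangDB gateway described in the next section exposes an OpenAI-compatible chat completions API (its parameter names suggest it does); the base URL and the model identifier string are placeholders, not values taken from this page, so substitute the ones shown in your own dashboard.

```python
from openai import OpenAI

# Illustrative sketch only: the base URL and model identifier below are
# assumptions; take the real values from your LangDB dashboard.
client = OpenAI(
    base_url="https://api.us-east-1.langdb.ai/v1",
    api_key="YOUR_LANGDB_API_KEY",
)

# Text + image input, text output, matching the modalities listed above.
response = client.chat.completions.create(
    model="openrouter/llama-4-maverick:free",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/sample.jpg"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```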

Access llama-4-maverick:free through LangDB AI Gateway

Recommended

Integrate with meta-llama's llama-4-maverick:free and 250+ other models through a unified API. Monitor usage, control costs, and enhance security.

Unified API · Cost Optimization · Enterprise Security

Free tier available • No credit card required

Instant Setup · 99.9% Uptime · 10,000+ Monthly Requests
Code Example
Configuration
  • Base URL
  • API Keys
  • Headers: project ID header, X-Run-Id, X-Thread-Id (see the configuration sketch below)
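
A minimal client-configuration sketch, assuming an OpenAI-compatible endpoint: the base URL and the project-ID header name ("x-project-id") are assumptions to be replaced with your project's actual values, while X-Run-Id and X-Thread-Id are the optional per-request headers listed above.

```python
from openai import OpenAI

# Sketch of the configuration above. The base URL and the project-ID header
# name are assumptions; copy the exact values from your LangDB project.
client = OpenAI(
    base_url="https://api.us-east-1.langdb.ai/v1",
    api_key="YOUR_LANGDB_API_KEY",
    default_headers={"x-project-id": "YOUR_PROJECT_ID"},  # assumed header name
)

response = client.chat.completions.create(
    model="openrouter/llama-4-maverick:free",
    messages=[{"role": "user", "content": "Hello from the gateway."}],
    # Optional per-request headers for grouping calls into runs and threads.
    extra_headers={
        "X-Run-Id": "run-2025-06-01-001",     # example value
        "X-Thread-Id": "thread-customer-42",  # example value
    },
)
print(response.choices[0].message.content)
```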
Model Parameters (13 available; see the request sketch after this list)
  • frequency_penalty (range -2 to 2, default 0)
  • logit_bias
  • logprobs
  • max_tokens
  • min_p (range 0 to 1, default 0)
  • presence_penalty (range -2 to 1.999, default 0)
  • repetition_penalty (range 0 to 2, default 1)
  • seed
  • stop
  • temperature (range 0 to 2, default 1)
  • top_k
  • top_logprobs
  • top_p (range 0 to 1, default 1)
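
As a rough illustration of how these parameters map onto a request, the sketch below sets several of them. Standard fields (temperature, top_p, frequency_penalty, max_tokens, seed) are passed directly; top_k, min_p, and repetition_penalty are not part of the stock OpenAI schema, so they are forwarded via extra_body on the assumption that the gateway passes them through. Base URL and model identifier remain placeholders, as in the earlier sketches.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.us-east-1.langdb.ai/v1",  # assumed; as configured above
    api_key="YOUR_LANGDB_API_KEY",
)

response = client.chat.completions.create(
    model="openrouter/llama-4-maverick:free",
    messages=[{"role": "user", "content": "Summarize the Llama 4 Maverick architecture."}],
    temperature=0.7,        # range 0 to 2, default 1
    top_p=0.9,              # range 0 to 1, default 1
    frequency_penalty=0.2,  # range -2 to 2, default 0
    max_tokens=512,
    seed=7,
    # Non-standard sampling knobs sent in the request body (assumption:
    # the gateway accepts and forwards these to the provider).
    extra_body={"top_k": 40, "min_p": 0.05, "repetition_penalty": 1.1},
)
print(response.choices[0].message.content)
```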
Additional Configuration
  • Tools (see the sketch below)
  • Guards
  • User metadata: Id, Name, Tags
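
For the Tools option, one plausible shape is sketched below, assuming the unified API accepts the standard OpenAI-style tools array and that the selected model supports tool calls; the tool definition itself is hypothetical. Guards and the user Id/Name/Tags fields are gateway-side settings and are not shown.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.us-east-1.langdb.ai/v1",  # assumed gateway URL
    api_key="YOUR_LANGDB_API_KEY",
)

# Hypothetical tool definition in the standard OpenAI function-calling format.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="openrouter/llama-4-maverick:free",
    messages=[{"role": "user", "content": "What's the weather in Paris right now?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)
```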
Popular Models (10)
  • claude-sonnet-4 (anthropic)
    Our high-performance model with exceptional reasoning and efficiency.
    Input: $3 / 1M tokens · Output: $15 / 1M tokens · Context: 200K tokens
    Supports tools · Input: text, image · Output: text
  • claude-opus-4 (anthropic)
    Our most capable and intelligent model yet. Claude Opus 4 sets new standards in complex reasoning and advanced coding.
    Input: $15 / 1M tokens · Output: $75 / 1M tokens · Context: 200K tokens
    Supports tools · Input: text, image · Output: text
  • gpt-4.1 (openai)
    GPT-4.1 is OpenAI's flagship model for complex tasks. It is well suited for problem solving across domains.
    Input: $2 / 1M tokens · Output: $8 / 1M tokens · Context: 1,047,576 tokens
    Supports tools · Input: text, image · Output: text
  • gemini-2.5-pro-preview (gemini)
    Gemini 2.5 Pro Experimental is Google's state-of-the-art thinking model, capable of reasoning over complex problems in code, math, and STEM, as well as analyzing large datasets, codebases, and documents using long context.
    Input: $1.25 / 1M tokens · Output: $10 / 1M tokens · Context: 1M tokens
    Supports tools · Input: text, image, audio, video · Output: text
  • gemini-2.5-flash-preview (gemini)
    Google's best model in terms of price-performance, offering well-rounded capabilities. Gemini 2.5 Flash rate limits are more restricted since it is an experimental / preview model.
    Input: $0.15 / 1M tokens · Output: $0.6 / 1M tokens · Context: 1M tokens
    Supports tools · Input: text, image, audio, video · Output: text
  • gemini-2.0-flash (gemini)
    Google's most capable multimodal model with great performance across all tasks, a 1 million token context window, and built for the era of Agents.
    Input: $0.1 / 1M tokens · Output: $0.4 / 1M tokens · Context: 1M tokens
    Supports tools · Input: text, image, audio, video · Output: text
  • claude-3.7-sonnet (anthropic)
    Intelligent model with visible step-by-step reasoning.
    Input: $3 / 1M tokens · Output: $15 / 1M tokens · Context: 200K tokens
    Supports tools · Input: text, image · Output: text
  • gemini-2.0-flash-lite (gemini)
    Google's smallest and most cost-effective model, built for at-scale usage.
    Input: $0.07 / 1M tokens · Output: $0.3 / 1M tokens · Context: 1M tokens
    Input: text, image, audio, video · Output: text
  • gpt-4.1-mini (openai)
    GPT-4.1 mini provides a balance between intelligence, speed, and cost that makes it an attractive model for many use cases.
    Input: $0.4 / 1M tokens · Output: $1.6 / 1M tokens · Context: 1,047,576 tokens
    Supports tools · Input: text, image · Output: text
  • gpt-4.1-nano (openai)
    GPT-4.1 nano is the fastest, most cost-effective GPT-4.1 model.
    Input: $0.1 / 1M tokens · Output: $0.4 / 1M tokens · Context: 1,047,576 tokens
    Supports tools · Input: text, image · Output: text

Related AI Model Resources

Explore more AI models, providers, and integration options:

  • Browse All AI Models
  • AI Providers Directory
  • More from openrouter
  • MCP Servers
  • Integration Documentation
  • Pricing & Plans
  • AI Industry Blog