openrouter / gemma-3-27b-it (completions)

Gemma 3 27B is Google's latest open-source model and the successor to [Gemma 2](google/gemma-2-27b-it). Gemma 3 introduces multimodality, supporting vision-language input and text output. It handles context windows up to 128K tokens, understands over 140 languages, and offers improved math, reasoning, and chat capabilities, including structured outputs and function calling.

Input: $0.09 / 1M tokens
Output: $0.17 / 1M tokens
Context: 131,072 tokens
Input modalities: text, image
Output modality: text

Access gemma-3-27b-it through LangDB AI Gateway

Integrate with Google's gemma-3-27b-it and 250+ other models through a unified API. Monitor usage, control costs, and enhance security.

  • Unified API
  • Cost Optimization
  • Enterprise Security
Code Example

Configuration: Base URL, API keys, and headers (Project ID, X-Run-Id, X-Thread-Id).
Model Parameters (11 available)

  • frequency_penalty (min -2, default 0, max 2)
  • max_tokens
  • min_p (min 0, default 0, max 1)
  • presence_penalty (min -2, default 0, max 1.999)
  • repetition_penalty (min 0, default 1, max 2)
  • response_format
  • seed
  • stop
  • temperature (min 0, default 1, max 2)
  • top_k
  • top_p (min 0, default 1, max 1)
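A small sketch that clamps sampling parameters to the ranges listed above before a request is sent. The min/max values are read off the parameter list (which renders collapsed on this page) and may not match the provider's enforced limits exactly; parameters without a listed range pass through unchanged.

```python
# Reconstructed (min, max) bounds from the parameter list above.
RANGES = {
    "frequency_penalty": (-2.0, 2.0),
    "min_p": (0.0, 1.0),
    "presence_penalty": (-2.0, 1.999),
    "repetition_penalty": (0.0, 2.0),
    "temperature": (0.0, 2.0),
    "top_p": (0.0, 1.0),
}

def clamp_params(params: dict) -> dict:
    """Clamp each known sampling parameter into its listed range."""
    out = {}
    for name, value in params.items():
        lo, hi = RANGES.get(name, (float("-inf"), float("inf")))
        out[name] = min(max(value, lo), hi)
    return out
```

Clamping client-side avoids a round-trip rejection when a caller passes an out-of-range value such as `temperature=3.5`.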
Additional Configuration: Tools, Guards
Publicly Shared Threads (5)
  • openrouter
    Python function to find indices of two numbers summing to target in a list with exactly one solution.
    python two sum
    two sum solution
    find indices sum target
    leetcode two sum problem
  • openrouter
    A detailed 3-step weighing strategy to identify the single lighter counterfeit coin among 12 using a balance scale in only three weighings.
    12 coins counterfeit puzzle
    3 weighings balance scale strategy
    identify lighter fake coin
    coin weighing logic puzzle
  • openrouter
    Instruction and complete HTML/CSS code for a fade-effect image slider with three images controlled by radio inputs and styled navigation labels.
    html css image slider
    pure css image carousel
    css image fade transition
    radio button image toggle
  • openrouter
    Solved ages of siblings Emily (16) and Derek (8) using algebra and verified conditions from past age relations.
    age algebra problem
    sibling age determination
    solving age equations
    age word problem verification
  • openrouter
    Solved 3x + 5y = 100 by finding particular solution (200, -100), then general form x=200+5t, y=-100-3t; listed all non-negative integer solutions.
    diophantine equation
    integer solutions
    3x plus 5y equals 100
    linear diophantine methods
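The last shared thread's result can be sketched directly: starting from the general form x = 200 + 5t, y = -100 - 3t, the non-negative integer solutions of 3x + 5y = 100 correspond to t between -40 and -34.

```python
def solutions():
    """Enumerate non-negative integer solutions of 3x + 5y = 100.

    x = 200 + 5t, y = -100 - 3t; x >= 0 requires t >= -40 and
    y >= 0 requires t <= -34, so t ranges over -40 .. -34.
    """
    out = []
    for t in range(-40, -33):  # -40, -39, ..., -34
        x, y = 200 + 5 * t, -100 - 3 * t
        if x >= 0 and y >= 0:
            out.append((x, y))
    return out

# solutions() → [(0, 20), (5, 17), (10, 14), (15, 11), (20, 8), (25, 5), (30, 2)]
```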
Popular Models (10)
  • deepseek
    DeepSeek V3, a 685B-parameter mixture-of-experts model, is the latest iteration of the flagship chat model family from the DeepSeek team. It succeeds the [DeepSeek V3](/deepseek/deepseek-chat-v3) model and performs well across a variety of tasks.
    Input: $0.25 / 1M tokens
    Output: $0.85 / 1M tokens
    Context: 163,840 tokens
    Input modality: text
    Output modality: text
  • deepseek
    May 28th update to the [original DeepSeek R1](/deepseek/deepseek-r1). Performance on par with [OpenAI o1](/openai/o1), but open-sourced and with fully open reasoning tokens. It's 671B parameters in size, with 37B active in an inference pass.
    Input: $0.27 / 1M tokens
    Output: $0.27 / 1M tokens
    Context: 163,840 tokens
    Input modality: text
    Output modality: text
  • gemini
    Highest-intelligence Gemini 1.5 series model, with a breakthrough 2 million token context window.
    Input: $1.25 / 1M tokens
    Output: $5 / 1M tokens
    Context: 2M tokens
    Tools: supported
    Input modalities: text, image, audio, video
    Output modality: text
  • qwen
    Qwen3-235B-A22B-Instruct-2507 is a multilingual, instruction-tuned mixture-of-experts language model based on the Qwen3-235B architecture, with 22B active parameters per forward pass. It is optimized for general-purpose text generation, including instruction following, logical reasoning, math, code, and tool usage. The model supports a native 262K context length and does not implement "thinking mode" (<think> blocks). Compared to its base variant, this version delivers significant gains in knowledge coverage, long-context reasoning, coding benchmarks, and alignment with open-ended tasks. It is particularly strong on multilingual understanding, math reasoning (e.g., AIME, HMMT), and alignment evaluations like Arena-Hard and WritingBench.
    Input: $0.12 / 1M tokens
    Output: $0.12 / 1M tokens
    Context: 262,144 tokens
    Input modality: text
    Output modality: text
  • deepseek
    DeepSeek-Chat is an advanced conversational AI model designed to provide intelligent responses.
    Input: $0.14 / 1M tokens
    Output: $0.28 / 1M tokens
    Context: 64K tokens
    Tools: supported
    Input modality: text
    Output modality: text
  • deepseek
    deepseek-r1
    DeepSeek R1 is here: Performance on par with [OpenAI o1](/openai/o1), but open-sourced and with fully open reasoning tokens. It's 671B parameters in size, with 37B active in an inference pass. Fully open-source model & [technical report](https://api-docs.deepseek.com/news/news250120). MIT licensed: distill & commercialize freely!
    Input: $0.4 / 1M tokens
    Output: $2 / 1M tokens
    Context: 163,840 tokens
    Input modality: text
    Output modality: text
  • anthropic
    Anthropic's high-performance model with exceptional reasoning and efficiency.
    Input: $3 / 1M tokens
    Output: $15 / 1M tokens
    Context: 200K tokens
    Tools: supported
    Input modalities: text, image
    Output modality: text
  • anthropic
    Anthropic's intelligent model with visible step-by-step reasoning.
    Input: $3 / 1M tokens
    Output: $15 / 1M tokens
    Context: 200K tokens
    Tools: supported
    Input modalities: text, image
    Output modality: text
  • openai
    GPT-4o mini (o for omni) is a fast, affordable small model for focused tasks. It accepts both text and image inputs, and produces text outputs (including Structured Outputs). It is ideal for fine-tuning, and outputs from a larger model like GPT-4o can be distilled to GPT-4o mini to produce similar results at lower cost and latency. The knowledge cutoff for GPT-4o mini models is October 2023.
    Input: $0.15 / 1M tokens
    Output: $0.6 / 1M tokens
    Context: 128K tokens
    Tools: supported
    Input modalities: text, image
    Output modality: text
  • openai
    gpt-4o
    High-intelligence flagship model for complex, multi-step tasks. GPT-4o is cheaper and faster than GPT-4 Turbo. It is multimodal (accepting text or image inputs and outputting text), and it has the same high intelligence as GPT-4 Turbo but is much more efficient: it generates text 2x faster and is 50% cheaper. Additionally, GPT-4o has the best vision and non-English-language performance of any OpenAI model. GPT-4o is available in the OpenAI API to paying customers.
    Input: $2.5 / 1M tokens
    Output: $10 / 1M tokens
    Context: 128K tokens
    Tools: supported
    Input modalities: text, image
    Output modality: text