gemini-2.5-flash-lite by OpenRouter - AI Model Details, Pricing, and Performance Metrics
gemini-2.5-flash-lite
Gemini 2.5 Flash-Lite is a lightweight reasoning model in the Gemini 2.5 family, optimized for ultra-low latency and cost efficiency. It offers improved throughput, faster token generation, and better performance across common benchmarks compared to earlier Flash models. By default, "thinking" (i.e. multi-pass reasoning) is disabled to prioritize speed, but developers can enable it via the [Reasoning API parameter](https://openrouter.ai/docs/use-cases/reasoning-tokens) to selectively trade off cost for intelligence.
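The snippet below is a minimal sketch of toggling that speed-versus-intelligence trade-off over OpenRouter's chat completions endpoint. The `google/gemini-2.5-flash-lite` model slug and the exact shape of the `reasoning` object are assumptions based on the linked reasoning-tokens documentation; verify both against the current OpenRouter reference before use.

```python
import os
import requests

# Minimal sketch: call Gemini 2.5 Flash-Lite through OpenRouter with
# "thinking" explicitly enabled via the `reasoning` parameter.
# The model slug and the `reasoning` payload shape are assumptions --
# check the OpenRouter model page and reasoning-tokens docs.
API_KEY = os.environ["OPENROUTER_API_KEY"]

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "google/gemini-2.5-flash-lite",  # assumed slug
        "messages": [{"role": "user", "content": "Summarize the CAP theorem."}],
        # Thinking is off by default for this model; opting in trades
        # latency and cost for quality. Omit this field to keep the fast path.
        "reasoning": {"effort": "low"},
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```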
Access gemini-2.5-flash-lite through LangDB AI Gateway
Integrate with Google's gemini-2.5-flash-lite and 250+ other models through a unified API. Monitor usage, control costs, and enhance security.
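As a rough illustration of the gateway path, assuming LangDB exposes an OpenAI-compatible endpoint: the base URL, project ID, and model identifier below are placeholders, not documented values, and should be taken from your LangDB project settings.

```python
from openai import OpenAI

# Illustrative only: point an OpenAI SDK client at your LangDB gateway URL.
# base_url and the model name are placeholders/assumptions, not official values.
client = OpenAI(
    api_key="YOUR_LANGDB_API_KEY",                        # from the LangDB dashboard
    base_url="https://api.langdb.ai/YOUR_PROJECT_ID/v1",  # placeholder gateway URL
)

completion = client.chat.completions.create(
    model="gemini-2.5-flash-lite",  # model name as exposed by the gateway (assumed)
    messages=[{"role": "user", "content": "Give me three uses for a low-latency LLM."}],
)
print(completion.choices[0].message.content)
```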
Benchmark Tests
| Benchmark | Score |
|---|---|
| AIME | 50.0 |
| AA Coding Index | 19.9 |
| AAII | 30.1 |
| AA Math Index | 35.3 |
| GPQA | 64.6 |
| HLE | 3.7 |
| LiveCodeBench | 40.0 |
| MATH-500 | 92.6 |
| MMLU-Pro | 72.4 |
| MMMU | 72.9 |
| SciCode | 17.7 |
Code Examples
Integration samples and API usage
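One hedged sample, assuming an OpenAI-compatible client pointed at OpenRouter's endpoint; the model slug is taken from the page title and may differ from the exact identifier your provider expects. Streaming is shown because incremental token delivery is the natural fit for a latency-optimized model.

```python
from openai import OpenAI

# Streaming sketch against OpenRouter's OpenAI-compatible endpoint.
# The model slug is an assumption; substitute the one shown on the model page.
client = OpenAI(
    api_key="YOUR_OPENROUTER_API_KEY",
    base_url="https://openrouter.ai/api/v1",
)

stream = client.chat.completions.create(
    model="google/gemini-2.5-flash-lite",  # assumed slug
    messages=[{"role": "user", "content": "Write a haiku about low latency."}],
    stream=True,  # tokens arrive incrementally instead of in one response
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```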