gemini-2.5-flash-lite-preview-09-2025 by openrouter - AI Model Details, Pricing, and Performance Metrics
gemini-2.5-flash-lite-preview-09-2025
Gemini 2.5 Flash-Lite is a lightweight reasoning model in the Gemini 2.5 family, optimized for ultra-low latency and cost efficiency. It offers improved throughput, faster token generation, and better performance across common benchmarks compared to earlier Flash models. By default, "thinking" (i.e. multi-pass reasoning) is disabled to prioritize speed, but developers can enable it via the [Reasoning API parameter](https://openrouter.ai/docs/use-cases/reasoning-tokens) to selectively trade off cost for intelligence.
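Below is a minimal sketch of turning thinking on for this model through OpenRouter's chat completions endpoint. It assumes the `reasoning` request field described in the Reasoning API documentation linked above; the exact schema and accepted values should be checked against the current docs.

```python
import os
import requests

# Sketch: enable "thinking" for Gemini 2.5 Flash-Lite via OpenRouter.
# The `reasoning` field follows OpenRouter's reasoning-tokens docs; verify
# the exact schema against the documentation linked above.
response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "model": "google/gemini-2.5-flash-lite-preview-09-2025",
        "messages": [
            {"role": "user", "content": "Explain the birthday paradox in two sentences."}
        ],
        # Thinking is off by default for this model; requesting it trades
        # extra cost and latency for higher answer quality.
        "reasoning": {"effort": "medium"},
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```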
Access gemini-2.5-flash-lite-preview-09-2025 through LangDB AI Gateway
Integrate with Google's gemini-2.5-flash-lite-preview-09-2025 and 250+ other models through a unified API. Monitor usage, control costs, and enhance security.
Free tier available • No credit card required
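As an illustration of the unified-API idea, the sketch below routes a request to this model through an OpenAI-compatible gateway using the official `openai` Python client. The base URL, credential, and model identifier are placeholders, not LangDB's actual values; consult the LangDB documentation for the real endpoint and model slug.

```python
from openai import OpenAI

# Sketch only: the gateway URL, API key, and model slug are placeholders.
# Any OpenAI-compatible gateway can be targeted by swapping base_url and api_key.
client = OpenAI(
    base_url="https://YOUR_LANGDB_GATEWAY/v1",   # placeholder gateway URL
    api_key="YOUR_LANGDB_API_KEY",               # placeholder credential
)

completion = client.chat.completions.create(
    model="gemini-2.5-flash-lite-preview-09-2025",  # identifier may differ per gateway
    messages=[
        {"role": "user", "content": "Summarize this model's trade-offs in one line."}
    ],
)
print(completion.choices[0].message.content)
```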
Statistics
Category Scores
Benchmark Tests
| Benchmark | Score |
|---|---|
| HLE (Humanity's Last Exam) | 6.6 |
| GPQA | 70.9 |
| SciCode | 28.7 |
| MMLU-Pro | 80.8 |
| LiveCodeBench | 68.8 |
| Artificial Analysis Math Index | 68.7 |
| Artificial Analysis Coding Index | 36.5 |
| Artificial Analysis Intelligence Index (AAII) | 47.9 |
Compare with Similar Models
Code Examples
Integration samples and API usage
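One common integration pattern for a latency-optimized model is streaming tokens as they are generated. The sketch below uses the OpenAI-compatible client against OpenRouter's endpoint; the `provider/model` slug follows OpenRouter's convention and should be verified before use.

```python
import os
from openai import OpenAI

# Streaming sketch against OpenRouter's OpenAI-compatible endpoint; the model
# slug follows OpenRouter's provider/model convention and should be verified.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

stream = client.chat.completions.create(
    model="google/gemini-2.5-flash-lite-preview-09-2025",
    messages=[{"role": "user", "content": "List three uses for a low-latency LLM."}],
    stream=True,  # tokens arrive incrementally, which suits a latency-optimized model
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```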
Related Models
Similar models from OpenRouter