kimi-k2-0905 by openrouter - AI Model Details, Pricing, and Performance Metrics

moonshotai / kimi-k2-0905
completions · by openrouter

Kimi K2 0905 is the September update of [Kimi K2 0711](moonshotai/kimi-k2). It is a large-scale Mixture-of-Experts (MoE) language model developed by Moonshot AI, featuring 1 trillion total parameters with 32 billion active per forward pass. It supports long-context inference up to 256k tokens, extended from the previous 128k. This update improves agentic coding with higher accuracy and better generalization across scaffolds, and enhances frontend coding with more aesthetic and functional outputs for web, 3D, and related tasks. Kimi K2 is optimized for agentic capabilities, including advanced tool use, reasoning, and code synthesis. It excels across coding (LiveCodeBench, SWE-bench), reasoning (ZebraLogic, GPQA), and tool-use (Tau2, AceBench) benchmarks. The model is trained with a novel stack incorporating the MuonClip optimizer for stable large-scale MoE training.
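
For illustration, here is a minimal tool-calling sketch against an OpenAI-compatible chat completions endpoint, reflecting the agentic tool-use capability described above. The base URL, API-key environment variable, and `get_weather` tool are assumptions made for this example, not values documented on this page.

```python
# Hypothetical sketch: tool calling with kimi-k2-0905 via an OpenAI-compatible API.
# The base_url, GATEWAY_API_KEY variable, and get_weather tool are illustrative assumptions.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://your-gateway.example/v1",  # replace with your gateway's OpenAI-compatible endpoint
    api_key=os.environ["GATEWAY_API_KEY"],       # assumed environment variable
)

# A single example tool the model may choose to call.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="moonshotai/kimi-k2-0905",
    messages=[{"role": "user", "content": "What's the weather in Berlin right now?"}],
    tools=tools,
)

# If the model decides to call the tool, the call arrives as structured arguments here.
print(response.choices[0].message.tool_calls)
```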

Released: Sep 5, 2025
Knowledge cutoff: Mar 9, 2025
License: Proprietary
Context: 262,144 tokens
Input: $0.49 / 1M tokens
Output: $1.99 / 1M tokens
Capabilities: tools
Accepts: text
Returns: text
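
As a quick illustration of the pricing above, the sketch below estimates the cost of a single request from token counts. The per-token rates are the ones listed on this page; the token counts are made-up example values.

```python
# Back-of-the-envelope cost estimate for kimi-k2-0905 using the listed rates.
# Token counts below are arbitrary example values, not real usage data.

INPUT_PRICE_PER_M = 0.49   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 1.99  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost of one request in USD."""
    return (input_tokens * INPUT_PRICE_PER_M + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 200k-token prompt (well within the 262,144-token context) with a 2k-token reply.
print(f"${estimate_cost(200_000, 2_000):.4f}")  # -> $0.1020
```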

Access kimi-k2-0905 through LangDB AI Gateway

Recommended

Integrate with moonshotai's kimi-k2-0905 and 250+ other models through a unified API. Monitor usage, control costs, and enhance security.

Unified API
Cost Optimization
Enterprise Security

Free tier available • No credit card required

Instant Setup
99.9% Uptime
10,000+ Monthly Requests
Request Volume (daily API requests): 24
Performance (TPS): 323.79 tokens/s

Category Scores

Benchmark Tests

| Benchmark | Score | Category |
|---|---|---|
| AAII | 50.4 | General |
| AA Math Index | 57.3 | Mathematics |
| GPQA | 76.3 | STEM (Physics, Chemistry, Biology) |
| HLE | 6.3 | General Knowledge |
| HumanEval | 94.5 | Programming |
| LiveCodeBench | 61.0 | Programming |
| MMLU | 90.2 | General Knowledge |
| MMLU-Pro | 82.2 | General Knowledge |
| SciCode | 30.7 | Scientific |

Code Examples

Integration samples and API usage
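
No integration samples were extracted for this page, so the following is a minimal sketch of a plain chat completion through an OpenAI-compatible gateway endpoint. The base URL and `LANGDB_API_KEY` environment variable are placeholders, not documented values; substitute your gateway's actual endpoint and credentials.

```python
# Minimal chat-completion sketch for moonshotai/kimi-k2-0905 via an OpenAI-compatible gateway.
# base_url and LANGDB_API_KEY are placeholders, not documented values.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://your-gateway.example/v1",  # placeholder endpoint
    api_key=os.environ["LANGDB_API_KEY"],        # placeholder environment variable
)

response = client.chat.completions.create(
    model="moonshotai/kimi-k2-0905",
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
    max_tokens=512,
)

print(response.choices[0].message.content)

# Token usage is useful for cost tracking against the per-token rates listed above.
usage = response.usage
print(f"prompt={usage.prompt_tokens} completion={usage.completion_tokens} total={usage.total_tokens}")
```

Printing `response.usage` after each call is a simple way to monitor consumption and reconcile it against the input and output prices shown earlier on this page.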