kimi-k2 by parasail - AI Model Details, Pricing, and Performance Metrics
kimi-k2
Kimi K2 Instruct is a large-scale Mixture-of-Experts (MoE) language model developed by Moonshot AI, featuring 1 trillion total parameters with 32 billion active per forward pass. It is optimized for agentic capabilities, including advanced tool use, reasoning, and code synthesis. Kimi K2 excels across a broad range of benchmarks, particularly in coding (LiveCodeBench, SWE-bench), reasoning (ZebraLogic, GPQA), and tool-use (Tau2, AceBench) tasks. It supports long-context inference up to 128K tokens and is designed with a novel training stack that includes the MuonClip optimizer for stable large-scale MoE training.
| Provider | Input | Output | Cached |
|---|---|---|---|
| - | $0.99 / 1M tokens | $2.99 / 1M tokens | - |
| - | $1 / 1M tokens | $3 / 1M tokens | $0.50 / 1M tokens |
| Provider | Context | Input Price | Cached | Output Price | Input Formats | Output Formats | License |
|---|---|---|---|---|---|---|---|
| - | 131072 tokens | $0.99 / 1M tokens | - | $2.99 / 1M tokens | text | text | Modified MIT License |
| - | 131072 tokens | $1 / 1M tokens | $0.50 / 1M tokens | $3 / 1M tokens | text | text | Proprietary |
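The per-1M-token rates above can be turned into a per-request cost estimate. The sketch below is illustrative, not a provider billing formula: the helper name and the assumption that cached tokens are billed at the cached rate in place of the input rate are ours; defaults use the $1 / $0.50 / $3 tier from the table.

```python
# Hypothetical helper: estimate request cost from the pricing table above.
# Prices are USD per 1M tokens; defaults mirror the $1 / $0.50 / $3 tier.

def estimate_cost(input_tokens, output_tokens, cached_tokens=0,
                  input_price=1.00, output_price=3.00, cached_price=0.50):
    """Return the USD cost of one request at per-1M-token rates.

    Assumes cached input tokens are billed at the cached rate instead
    of the full input rate.
    """
    uncached = input_tokens - cached_tokens
    return (uncached * input_price
            + cached_tokens * cached_price
            + output_tokens * output_price) / 1_000_000

# Example: 10K input tokens (2K of them cached) and 1K output tokens:
# 8000*$1 + 2000*$0.50 + 1000*$3 = $12000 per 1M -> $0.012
cost = estimate_cost(10_000, 1_000, cached_tokens=2_000)
```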
Access kimi-k2 through LangDB AI Gateway
Integrate with moonshotai's kimi-k2 and 250+ other models through a unified API. Monitor usage, control costs, and enhance security.
Benchmark Tests
| Benchmark | Score |
|---|---|
| AA Coding Index | 38.1 |
| AAII | 50.4 |
| AA Math Index | 57.3 |
| GPQA | 76.3 |
| HLE | 6.3 |
| HumanEval | 94.5 |
| LiveCodeBench | 61.0 |
| MMLU | 90.2 |
| MMLU-Pro | 82.2 |
| SciCode | 30.7 |
Compare with Similar Models
Code Examples
Integration samples and API usage
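As a starting point, a gateway like this is typically called through an OpenAI-compatible chat-completions endpoint. The sketch below makes that assumption: the base URL, environment variable names, and model id are placeholders, not documented values — substitute the ones from your provider's dashboard.

```python
# Illustrative only: calling kimi-k2 through an OpenAI-compatible gateway.
# API_BASE, the env var names, and the model id are assumptions.
import json
import os
import urllib.request

API_BASE = os.environ.get("GATEWAY_API_BASE", "https://api.example.com/v1")
API_KEY = os.environ.get("GATEWAY_API_KEY", "")

def build_chat_request(prompt, model="kimi-k2", max_tokens=256):
    """Build an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def chat(prompt):
    """POST the payload and return the first choice's message text."""
    payload = build_chat_request(prompt)
    req = urllib.request.Request(
        f"{API_BASE}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Separating `build_chat_request` from the transport keeps the payload testable without a network call; any OpenAI-compatible client SDK could replace the `urllib` plumbing.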
Related Models
Similar models from parasail