longcat-flash-chat by meituan (via openrouter) - AI Model Details, Pricing, and Performance Metrics

meituan / longcat-flash-chat
Completions model, provided by openrouter

LongCat-Flash-Chat is a large-scale Mixture-of-Experts (MoE) model with 560B total parameters, of which 18.6B–31.3B (≈27B on average) are dynamically activated per token. It introduces a shortcut-connected MoE design that reduces communication overhead and achieves high throughput, while maintaining training stability through scaling strategies such as hyperparameter transfer, deterministic computation, and multi-stage optimization. This release, LongCat-Flash-Chat, is a non-thinking foundation model optimized for conversational and agentic tasks. It supports context windows up to 128K tokens and delivers competitive performance across reasoning, coding, instruction-following, and domain benchmarks, with particular strengths in tool use and complex multi-step interactions.
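The "dynamically activated" parameter count above comes from the MoE routing pattern: a router scores all experts per token and only the top-k actually run. The following toy sketch illustrates that pattern in general; it is not LongCat's actual implementation, and the expert count, top-k value, and all function names are illustrative assumptions.

```python
# Illustrative top-k MoE routing sketch (NOT LongCat's real code):
# a router scores every expert for one token, keeps the top-k,
# and mixes only those experts' outputs. The rest stay inactive.
import math
import random

NUM_EXPERTS = 8   # toy value; real MoE models use far more experts
TOP_K = 2         # experts activated per token in this sketch

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(token_repr, router_weights, experts):
    """Route one token: score experts, activate the top-k, mix their outputs."""
    scores = [sum(w * x for w, x in zip(row, token_repr)) for row in router_weights]
    probs = softmax(scores)
    top = sorted(range(NUM_EXPERTS), key=lambda i: probs[i], reverse=True)[:TOP_K]
    norm = sum(probs[i] for i in top)
    # Only the TOP_K selected expert functions are evaluated for this token.
    mixed = sum(probs[i] / norm * experts[i](token_repr) for i in top)
    return mixed, top

random.seed(0)
dim = 4
router = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(NUM_EXPERTS)]
experts = [lambda x, i=i: sum(x) * (i + 1) for i in range(NUM_EXPERTS)]
out, active = moe_forward([0.1, -0.2, 0.3, 0.4], router, experts)
print(len(active))  # 2 experts activated out of 8
```

Because only k of the experts run per token, compute cost scales with the activated parameters (~27B here) rather than the full 560B.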

Released: Aug 29, 2025
Knowledge cutoff: Mar 2, 2025
License: MIT
Context: 131,072 tokens
Input: $0.15 / 1M tokens
Output: $0.75 / 1M tokens
Accepts: text
Returns: text
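The per-million-token prices above translate to per-request cost with simple arithmetic. A minimal estimator, using only the prices listed on this page (token counts in the example are made up):

```python
# Cost estimator based on the listed prices:
# $0.15 per 1M input tokens, $0.75 per 1M output tokens.
INPUT_PRICE = 0.15 / 1_000_000   # dollars per input token
OUTPUT_PRICE = 0.75 / 1_000_000  # dollars per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated dollar cost for one request."""
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

# Example: 100k input tokens + 20k output tokens
cost = estimate_cost(100_000, 20_000)
print(f"${cost:.4f}")  # $0.0300
```

Note that output tokens cost 5x more per token than input tokens, so long generations dominate the bill.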

Access longcat-flash-chat through LangDB AI Gateway


Integrate with meituan's longcat-flash-chat and 250+ other models through a unified API. Monitor usage, control costs, and enhance security.


Category Scores

Benchmark Tests

DROP: 79.1 (General Knowledge)
GPQA: 73.2 (STEM: Physics, Chemistry, Biology)
MMLU: 89.7 (General Knowledge)
MMLU-Pro: 82.7 (General Knowledge)
HumanEval: 88.4 (Programming)

Code Examples

Integration samples and API usage
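A minimal sketch of calling the model through an OpenAI-compatible chat-completions endpoint, using only the Python standard library. The endpoint URL, the `meituan/longcat-flash-chat` model id, and the `OPENROUTER_API_KEY` environment variable are assumptions not confirmed by this page; check the provider's documentation before use.

```python
# Sketch: build a chat-completions request for longcat-flash-chat against an
# OpenAI-compatible gateway (e.g. OpenRouter). URL, model id, and env var
# name are ASSUMPTIONS, not taken from this page.
import json
import os
import urllib.request

API_URL = "https://openrouter.ai/api/v1/chat/completions"  # assumed endpoint
MODEL_ID = "meituan/longcat-flash-chat"                    # assumed model id

def build_request(prompt: str, max_tokens: int = 256) -> urllib.request.Request:
    """Build (but do not send) a chat-completions POST request."""
    payload = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("Summarize the MoE architecture in one sentence.")
# To actually send it:
#   with urllib.request.urlopen(req) as resp:
#       reply = json.load(resp)["choices"][0]["message"]["content"]
print(json.loads(req.data)["model"])  # meituan/longcat-flash-chat
```

The same payload works with any OpenAI-compatible client library by pointing its base URL at the gateway.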