cogito-v2-preview-llama-109b-moe by openrouter - AI Model Details, Pricing, and Performance Metrics

deepcogito / cogito-v2-preview-llama-109b-moe
completions · by openrouter

An instruction-tuned, hybrid-reasoning Mixture-of-Experts model built on Llama-4-Scout-17B-16E. Cogito v2 can answer directly or engage an extended “thinking” phase, with alignment guided by Iterated Distillation & Amplification (IDA). It targets coding, STEM, instruction following, and general helpfulness, with stronger multilingual, tool-calling, and reasoning performance than size-equivalent baselines. The model supports long-context use (up to 10M tokens) and standard Transformers workflows. Users can control the reasoning behaviour with the `enabled` boolean of the `reasoning` parameter. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config)
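
A minimal sketch of toggling the extended thinking phase from Python, assuming OpenRouter's OpenAI-compatible chat completions endpoint and the `reasoning` request field described in the linked docs; the API key and prompt are placeholders to adapt to your own setup.

```python
from openai import OpenAI

# OpenRouter exposes an OpenAI-compatible API; the key below is a placeholder.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_API_KEY",
)

response = client.chat.completions.create(
    model="deepcogito/cogito-v2-preview-llama-109b-moe",
    messages=[{"role": "user", "content": "Prove that the sum of two even numbers is even."}],
    # Assumed shape of the reasoning toggle, per the OpenRouter reasoning-tokens docs.
    extra_body={"reasoning": {"enabled": True}},
)

print(response.choices[0].message.content)
```

Omitting the `reasoning` field leaves the model in its default direct-answer mode.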

Context: 32,767 tokens
Input: $0.18 / 1M tokens
Output: $0.59 / 1M tokens
Capabilities: tools
Accepts: text, image
Returns: text
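
The capability list above includes tool calling; the sketch below shows one way to pass an OpenAI-style function definition, assuming the same OpenRouter-compatible endpoint. The `get_weather` tool, API key, and prompt are hypothetical illustrations, not part of the model's API.

```python
from openai import OpenAI

client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="YOUR_OPENROUTER_API_KEY")

# Hypothetical tool definition used purely for illustration.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="deepcogito/cogito-v2-preview-llama-109b-moe",
    messages=[{"role": "user", "content": "What's the weather in Lisbon right now?"}],
    tools=tools,
)

# When the model decides to call a tool, the call arrives as structured arguments.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```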

Access cogito-v2-preview-llama-109b-moe through LangDB AI Gateway


Integrate with deepcogito's cogito-v2-preview-llama-109b-moe and 250+ other models through a unified API. Monitor usage, control costs, and enhance security.

Unified API
Cost Optimization
Enterprise Security

Free tier available • No credit card required

Instant Setup
99.9% Uptime
10,000+ Monthly Requests

Code Examples

Integration samples and API usage
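
A hedged integration sketch for routing requests through an OpenAI-compatible gateway such as LangDB; the base URL, API key, and exact model identifier are placeholders, so consult the LangDB documentation for the real endpoint and naming before running it.

```python
from openai import OpenAI

# Placeholder gateway endpoint and key; LangDB's actual base URL and model naming
# may differ, so check the gateway documentation first.
client = OpenAI(
    base_url="https://YOUR_LANGDB_GATEWAY_HOST/v1",
    api_key="YOUR_LANGDB_API_KEY",
)

response = client.chat.completions.create(
    model="deepcogito/cogito-v2-preview-llama-109b-moe",
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
)

print(response.choices[0].message.content)
```

Because the gateway speaks the same chat completions format, the earlier reasoning and tool-calling snippets work unchanged once the base URL and key are swapped in.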