lfm-2-24b-a2b by openrouter - AI Model Details, Pricing, and Performance Metrics

liquid / lfm-2-24b-a2b · completions · by openrouter
LFM2-24B-A2B is the largest model in the LFM2 family of hybrid architectures designed for efficient on-device deployment. Built as a 24B parameter Mixture-of-Experts model with only 2B active parameters per token, it delivers high-quality generation while maintaining low inference costs. The model fits within 32 GB of RAM, making it practical to run on consumer laptops and desktops without sacrificing capability.
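The 32 GB figure is consistent with simple back-of-the-envelope arithmetic: at roughly one byte per weight (8-bit quantization, an assumption — the page does not state the deployment format), 24B parameters need about 24 GB, leaving headroom for the KV cache and runtime. A minimal sketch:

```python
# Back-of-the-envelope memory estimate for a Mixture-of-Experts model.
# The bytes-per-weight value is an assumption (8-bit quantization);
# the actual serving format of LFM2-24B-A2B is not stated on this page.
def model_memory_gb(n_params: float, bytes_per_weight: float) -> float:
    """Approximate weight-storage footprint in gigabytes."""
    return n_params * bytes_per_weight / 1e9

total_gb = model_memory_gb(24e9, 1.0)   # all 24B weights resident: ~24 GB
active_gb = model_memory_gb(2e9, 1.0)   # ~2B weights touched per token: ~2 GB
```

All 24B weights must stay in memory, but only the ~2B active parameters are read per token, which is what keeps inference cost low despite the large total size.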

Context: 32,768 tokens
Input: $0.03 / 1M tokens
Output: $0.12 / 1M tokens
Accepts: text
Returns: text

Access lfm-2-24b-a2b through LangDB AI Gateway

Integrate with liquid's lfm-2-24b-a2b and 250+ other models through a unified API. Monitor usage, control costs, and enhance security.

Unified API
Cost Optimization
Enterprise Security
Free tier available • No credit card required

Instant Setup
99.9% Uptime
10,000+ Monthly Requests

Code Examples

Integration samples and API usage
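A minimal sketch of calling lfm-2-24b-a2b through an OpenAI-compatible chat-completions endpoint such as the LangDB AI Gateway. The base URL, model identifier, and environment-variable name below are illustrative assumptions, not confirmed values; check the gateway's own documentation for the exact endpoint and model slug.

```python
# Hypothetical sketch: OpenAI-style chat completion against a gateway.
# BASE_URL, the model id, and LANGDB_API_KEY are assumptions for
# illustration only; substitute the values from your provider's docs.
import json
import os
import urllib.request

BASE_URL = "https://api.langdb.ai/v1"  # assumed gateway base URL

def build_request(prompt: str, model: str = "lfm-2-24b-a2b") -> dict:
    """Build an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }

def chat(prompt: str) -> str:
    """Send the payload and return the assistant's reply text."""
    payload = build_request(prompt)
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['LANGDB_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat("Summarize LFM2 in one sentence."))
```

Because the endpoint is OpenAI-compatible, the same payload works unchanged with the official `openai` Python client by pointing its `base_url` at the gateway.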