lfm-2-24b-a2b by openrouter - AI Model Details, Pricing, and Performance Metrics
LFM2-24B-A2B is the largest model in the LFM2 family of hybrid architectures designed for efficient on-device deployment. Built as a 24B parameter Mixture-of-Experts model with only 2B active parameters per token, it delivers high-quality generation while maintaining low inference costs. The model fits within 32 GB of RAM, making it practical to run on consumer laptops and desktops without sacrificing capability.
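A rough back-of-the-envelope sketch of why the model fits in 32 GB of RAM: weight memory scales with the 24B total parameter count, not the 2B active parameters, so the precision of the stored weights is what matters. The byte-per-parameter figures below are standard quantization levels, not a claim about how this model is actually distributed.

```python
# Rough weight-memory estimate for a 24B-parameter model at common
# quantization levels. The 24B figure comes from the model description;
# activations and KV cache add overhead on top of these numbers.
TOTAL_PARAMS = 24e9

def weight_memory_gb(bits_per_param: float) -> float:
    """Approximate weight memory in GB (1 GB = 1e9 bytes)."""
    return TOTAL_PARAMS * bits_per_param / 8 / 1e9

for bits, label in [(16, "fp16/bf16"), (8, "int8"), (4, "int4")]:
    print(f"{label}: ~{weight_memory_gb(bits):.0f} GB")
```

At 16-bit precision the weights alone (~48 GB) would exceed 32 GB, while 8-bit (~24 GB) or 4-bit (~12 GB) quantization leaves headroom for the KV cache and the rest of the system.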
Access lfm-2-24b-a2b through LangDB AI Gateway
Integrate with Liquid's lfm-2-24b-a2b and 250+ other models through a unified API. Monitor usage, control costs, and enhance security.
Code Examples
Integration samples and API usage
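A minimal sketch of calling the model through an OpenAI-compatible chat-completions endpoint, using only the standard library. The base URL and the model slug (`liquid/lfm-2-24b-a2b`) are assumptions here; check the gateway's model listing for the exact identifier before use.

```python
import json
import os
import urllib.request

# Assumed endpoint and model slug -- verify against the gateway's docs.
BASE_URL = "https://openrouter.ai/api/v1"
MODEL = "liquid/lfm-2-24b-a2b"

def build_request(prompt: str, model: str = MODEL) -> dict:
    """Assemble an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def complete(prompt: str, api_key: str) -> str:
    """Send the request and return the first choice's message text."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_request(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    key = os.environ.get("OPENROUTER_API_KEY", "")
    if key:
        print(complete("Say hello in one sentence.", key))
```

The same payload shape works with any OpenAI-compatible SDK; only the base URL, API key, and model slug change between gateways.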