We’ve redefined what’s possible with our new AI architecture designed for efficiency, speed, and real-world deployment.
The smallest foundation model, ideal for embedded devices. Fine-tunable for specific, narrow use cases.
Vision, voice and predictive AI
Fine-tuning support
A mid-sized model built for smartphone-level AI, balancing power and efficiency. Supports fine-tuning and edge deployment on CPUs.
Vision, voice and predictive AI
Multilingual
Fine-tuning support
A high-performance model, optimized for laptops and on-premise AI, with best-in-class multilingual chat capabilities. Fine-tunable and GPU-ready.
Vision, voice and predictive AI
Multilingual
Fine-tuning support
A powerful enterprise-scale model, built for zero-shot accuracy across cloud and on-prem GPU deployments. Designed for high-throughput AI tasks, but not meant for fine-tuning or CPU deployment.
Vision, voice and predictive AI
Multilingual
At a glance: the smallest model is ideal for embedded devices, the mid-sized model for smartphones, the high-performance model for laptops, and the enterprise-scale model for cloud and on-prem GPU deployments.
Can I license or purchase LFMs?
Yes. Get in touch with our team to license or purchase LFMs from our library of best-in-class models.
LFMs also come with two software stacks for deployment and customization: 1) LFM inference stack and 2) LFM customization stack. We currently prioritize working with clients on enabling edge and on-prem use cases. Connect with our team to learn more about our business model.
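To make the deployment side concrete, here is a minimal sketch of what CPU inference with a small LFM could look like, assuming a standard Hugging Face Transformers-style interface. The model id is a placeholder, not a published checkpoint, and the actual LFM inference stack may expose a different API.

```python
# Hypothetical sketch: CPU inference with a small LFM via Hugging Face
# Transformers. "liquid/lfm-small" is a placeholder id, not a real
# checkpoint; substitute the model you have licensed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "liquid/lfm-small"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # loads on CPU by default

prompt = "Summarize the day's sensor readings in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```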
Can LFMs be fine-tuned on-premise?
Yes. We have built an on-prem LFM customization stack, available for enterprises to purchase. LFMs can be rapidly fine-tuned, specialized, and optimized for local, private, safety-critical, and latency-bound enterprise use cases, all within the security of your firewall.
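As an illustration of what lightweight on-prem customization can look like, here is a hedged sketch using low-rank adapters (LoRA) via the open-source peft library. The model id, target module names, and training example are placeholders, and the LFM customization stack itself may expose a different interface.

```python
# Hypothetical sketch: on-prem LoRA fine-tuning. All identifiers below
# ("liquid/lfm-medium", the q_proj/v_proj module names) are placeholders.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "liquid/lfm-medium"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Train only small low-rank adapter matrices, so base weights and
# training data never leave the enterprise firewall.
config = LoraConfig(r=8, lora_alpha=16,
                    target_modules=["q_proj", "v_proj"],
                    task_type="CAUSAL_LM")
model = get_peft_model(model, config)
model.print_trainable_parameters()

# One illustrative gradient step on a single in-house example.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
batch = tokenizer("Ticket: reset the VPN token for user 4312.",
                  return_tensors="pt")
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()
```

Adapter-style tuning is only one possible approach; full fine-tuning or other parameter-efficient methods would follow the same pattern of keeping data and checkpoints on-premise.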
Read more about our research on various aspects of LFMs here, and follow us on X and LinkedIn.