
Research has been core to Liquid AI from the start. Today, we’re giving that work a formal name: Liquid Labs, our team driving fundamental breakthroughs in the science of building intelligent, personalized, and adaptive machines.
Founding Research
Our origins trace back to MIT CSAIL, where the foundational work on Liquid Neural Networks established a new class of dynamical, efficient sequence processing architectures. That research became the basis for Liquid Foundation Models, designed for real-world, resource-constrained environments.
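To make the "dynamical" framing concrete, here is a minimal, illustrative sketch of a liquid time-constant (LTC) cell in the spirit of the original Liquid Neural Networks research: the hidden state evolves under an ODE whose time constant depends on the input and state. The parameter names and the simple Euler update below are our own simplifications for exposition, not Liquid AI's production parameterization.

```python
import numpy as np

def ltc_step(x, u, W, U, b, tau, A, dt=0.1):
    """One explicit Euler step of a simplified liquid time-constant cell.

    Approximates dx/dt = -[1/tau + f(x, u)] * x + f(x, u) * A,
    where f is a small learned nonlinearity. Names (W, U, b, tau, A)
    are illustrative placeholders.
    """
    f = np.tanh(W @ x + U @ u + b)       # input- and state-dependent gate
    dxdt = -(1.0 / tau + f) * x + f * A  # state-dependent effective time constant
    return x + dt * dxdt                 # Euler update of the hidden state

# Toy usage: drive a 4-unit cell with a random input sequence.
rng = np.random.default_rng(0)
n_hidden, n_in = 4, 3
W = rng.normal(scale=0.5, size=(n_hidden, n_hidden))
U = rng.normal(scale=0.5, size=(n_hidden, n_in))
b = np.zeros(n_hidden)
tau = np.ones(n_hidden)  # base time constants
A = np.ones(n_hidden)    # bias term from the LTC formulation
x = np.zeros(n_hidden)
for u in rng.normal(size=(10, n_in)):
    x = ltc_step(x, u, W, U, b, tau, A)
print(x)
```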
Liquid Foundation Models (LFMs) represent a new generation of high-performance, efficient, multimodal foundation models for edge, enterprise, and scientific applications. LFMs are built on first-principles scientific innovation that bridges dynamical systems, efficient sequence-processing mechanisms, and hardware-aware co-design, enabling high-performance capabilities and computational efficiency through:
- Efficient scaling laws
- Hardware-adaptive designs
- Transparent and analyzable behavior
Ultimately, we are committed to pioneering a new frontier of personalized multimodal intelligence across any device, on the edge or in the cloud.
Our Commitment to Open Science
Liquid Labs reinforces our belief that progress in AI should be transparent, reproducible, and shared. We will continue to push the frontier of AI and share our findings through technical reports, architectural deep dives, ablations, evaluations, and model weights that advance the field of efficient AI.
What Liquid Labs Will Pursue
Liquid Labs will advance the frontier of intelligent, personalized, and adaptive machines. We are tackling this grand challenge through foundational research and innovation in deep learning across architecture and model design, training objectives, data generation, algorithmic advances, and inference optimization.
If this work interests you, apply to join our team.
Our Latest Work: The LFM2 Technical Report
Earlier this week, we released our technical report on LFM2, detailing the hybrid architecture, training strategies, and benchmark evaluations behind our latest LFM2 models, which are state-of-the-art in efficiency and speed.