At Liquid AI, we believe that scientific progress accelerates when knowledge is shared openly. Today, we're pleased to release the full technical report for LFM2, our second generation of Liquid Foundation Models.

Why we share: Building efficient, capable AI models for edge devices requires solving hard problems, from architecture design to training methodologies to multimodal integration. Rather than keeping these lessons proprietary, we've documented our work in detail.

The LFM2 technical report covers:

  • Hardware-in-the-loop architecture search
  • Novel training objectives for knowledge distillation
  • Post-training recipes for small models
  • Vision-language capabilities (LFM2-VL)
  • Speech processing (LFM2-Audio)
  • Information retrieval (LFM2-ColBERT)

Open Weights = Open Knowledge: Alongside the technical report, all LFM2 models are released with open weights on Hugging Face, complete with deployment guides for ExecuTorch, llama.cpp, and vLLM.

We hope this work serves as a useful resource for researchers, engineers, and practitioners building the next generation of AI systems.
