The fastest multimodal models. Deployable anywhere.

Purpose-built for speed, capability and efficiency. Our multimodal, hybrid models run anywhere you need them and shine at agentic tasks, instruction following, data extraction, and RAG.


Text Models

LFMs deliver powerful performance in a lightweight, customizable, and compute-efficient footprint for deployment in any environment.

LFM2-350M

LFM2-700M

LFM2-1.2B

LFM2-8B-A1B
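To see one of these text models in action, here is a minimal sketch that loads a checkpoint with Hugging Face transformers. The repo id mirrors the model names above but is an assumption, as are the prompt and generation settings; it is a starting point, not an official recipe.

```python
# Minimal sketch: run a small LFM2 text model with Hugging Face transformers.
# The repo id and generation settings are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-350M"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build a chat-style prompt with the tokenizer's chat template.
messages = [{"role": "user", "content": "Extract every date from: 'We meet May 4, then again May 11.'"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

output_ids = model.generate(input_ids, max_new_tokens=64)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```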


Vision-Language Models

Multimodal models that accept vision and text inputs and generate text, designed for low latency and device-aware deployment.

LFM2-VL-450M

LFM2-VL-1.6B

LFM2-VL-3B
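A minimal vision-language sketch follows, using the standard transformers image-text-to-text interface. The repo id, image URL, and prompt are assumptions for illustration, not official examples.

```python
# Minimal sketch: image + text inference with an LFM2-VL checkpoint.
# Repo id, image URL, and prompt are illustrative assumptions.
import requests
from PIL import Image
from transformers import AutoModelForImageTextToText, AutoProcessor

model_id = "LiquidAI/LFM2-VL-450M"  # assumed Hugging Face repo id
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id)

# Load an example image (placeholder URL).
image = Image.open(requests.get("https://example.com/receipt.png", stream=True).raw)

messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "List the line items on this receipt."},
    ],
}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(images=image, text=prompt, return_tensors="pt")

output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```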


Audio Model

End-to-end foundation model for audio and text generation. Designed for low latency, it enables responsive, high-quality conversations with only 1.5 billion parameters.

LFM2-Audio-1.5B


Nano Models

Tiny models customized for specific tasks and knowledge domains.

Extract

Tool

Math

ColBERT

RAG

Japanese PII Extract

Start building with LFMs today

LFMs are rapidly customizable to deliver powerful performance for your unique use cases, devices and data. Talk to sales to see how Liquid can build solutions for you, or leverage our self-service tools to customize and deploy.

Try LEAP: Our developer-first platform creates a single workflow for customization and deployment across any operating system. 

View Docs: Get started building and customizing LFMs with cookbooks, tutorials and more.

Download Models: Browse, download and build with our collections of models from Hugging Face.

Join the Discord

State-of-the-art performance at every scale.

| Benchmark      | LFM2-350M | LFM2-700M | LFM2-1.2B |
|----------------|-----------|-----------|-----------|
| MMLU (5-shot)  | 43.43     | 49.9      | 55.23     |
| GPQA (0-shot)  | 27.46     | 28.48     | 31.47     |
| IFEval         | 65.12     | 72.23     | 74.89     |
| IFBench        | 16.41     | 20.56     | 20.7      |
| GSM8K (0-shot) | 30.1      | 46.4      | 58.3      |
| MGSM (5-shot)  | 29.52     | 45.36     | 55.04     |
| MMMLU (5-shot) | 37.99     | 43.28     | 46.73     |
*We evaluated LFM2 across seven popular benchmarks covering knowledge (5-shot MMLU, 0-shot GPQA), instruction following (IFEval, IFBench), mathematics (0-shot GSM8K, 5-shot MGSM), and multilingualism (5-shot OpenAI MMMLU and, again, 5-shot MGSM) across seven languages (Arabic, French, German, Spanish, Japanese, Korean, and Chinese).

Learn how we designed, built and trained our LFM2s

Learn more about how we designed and trained our LFM2s, including our hardware-in-the-loop architecture design, pre-training, knowledge distillation, and post-training recipe.

Read the full LFM2 Technical Report

Built for efficient inference everywhere you need it.

Unmatched speed, quality, and memory efficiency on the edge or in the cloud.

Whether deploying on smartphones, laptops, vehicles, or any other device, LFMs run efficiently on CPU, GPU, and NPU hardware. Designed for millisecond latency, on-device resilience, and data privacy, LFMs unlock the full potential of local, cloud and hybrid AI across industries.
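For a CPU-only on-device run, one common route is a quantized GGUF build served through llama-cpp-python. The sketch below assumes such a build has already been downloaded locally; the file path, context size, and thread count are placeholders, not official settings.

```python
# Minimal sketch: CPU-only on-device inference via llama-cpp-python.
# Assumes a quantized GGUF export of an LFM2 model is available locally;
# the path, context size, and thread count are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="./lfm2-1.2b-q4_k_m.gguf",  # placeholder path to a GGUF build
    n_ctx=4096,    # context window to allocate
    n_threads=4,   # match the device's performance cores
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize: LFMs run on CPU, GPU, and NPU."}],
    max_tokens=64,
)
print(result["choices"][0]["message"]["content"])
```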


Fig. 1. Prefill performance on CPU in ExecuTorch
Fig. 2. Prefill performance on CPU in ExecuTorch

Achieve peak performance by finetuning LFMs for your use case.

The power of model customization directly in your hands.

LFMs are designed for rapid customization, achieving peak performance on specific use cases at a footprint small enough to run locally on your chosen hardware.
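As one concrete route to that customization, a parameter-efficient LoRA finetune with the peft library is a common starting point. The sketch below is a generic recipe under assumed hyperparameters, not Liquid's official pipeline; the repo id, dataset file, and adapter settings are placeholders.

```python
# Minimal sketch: LoRA finetuning with peft + transformers.
# Generic parameter-efficient recipe; repo id, dataset, and
# hyperparameters are illustrative assumptions.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

model_id = "LiquidAI/LFM2-350M"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Wrap the base model with low-rank adapters.
lora = LoraConfig(r=8, lora_alpha=16, target_modules="all-linear", task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Placeholder dataset: one JSON object per line with a "text" field.
data = load_dataset("json", data_files="train.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lfm2-lora", per_device_train_batch_size=2,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lfm2-lora")  # saves only the small adapter weights
```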

Our full-stack solution includes architecture, optimization and deployment engines to accelerate the path from prototype to product.

Latest Liquid Foundation Models

Frequently asked questions.

As an enterprise, can we purchase full local access to LFMs?
Can we finetune LFMs?
Where can I learn more about Liquid Foundation Models?
Get started with Liquid AI

Empower your business, workflows, and engineers with Liquid AI.
