A model for every scale, every deployment.

Not sure which model is right for you? We're here to help.

Contact us

Scalable, high-performance AI across all environments.

| Model | LFM-1.3B | LFM-3B | LFM-7B | LFM-40B |
| --- | --- | --- | --- | --- |
| Purpose | Ideal for embedded devices. The smallest language foundation model, designed to be fine-tuned for narrow use cases. | Ideal for smartphones. A mid-sized language foundation model that can be fine-tuned for narrow use cases. | Ideal for laptops. Our largest language foundation model designed to be fine-tuned for narrow use cases. | Ideal for targeting GPUs in the cloud and on-prem. A powerful language foundation model built to deliver top-tier zero-shot accuracy and performance. |
| Fine-tuning CLI support | Yes | Yes | Yes | Not meant to be fine-tuned |
| Edge deployment stack | CPU | CPU | CPU | Not meant to be deployed on CPU |
| On-prem deployment stack | GPU | GPU | GPU | GPU |
| Languages in base model | English | English, Spanish, German, French, Arabic, Japanese, Korean, Chinese | English, Spanish, German, French, Arabic, Japanese, Korean, Chinese, Brazilian Portuguese | English, Spanish, German, French, Arabic, Japanese, Korean, Chinese, Brazilian Portuguese |
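As a rough sanity check on which model fits a target device, you can estimate weight memory from parameter count and numeric precision. The sketch below is an illustration under stated assumptions, not Liquid's sizing guidance: parameter counts are read off the model names, and KV cache and runtime overhead are ignored.

```python
# Back-of-the-envelope weight-memory estimate: parameters * bytes per weight.
# Illustrative only; ignores KV cache, activations, and runtime overhead.

MODELS = {  # total parameters, taken from the model names
    "LFM-1.3B": 1.3e9,
    "LFM-3B": 3.0e9,
    "LFM-7B": 7.0e9,
    "LFM-40B": 40.0e9,  # MoE with ~12B active parameters per token (see Fig. 2)
}

PRECISIONS = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}  # bytes per weight

def weight_memory_gib(params: float, bytes_per_weight: float) -> float:
    """Approximate memory needed just to hold the weights, in GiB."""
    return params * bytes_per_weight / 2**30

for name, params in MODELS.items():
    row = ", ".join(
        f"{prec}: {weight_memory_gib(params, b):.1f} GiB"
        for prec, b in PRECISIONS.items()
    )
    print(f"{name:9s} -> {row}")
```

By this estimate, LFM-1.3B needs under 1 GiB for weights at int4, consistent with its embedded-device target, while LFM-40B remains a GPU-class model even when quantized.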

Models that excel across size categories, unlocking new use cases.

Benchmarks

Our new generation of generative AI models achieves state-of-the-art performance at every scale.

| Benchmark | LFM-1.3B | LFM-3B | LFM-7B | LFM-40B |
| --- | --- | --- | --- | --- |
| Context length (tokens) | 32k | 32k | 32k | 32k |
| MMLU (5-shot) | 58.55 | 66.16 | 69.34 | 78.76 |
| MMLU-Pro (5-shot) | 30.65 | 38.41 | 42.42 | 55.63 |
| HellaSwag (10-shot) | 67.28 | 71.31 | 83.07 | 82.07 |
| ARC-C (25-shot) | 54.95 | 57.94 | 70.56 | 67.24 |
| GSM8K (5-shot) | 55.34 | 44.28 | 76.04 | — |
Fig. 1. Performance of LLMs across automated benchmarks.
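For context on what "5-shot" means in these rows: the model is shown five worked question–answer pairs before each test question, and accuracy is scored on its answer to the held-out question. Below is a minimal sketch of that prompt construction, with hypothetical toy data and a placeholder `generate` function; it is not Liquid's evaluation harness.

```python
# Minimal k-shot evaluation sketch (hypothetical data and model call,
# not Liquid's actual harness). Each test question is preceded by k
# solved examples; accuracy is the fraction of exact-match answers.

from typing import Callable

FEWSHOT = [  # hypothetical worked examples shown before each question
    ("What is 2 + 2?", "4"),
    ("What is 10 - 3?", "7"),
]

TESTS = [  # hypothetical held-out items
    ("What is 5 + 6?", "11"),
    ("What is 9 - 4?", "5"),
]

def build_prompt(shots, question: str) -> str:
    """Format the few-shot examples followed by the unanswered question."""
    parts = [f"Q: {q}\nA: {a}" for q, a in shots]
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

def evaluate(generate: Callable[[str], str]) -> float:
    """Exact-match accuracy of `generate` over the test set."""
    correct = 0
    for question, gold in TESTS:
        prompt = build_prompt(FEWSHOT, question)
        if generate(prompt).strip() == gold:
            correct += 1
    return correct / len(TESTS)

if __name__ == "__main__":
    # Stub "model" for demonstration: always answers "11" -> scores 0.5 here.
    print(evaluate(lambda prompt: "11"))
```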
Performance

The best performance-to-size tradeoff across all categories.

Fig. 2. LFMs offer a new best performance/size tradeoff in the 1B, 3B, and 12B (active parameters) categories.
Memory efficiency

Liquid models are over 3x more memory-efficient than models from leading providers as input length grows.

Fig. 3. Total inference memory footprint of different language models vs. input + generation length.
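For intuition on why the footprint in Fig. 3 grows with length at all: in a standard transformer decoder, the dominant length-dependent term is the KV cache, which grows linearly with input + generation length. The sketch below computes that baseline cost for a generic ~7B-class transformer with hypothetical layer and head dimensions; these are not LFM internals, and LFMs are precisely the models claimed to keep this curve flatter.

```python
# KV-cache size for a standard transformer decoder:
#   2 (K and V) * layers * kv_heads * head_dim * seq_len * bytes per element.
# Hypothetical dimensions for a generic ~7B-class model, not LFM internals.

def kv_cache_gib(seq_len: int,
                 layers: int = 32,
                 kv_heads: int = 32,
                 head_dim: int = 128,
                 bytes_per_element: int = 2) -> float:  # fp16
    """Memory held in the KV cache at a given sequence length, in GiB."""
    return (2 * layers * kv_heads * head_dim
            * seq_len * bytes_per_element) / 2**30

for seq_len in (1_024, 8_192, 32_768):
    print(f"{seq_len:>6} tokens -> {kv_cache_gib(seq_len):.2f} GiB")
```

In this configuration the cache alone reaches 16 GiB at 32k tokens, dwarfing the weights of a small model; that linear growth is the curve Fig. 3 plots against.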

Frequently asked questions.

As an enterprise, can we purchase full local access to LFMs?
Can we fine-tune LFMs?
Where can I learn more about Liquid Foundation Models?
Ready to experience AI?

Power your business, workflows, and engineers with Liquid AI
