Why Liquid

AI that works where you need it, how you need it.

Private, accessible, scalable

Our models operate without cloud dependence, keeping your data safe.

Built for the edge

Optimized for compute-constrained environments with low memory usage and costs.

Real-time performance, no cloud latency

Instant processing of multimodal inputs, with no cloud round trips.

Our solutions

Optimized for low-power, high-efficiency, CPU-only deployment.

Performance

Best-in-class performance across all modalities.

Liquid Small Language Models outperform leading SLMs, even those that are slightly larger.

Explore our models

Head-to-head evaluation of chat capabilities in English* for LFM-1B, LFM-3B, and LFM-7B.

*Based on a collection of 1,000 real conversations

Purpose-built

Built for efficient inference at the edge.

Our models are memory-optimized and tuned for real-time deployment on constrained hardware.

Request a demo
Fig. 1. Total inference memory footprint of different language models vs. input+generation length.
Fig. 2. Prefill performance on CPU (AMD HX370) in ExecuTorch.
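
As a rough illustration of why total footprint grows with input+generation length (Fig. 1): for a conventional transformer, inference memory is roughly the weights plus a KV cache that scales linearly with context. The sketch below is a back-of-the-envelope model only; every dimension is a hypothetical placeholder, not a published LFM spec.

```python
# Back-of-the-envelope inference memory: weights + KV cache.
# All model dimensions are hypothetical placeholders, not LFM specs.

def inference_memory_gb(n_params_b, n_layers, n_kv_heads, head_dim,
                        seq_len, bytes_per_weight=2, bytes_per_kv=2):
    """Estimate total inference memory (GB) at a given context length."""
    weights = n_params_b * 1e9 * bytes_per_weight  # static cost of the model
    # Keys and values cached for every layer grow with sequence length.
    kv_cache = 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_kv
    return (weights + kv_cache) / 1e9

# Hypothetical 1B-parameter fp16 model at increasing context lengths:
for seq_len in (2_048, 8_192, 32_768):
    gb = inference_memory_gb(n_params_b=1.0, n_layers=16,
                             n_kv_heads=8, head_dim=64, seq_len=seq_len)
    print(f"{seq_len:>6} tokens -> {gb:.2f} GB")
```
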
Efficient

Ultra-low Time to First Token.

Liquid models are engineered for ultra-low Time to First Token (TTFT) and high throughput (tokens/sec) across all modalities.

Request a demo
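
For context on how figures like the ones below are typically measured: TTFT is the wall-clock time from request to the first emitted token, and throughput is tokens per second during decode. A minimal, backend-agnostic harness, using a dummy stream rather than any Liquid runtime:

```python
import time
from typing import Iterable, Tuple

def measure_ttft_and_throughput(token_stream: Iterable[str]) -> Tuple[float, float]:
    """Return (TTFT in seconds, decode throughput in tok/s)
    for any streaming token generator."""
    start = time.perf_counter()
    first = None
    count = 0
    for _ in token_stream:
        count += 1
        if first is None:
            first = time.perf_counter() - start  # time to first token
    total = time.perf_counter() - start
    tok_per_sec = (count - 1) / (total - first) if count > 1 else 0.0
    return first, tok_per_sec

# Dummy stream standing in for a real on-device model:
def fake_stream(n=32, delay=0.01):
    for i in range(n):
        time.sleep(delay)
        yield f"tok{i}"

ttft, tps = measure_ttft_and_throughput(fake_stream())
print(f"TTFT: {ttft * 1000:.0f} ms, throughput: {tps:.0f} tok/s")
```
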
On mid-range mobile

< 1 s

Time to First Token (TTFT)

On high-end mobile SoCs

< 110 ms

Time to First Token (TTFT)

Co-development

We help your team build the ideal solution.

We manage the complete model lifecycle, so your team can focus on strategic objectives rather than operational complexity.

Data generation and curation

Large-scale synthetic, labeled, or multimodal data tailored to your use case.

Model training and evaluation

Rapid development and rigorous validation aligned to your requirements.

Optimization for production

Models tuned for your specific hardware: CPUs, GPUs, automotive chips, mobile, edge.

Collaborative partnership

Full transparency to extend or fine-tune models, e.g., through RAG or custom integration (see the sketch after this list).

Proven track record

Successful deliveries to global enterprises, with measurable improvements.
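
For readers unfamiliar with RAG (retrieval-augmented generation, mentioned in the partnership item above): the model's prompt is augmented with documents retrieved by similarity search. A minimal, self-contained sketch; the toy embed() is a hypothetical stand-in for a real embedding model, and nothing here is Liquid-specific.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy hashing embedder so the sketch runs without model weights."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

docs = ["LFMs run on CPUs.",
        "The SDK supports on-device deployment.",
        "Fine-tuning can be done on-prem."]
doc_vecs = np.stack([embed(d) for d in docs])

def retrieve(query: str, k: int = 2) -> list:
    """Return the k docs most similar to the query (cosine on unit vectors)."""
    scores = doc_vecs @ embed(query)
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

query = "Where can the models run?"
context = "\n".join(retrieve(query))
prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
print(prompt)  # hand this prompt to the deployed model
```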

INTRODUCING LEAP

Discover Liquid’s Edge AI Platform

LEAP is our enterprise-grade platform designed for edge-native AI delivery. Deploy, distill, fine-tune, and run models directly on-prem, on-device, or at the edge — all through a fully integrated SDK. Built for speed, control, and end-to-end security.

Fine-tuning

Fine-tune Liquid Models on your own terms, from your own premises.

The power of model customization, directly in your hands.

Whether you're optimizing for domain accuracy, response style, or hardware footprint, FT CLI gives your team direct access to the internals of Liquid’s small, fast models — from prompt adapters to fully custom fine-tunes.
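
FT CLI's own commands are not reproduced here. As a generic illustration of adapter-style fine-tuning (the "prompt adapters to fully custom fine-tunes" spectrum above), the sketch below uses Hugging Face peft with LoRA; the model id and target module names are placeholders, assuming a standard transformers checkpoint.

```python
# Generic LoRA fine-tuning sketch with Hugging Face peft.
# Placeholder model id and module names; not Liquid's FT CLI.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "your-org/your-small-model"  # hypothetical placeholder
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the small adapter trains

# ...train with your usual loop or transformers.Trainer, then save just
# the adapter (a few MB) rather than the full model:
model.save_pretrained("adapter/")
```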

Deployment

AI that fits your hardware and your business.

Run real-time models on-device. No cloud, no delay, no GPU needed.

Our adaptive inference engines are optimized for performance, memory efficiency, and minimal latency across all environments.
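
As one common pattern for CPU-only, on-device inference (an illustration of the deployment model, not Liquid's own engine), the sketch below streams tokens from a quantized model via llama-cpp-python; the GGUF path is a placeholder.

```python
# CPU-only streaming inference with llama-cpp-python.
# model_path is a placeholder; no GPU or cloud connection is required.
from llama_cpp import Llama

llm = Llama(model_path="model-q4_k_m.gguf",  # placeholder quantized file
            n_ctx=4096,
            n_threads=8)  # tune to the device's cores

for chunk in llm("Summarize edge AI in one sentence.",
                 max_tokens=64, stream=True):
    print(chunk["choices"][0]["text"], end="", flush=True)
```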

Smartphone

Laptop

Automotive

Customer stories

Real results. Real customers. Powered by Liquid.

Ready to experience AI?

Built for engineers. Tuned for the edge. Ready when you are.
