In a multi‑million‑dollar agreement, Liquid AI has teamed up with Shopify to license Liquid’s LFMs for search and to bring to production a co‑developed generative recommender system that outperformed prior systems in controlled testing.

Key takeaways: 

  • The multi-year agreement enables Shopify to license Liquid AI’s flagship Liquid foundation models for search and other use cases across the Shopify platform. 
  • The agreement also empowers Liquid and Shopify to bring to production a new co‑developed generative recommender system that has outperformed prior systems.
  • The first production deployment is an LFM that completes a search in less than 20ms. 

Liquid AI today announced a multi‑faceted partnership with Shopify to license and deploy Liquid AI’s flagship Liquid foundation models (LFMs) across quality‑sensitive workflows on the Shopify platform, including search and other multimodal use cases where quality and latency matter. The first production deployment is a sub‑20ms text model that enhances search. The agreement follows Shopify’s participation in Liquid AI’s $250 million Series A round in December 2024 and formalizes deep co‑development already underway between the companies.

As part of the partnership, Shopify and Liquid have co‑developed a generative recommender system built on a novel HSTU architecture. In controlled testing, the model outperformed the prior recommendation stack, driving higher conversion rates from recommendations.
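To make the "generative recommender" framing concrete, below is a minimal sketch of the general idea: a user's interaction history is treated as a token sequence and a causal sequence model is trained to predict the next item, rather than scoring individual (user, item) pairs. This is an illustrative assumption only; the `NextItemRecommender` class, its dimensions, and the plain transformer encoder are placeholders and do not represent the HSTU architecture or the system co‑developed by Liquid and Shopify.

```python
# Minimal, generic sketch of generative recommendation as next-item prediction.
# Not HSTU and not the Liquid/Shopify production model; all names and sizes here
# are illustrative assumptions.
import torch
import torch.nn as nn

class NextItemRecommender(nn.Module):
    def __init__(self, num_items: int, dim: int = 64, heads: int = 4, layers: int = 2):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, dim)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=layers)
        self.head = nn.Linear(dim, num_items)  # logits over the item catalog

    def forward(self, item_ids: torch.Tensor) -> torch.Tensor:
        # item_ids: (batch, seq_len) item indices from a shopping session
        x = self.item_emb(item_ids)
        seq_len = item_ids.size(1)
        # Causal mask so position t only attends to positions <= t
        causal_mask = torch.triu(
            torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
        h = self.encoder(x, mask=causal_mask)
        return self.head(h)  # (batch, seq_len, num_items)

# Toy usage: train with next-item cross-entropy, then recommend by taking the
# top-scoring items at the last position of each session.
model = NextItemRecommender(num_items=1000)
sessions = torch.randint(0, 1000, (8, 12))  # fake interaction sequences
logits = model(sessions[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, 1000), sessions[:, 1:].reshape(-1))
top_k = logits[:, -1].topk(5).indices  # 5 candidate recommendations per session
```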

Ramin Hasani, Liquid AI CEO:

“Recommendation is the backbone of decision‑making in finance, healthcare, and e‑commerce. To be useful in the real world, models must be reliable, efficient, and fast. Shopify has been an ideal partner to validate that at scale. We’re excited to bring Liquid foundation models to millions of shoppers and merchants and to show how efficient ML translates into measurable value in everyday experiences.”

Liquid’s LFMs are designed for sub‑20 millisecond, multimodal, quality‑preserving inference. On specific production‑like tasks, LFMs with ~50% fewer parameters have outperformed popular open‑source models such as Qwen3, Gemma3, and Llama 3, while delivering 2–10× faster inference, enabling real‑time shopping experiences at platform scale.

Mikhail Parakhin, Shopify CTO:

“I’ve seen a lot of models. No one else is delivering sub‑20ms inference on real workloads like this. Liquid’s architecture is efficient without sacrificing quality; in some use cases, a model with ~50% fewer parameters beats Alibaba’s Qwen and Google’s Gemma, and still runs 2–10× faster. That’s what it takes to power interactive commerce at scale.”

Mathias Lechner, Liquid AI CTO:

“We design Liquid foundation models with an intertwined objective function that maximizes quality while making the system the fastest on the market on the hardware of choice. This makes them a natural fit for e‑commerce applications such as personalized ranking, retrieval‑augmented generation, and session‑aware recommendations, all under the tight latency and cost budgets required to deliver the best user experience. In Shopify’s environment, we’ve focused on production robustness, from low‑variance tail latency to safety and drift monitoring.”

The partnership includes a multi‑purpose license for LFMs across low‑latency, quality‑sensitive Shopify workloads, ongoing R&D collaboration, and a shared roadmap. While today’s deployment is a sub‑20ms text model for search, the companies are evaluating multimodal models for additional products and use cases, including customer profiles, agents, and product classification. Financial terms are not disclosed.

In a joint interview, Shopify CTO Mikhail Parakhin and Liquid AI CEO Ramin Hasani discussed the partnership and shared plans for innovation.

Learn more about Liquid Foundation Models and recent releases. 

For sales inquiries, please contact the Liquid AI team here.
