HOLY SPEED, BATMAN. THIS THING RIPS ON MY MACBOOK PRO.
Liquid AI drops a model and we are racing to test it. #ai #model #24b #moe
Right off the rip, it's replacing my future use of Anthropic and OpenAI. LFG! Mathias Lechner / Ramin Hasani, kudos. Looking so nice!
Link:
https://lnkd.in/eyve6hrN

Mar 5, 2026
Jon Salisbury

Amazed by the LFM 3B model... and amazed by the @liquidai team for making this amazing model within 3B parameters... running quantized Q4 but already loving it... earlier I was running Gemma 2, but it looks like LFM is my new love...

Dec 26, 2025
Crypto_neowolf

@liquidai Impressive benchmarks for LFM2.5-1.2B-Thinking, especially with on-device capabilities and concise reasoning - a significant step forward in AI efficiency.

Jan 21, 2026
RahulVerma989

In my limited experience testing nano/tiny/small models from various labs, LFM2-1.2B was galaxies ahead of its competitors. It's 1.2B but really behaves more like a ~10B model in terms of coherence and reasoning power. Even their smaller ~300M model feels more like a 4B, which is mind-blowing because it runs fast even on a CPU (hell, it runs fast even in a browser via WebAssembly, unfathomably amazing). I'm hoping these specialized models will be very good; I have a good amount of trust in Liquid AI.

Oct 1, 2025
unsolved-problems

Liquid AI's new models are changing the game! Faster, more efficient generative AI for any device. The future is here! #AI #MachineLearning #Innovation #Tech #FutureTech #LiquidAI

Jan 12, 2026
a.techai

@liquidai Whoa, a 3B model outperforming one 263× its size? This is next-level RL magic!

Dec 25, 2025
SimslearnAi

Damn, you guys keep knocking it out of the park. Congratulations on all the amazing releases. I'm still trying to fully utilize LFM2-1.2B and then you drop new weights. So fun to be here and experience the fruits of your toil. Thanks for looking out for those of us with less computational capacity. It's amazing how far small models have come, and you're an integral part of that puzzle. Best wishes

Jan 7, 2026
Foreign-Beginning-49

@maximelabonne @Lightricks Seeing Liquid AI models trending is a strong signal for non-Transformer architectures! 💧 At 1.2B params, these are perfect for experimenting with efficient local inference. Great to see diversity in the open-source leaderboard! 🚀 #AI #HuggingFace

Jan 11, 2026
YigitMertCahit

@WesRothMoney Yeah, it's really nice to see that. Also, open-source AI is getting really good. For example, LFM2-2.6B is an amazing model. I really love seeing small open-source models getting this much better.

Dec 29, 2025
specter0o0

@TheAhmadOsman Those are big names, but the truly beneficial models are the ones bringing the cost of intelligence down, namely the 1-10B-parameter class pioneered by the likes of @liquidai and @GeminiApp with Gemma and its amazing 1B-parameter model, which can run natively inside your mobile phone

Jan 26, 2026
adityaberry2004

@paulabartabajo_ @liquidai I love this model. It reminds me of Llama 3.2 back in the day. Inference speed is also insane on Ollama and Apollo. I want to try it with the llama.cpp CLI. Liquid AI is choosing a really promising path in AI and you are pushing the boundaries of SLMs

Jan 29, 2026
biXente_Latte
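
For anyone wanting to reproduce the Ollama setup mentioned in the post above, here is a minimal Python sketch using the official `ollama` client. The model tag `lfm2:1.2b` is an assumption; check the Ollama model library for the exact identifier.

```python
# Minimal sketch: chat with an LFM2 model served by Ollama.
# Assumes `ollama serve` is running and the model has been pulled,
# e.g. `ollama pull lfm2:1.2b` (tag is an assumption, not confirmed by the post).
import ollama

response = ollama.chat(
    model="lfm2:1.2b",
    messages=[{"role": "user", "content": "Explain liquid neural networks in one paragraph."}],
)
print(response["message"]["content"])
```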

Liquid AI's LFM2-2.6B-Exp scored 42% on GPQA - incredible for a 2.6B-param model. A score at this level would normally require a much larger model, but LFM2-2.6B is doing it mostly by changing the training signal, not the architecture - just by adding RL on top of the same base checkpoint.

Dec 25, 2025
rohanpaul_ai

@liquidai's latest LFM2.5 1.2B model is really impressive, my favorite 1B model right now. It's relatively smart, lightweight, and super fast running on iPhone with MLX. Go try it if you haven't!

Jan 10, 2026
adrgrondin
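
The iPhone numbers above come from MLX; on a Mac, a similar run takes a few lines with the `mlx-lm` package. A minimal sketch, assuming an MLX-converted checkpoint exists on the Hub under the hypothetical repo id shown:

```python
# Minimal sketch: run an LFM model with Apple's MLX via mlx-lm.
# pip install mlx-lm  (Apple-silicon Macs)
from mlx_lm import load, generate

# Hypothetical repo id -- substitute a real MLX conversion from the Hub.
model, tokenizer = load("mlx-community/LFM2.5-1.2B-Instruct-4bit")

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Write a haiku about small models."}],
    tokenize=False,
    add_generation_prompt=True,
)
print(generate(model, tokenizer, prompt=prompt, max_tokens=128))
```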

Wow! @liquidai just released a 2.6B model that beats every 3B on almost all benchmarks! LFMs are becoming the best edge-device AI models! Though not surprised, amazing team!

Dec 26, 2025
DeryaTR_

@kaiapocalypse But it's incredible that in public benchmarks, LFM2.5 1.2B Instruct beats the 8B-A1B MoE on many metrics like MMLU/Pro, GPQA, instruction following, and more. 🤌

Feb 5, 2026
sorbusCobPhiil

450M vision model runs at 353 tokens/sec on MLX using just 1.456GB memory 🔥. LiquidAI drops LFM2-VL-450M → MLX-VLM community with @Prince_Canuma ports it → Your Mac becomes a vision AI beast. Liquid neurons adapting computation on the fly. 450M params punching way above its weight class. Open source wins again. Model drops, community delivers, boom - it's local. What are you building with instant vision AI?

Aug 18, 2025
DittmannAxel

Built on the LFM2 architecture, this 3B parameter model uses transformers and is distributed as safetensors. It's designed to be conversational and efficient, trained for English vision-language tasks. The 'edge' tag hints at its optimized footprint.

Feb 17, 2026
HuggingModels

@songdng @liquidai Just tried it: peak memory usage of barely around 2.5 GB for a thinking model is crazy good

Jan 21, 2026
birdman1710

Great leap in on-device AI: Liquid's new LFM2.5-1.2B-Thinking does genuine step-by-step reasoning using just ~900 MB RAM, runs on basically any modern phone. Beats larger models like Qwen3-1.7B on math/tool use while being dramatically faster & leaner. Privacy + zero-latency

Jan 21, 2026
hishamkhdair

Holy moly, Liquid AI just unveiled LFM2.5, a powerful open-weight model family designed to run fast, private, and always-on directly on devices. LFM2.5 sets a new bar for edge AI across text, vision, audio, and Japanese-language use cases. The TTS examples are amazing, btw

Jan 6, 2026
kimmonismus

Liquid AI LFM2.5 1.2B Thinking Model Outperforms Larger Models on Reasoning Benchmarks
Liquid AI just dropped LFM2.5-1.2B-Thinking, a tiny 1.2-billion-parameter model designed for fast, private reasoning right on your device. It runs completely offline and uses less than 900MB of memory, perfect for phones and edge devices without any drop in capability. What sets it apart is its clean, straight-to-the-point reasoning traces, blazing-fast inference, and excellent performance on instruction following, tool use, and math problems. Benchmarks show it beating much bigger models like Qwen3-1.7B in thinking mode on tests such as GPQA Diamond (37.86%), MMLU-Pro (49.65%), and....

Jan 20, 2026
techspecsmart

Okay, this is actually insane... You can now run LFM2.5-1.2B-Thinking (a 1.2B parameter LLM from Liquid AI) at over 200 tokens per second directly in your browser on WebGPU! 🤯 Zero install. Fully private. Blazingly fast. Powered by Transformers.js and ONNX Runtime Web

Mar 5, 2026
Joshua Lochner

LiquidAI is unleashing new real-time translation power! 🤯 Introducing LFM2-350M, a model fine-tuned for Japanese-English communication! This is a total game-changer for breaking language barriers! 🚀✨

Sep 8, 2025
PriyaonAI

Just ran LFM2.5 from @liquidai locally on my Mac. This was my first experience running a model locally. It was a great experience with LM Studio, with amazing speed and performance from the model. Loved it

Feb 13, 2026
prithvii_J
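
The LM Studio workflow described above can also be scripted: LM Studio exposes an OpenAI-compatible local server (default port 1234), so the standard `openai` client works against it. A minimal sketch; the model identifier below is hypothetical, use whatever LM Studio displays for your downloaded copy:

```python
# Minimal sketch: query a model loaded in LM Studio from Python.
# Assumes LM Studio's local server is running (default http://localhost:1234).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

resp = client.chat.completions.create(
    model="lfm2.5-1.2b-instruct",  # hypothetical id -- copy the exact name LM Studio shows
    messages=[{"role": "user", "content": "Hello! What can you do on-device?"}],
)
print(resp.choices[0].message.content)
```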

LFM2 GGUF models just dropped for llama.cpp - these are game changers. 2x faster than Qwen3 on CPU, 200% higher throughput vs competitors, designed for edge deployment. Finally, high-performance models that actually run well locally: https://huggingface.co/LiquidAI/LFM2-1.2B-GGUF

Jul 13, 2025
MikeDevPro
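
The GGUF repo linked above can be loaded directly through the `llama-cpp-python` bindings. A minimal sketch; the quantization filename pattern is an assumption about what the repo contains:

```python
# Minimal sketch: run the linked LFM2-1.2B GGUF with llama-cpp-python.
# pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="LiquidAI/LFM2-1.2B-GGUF",  # repo from the post above
    filename="*Q4_K_M.gguf",            # quant filename pattern is an assumption
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me three uses for an on-device 1B model."}],
    max_tokens=200,
)
print(out["choices"][0]["message"]["content"])
```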

Game-changing on-device AI just dropped. 🤯 Liquid AI unleashed LFM2-Audio, a 1.5B-param speech model with a crazy low 95ms latency, literally the speed of a blink. This is huge. Try it here: https://playground.liquid.ai/login?callbackUrl=%2Ftalk
But that's not all... 🧵

Oct 1, 2025
BearleDev78579

LiquidAI LFM2.5-1.2B Review: The best free model for high-speed utility. I've been hunting for a model that doesn't feel like a sluggish Transformer for high-frequency, low-latency tasks. I finally spent a few days with LiquidAI's LFM2.5-1.2B-Instruct, and honestly, the performance profile of this Liquid Neural Network (LNN) architecture is a game changer for edge-style utility. The use case: I set up a real-time monitor for a cluster of web servers...

Jan 29, 2026
IulianHI

@LiquidAI_ Congratulations on the launch of LFM2-8B-A1B! Achieving performance comparable to larger models while significantly enhancing inference speed is a remarkable feat. The efficiency of on-device MoE is crucial for real-time applications. I'm excited to see how this innovation will...

Oct 7, 2025
GoldIRAChannel

🤯 🎁 Liquid AI brings "Deep Thinking" to a 2.6B model / your Phone / your robot 🤖 Liquid AI released a holiday present for your new robot 🎁 They just cracked the code on efficient intelligence. With LFM2-2.6B-Exp, the era of bloated, memory-hogging Transformers running on...

Dec 25, 2025
TeksEdge

🚨 GAME CHANGER ALERT: MIT just broke the AI game! 🤯 Liquid AI's new models are DESTROYING traditional LLMs and here's why this is HUGE 👇✨ Uses 90% FEWER neurons but performs BETTER 🚀 Handles 1 MILLION tokens with minimal memory 💡 Perfect for edge devices (your phone could run this!) 🔥 LFM-1B is setting NEW records on every benchmark. While everyone's obsessing over bigger models, Liquid AI said "hold my beer" and went LIQUID 💧 This isn't just an upgrade - it's a complete paradigm shift. Traditional transformers are about to look like flip phones 📱➡️🧠 The future of AI just got a lot more interesting... and efficient! 🌟 What do you think? Are liquid neural networks the future? Drop your thoughts below! 👇 #LiquidAI #AI #MachineLearning #TechNews #Innovation #MIT #NeuralNetworks #ArtificialIntelligence #TechBreakthrough #FutureOfAI

Oct 1, 2025
@kryptorina