If you haven’t already, I’d recommend checking out @LiquidAI’s LFM2.5 model series. They’re absolute beasts for fast, local inference, especially on edge devices or in cases where speed matters most. https://huggingface.co/collections/LiquidAI/lfm25

Jan 20, 2026
ronedgecomb

Just tried the LFM2-2.6B-Exp model and it's mind-blowing to me. It's my go-to model for my phone with ChatterUI 😁 https://huggingface.co/LiquidAI/LFM2-2.6B-Exp

Dec 26, 2025
Samunder12or8

This is actually nuts: 41.3% on GPQA for a 2.6B model! That's PhD-level knowledge running locally on your iPhone. The efficiency gains right now are insane.

Dec 25, 2025
r0ck3t23

@Prince_Canuma @liquidai LFM2.5-Audio's end-to-end processing is brilliant, especially those latency cuts! MLX-Audio integration could redefine on-device voice tech. 🚀

Jan 7, 2026
DinaAl_Jamal

@maximelabonne @huggingface Impressive specs on LFM2-8B-A1B. The MoE architecture could change the game for mobile usage. Excited to see those benchmark results!

Oct 7, 2025
VibeCodeTeddy

Liquid AI released an experimental checkpoint of LFM2-2.6B using pure RL, making it the strongest 3B on the market. "Meet the strongest 3B model on the market. LFM2-2.6B-Exp is an experimental checkpoint built on LFM2-2.6B using pure reinforcement learning. Consistent improvements in instruction following, knowledge, and math benchmarks. Outperforms other 3B models in these domains. Its IFBench score surpasses DeepSeek R1-0528, a model 263x larger."

Dec 26, 2025
KaroYadgar

@MParakhin Honestly, that's insane: sub-20ms inference with fewer parameters and higher performance? Liquid's LFMs are setting a new bar for real-world AI deployment.

Nov 13, 2025
devmuradahmed

LFM2.5-1.2B-Thinking and Instruct run at lightning speed. There's also a 1.6B vision model that can process images quite accurately. Tested on CPU/GPU with flash attention, it hit a max speed of 342 tokens per second on one of our servers. Definitely worth using and having in the repo...

Jan 23, 2026
Trilogix

The shift from bigger-is-better to efficient-is-smarter is accelerating. Liquid AI's 3B model beating much larger ones proves that pure RL training can unlock incredible efficiency gains.

Dec 26, 2025
alex_yehya

Hell yes! Currently running a 2.6B liquid foundation model on my phone. Imagine how useful this could be in any number of scenarios without network access!

Oct 2, 2025
LaurencePostrv

@kimmonismus I tried this on my phone and it's a really good model. I enjoyed using it.

Jan 7, 2026
brooksy4503

@LisaFlorentina8 Liquid AI is the future for sure

Sep 13, 2025
yijiezuimei

Two amazing companies, now more amazing with this partnership! You're going to start hearing a lot more about @LiquidAI_ (a @PWVentures investment) in the future; their novel AI architecture blows away transformers in a ton of valuable use cases. Congrats to both sides on the partnership!

Nov 13, 2025
mojombo

@liquidai Incredible performance! Congratulations @liquidai!

Dec 26, 2025
stevechen

@its_maddy_a @liquidai @ZeroGPU_AI This is awesome to hear! 🚀 Liquid AI’s architecture is a game-changer for efficiency. We’d love to see how @ZeroGPU_AI pushes the limits of the LEAP SDK. Keep us posted on the progress!

Jan 8, 2026
varvapally

🚀 Running Small LMs on Mobile (and in the Browser). I've been testing small language models (<1B params) on a Samsung tablet (8 GB RAM). Results are solid and show how far on-device AI has come.

Sep 10, 2025
jalam1001

Trying out LFM2 350M from @LiquidAI_ and I was mind-blown 🤯 The responses were very coherent, with fewer hallucinations compared to models of the same size. Very well done!! The best part: the Q4_K_M quantization is just 230 MB, wow!

Jul 14, 2025
ngxson

okay, no longer trapped inside today :> i started running a shitty vibe-coded agent on my Jetson Orin Nano called laine :D (yes, from SEL). It's using LFM2-8B-A1B, which has been an amazing model so far. I love it.

Jan 27, 2026
proudmoontruther.space

The future isn't in the Cloud, it's in your pocket. We obsess over 100B-parameter models, but the real revolution is happening at the micro-scale. LFM2.5 proves architecture beats raw size. With just 1.2B parameters, it crushes Llama 3.2 1B on complex reasoning.

Jan 7, 2026
xai_42

@liquidai The leap in instruction-following is impressive for a 3B model. How do you see models like this shifting enterprise or R&D applications?

Dec 25, 2025
TechAfi2023

@0xTib3rius OSS 20B is the banger in that size range. If you're looking for something a little smaller but with good utility, LFM2-8B-A1B (a MoE model) is blazing fast and pretty smart for a little guy (all the LFM models are fantastic). If you need "freedom of expression", look for heretic (or heresy).

Feb 15, 2026
AxlysCustoms

@liquidai Qwen3 4B 2507 Instruct might be an unfair comparison since it's 2x the size, but that's what I daily drive. When I compare LFM against Qwen3 1.7B or Granite, LFM absolutely crushes the others, and it's not even close. LFM2.5 is the best <4B model currently out there.

Jan 6, 2026
xeophon

@ShopifyEng @LiquidAI_ Game changer for e-commerce! Sub-20ms models and a fresh recommender will definitely enhance user experience. Excited to see the results!

Nov 13, 2025
sir4K_zen

🚀 @LiquidAI_ LFM2-VL-1.6B sets new benchmarks: super-fast, edge-ready vision-language AI for real-world applications. Efficient. Scalable. Open. Explore now 👉 https://huggingface.co/LiquidAI/LFM2-VL-1.6B

Aug 21, 2025
Data_Prof_SXR

@liquidai LFM2.5-1.2B reminds me of LFM2 700M, just more powerful... great job, your models are really excellent.

Jan 6, 2026
corysus

It is incredible, hard to believe a 1B model can sound coherent, but you can straight up have conversations with it. Got me excited for Gemma4 and the future of smaller models

Jan 16, 2026
Cubow

Stop everything and grab this Vision Language Model fast! 📱👁️ This tiny 3B model is a breakthrough in Edge AI and Computer Vision. It’s faster than the giants and runs perfectly on mobile.  #liquidai #edgeai #computervision #tinymodel #opensource #aitechnology

Jan 3, 2026
Temomemo2020

Liquid AI just launched a vision model so light it can run on your smartphone, big leap for AI everywhere. No more cloud dependency—privacy and speed go next-level. Ready to build the future? Check the VentureBeat article. Daily tips -> @realjackhui #AIonMobile

Aug 12, 2025
realJackHui

Liquid AI released an experimental checkpoint of LFM2-2.6B using pure RL, making it the strongest 3B on the market. "Meet the strongest 3B model on the market. LFM2-2.6B-Exp is an experimental checkpoint built on LFM2-2.6B using pure reinforcement learning. Consistent improvements in instruction following, knowledge, and math benchmarks."

Dec 27, 2025
foundersignals

@LocallyAIApp LFM2-2.6B-Exp-8bit runs super fast and efficiently on the iPhone 16 Pro

Feb 1, 2026
Ealdorwolf