The Challenge

A global automaker wanted to bring real-time voice and vision AI to vehicles—but off-the-shelf models were too slow for mid-tier CPUs. Despite months of effort with llama.cpp, slow inference speeds and hardware limitations blocked deployment.

Key Obstacles:

  • Performance bottlenecks: Small VLMs ran too slowly on existing hardware
  • Integration hurdles: time-to-first-token (TTFT) was unacceptable for in-car UX (see the measurement sketch after this list)
  • Resource constraints: the existing hardware couldn't support efficient AI inference without costly upgrades
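
To make the TTFT bottleneck concrete, here is a minimal sketch of how time-to-first-token might be benchmarked for a local GGUF model using the llama-cpp-python bindings (the Python wrapper around llama.cpp, which the team had been working with). The model path and prompt are placeholders, not the automaker's actual workload.

```python
import time
from llama_cpp import Llama  # pip install llama-cpp-python

# Placeholder model path; substitute any local GGUF model.
llm = Llama(model_path="models/vlm-q4_k_m.gguf", n_ctx=2048, verbose=False)

prompt = "Describe the road conditions ahead."
start = time.perf_counter()

# Stream tokens so the first one can be timestamped separately.
first_token_at = None
for _chunk in llm(prompt, max_tokens=64, stream=True):
    if first_token_at is None:
        first_token_at = time.perf_counter()
        print(f"TTFT: {first_token_at - start:.3f}s")

print(f"Total generation time: {time.perf_counter() - start:.3f}s")
```

On CPU-bound hardware, TTFT is dominated by prompt prefill, which is why it is the metric that makes or breaks a conversational in-car UX.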

Our Solution

Liquid AI delivered a hardware-optimized VLM that ran 10x faster on the automaker’s existing CPUs. Using our Edge SDK, we reduced model size by 50% without sacrificing accuracy—and deployed a production-ready solution in just one week.
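
The case study does not disclose how the Edge SDK achieves its size reduction. As a purely conceptual illustration of the general technique, weight quantization, the sketch below stores a toy weight matrix as 8-bit integers with a per-tensor scale instead of 32-bit floats; all names in it are illustrative.

```python
import numpy as np

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor quantization: w ≈ scale * q, with q in int8."""
    scale = float(np.abs(weights).max()) / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

# Toy matrix standing in for one layer of a small VLM.
w = np.random.randn(4096, 4096).astype(np.float32)
q, scale = quantize_int8(w)

print(f"float32 size: {w.nbytes / 1e6:.1f} MB")  # ~67.1 MB
print(f"int8 size:    {q.nbytes / 1e6:.1f} MB")  # ~16.8 MB
print(f"max abs error: {np.abs(w - dequantize(q, scale)).max():.4f}")
```

A 50% reduction, as reported here, would correspond to something like moving from 16-bit to 8-bit weights; whatever combination of quantization, pruning, or distillation the Edge SDK actually applies is not described in this case study.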

Results

The automaker achieved real-time AI interactions directly in vehicles—no hardware upgrades needed.

  • 10x faster time-to-first-token
  • 50% smaller model size (no performance loss)
  • Deployment slashed from months to 1 week
  • Enabled real-time voice/vision AI on existing hardware

Ready to experience AI?

Empower your business, workflows, and engineers with Liquid AI.
