Liquid AI LFM2.5-350M: A Sub-500MB Agentic Powerhouse

Liquid AI releases LFM2.5-350M, a 350M parameter model trained on 28T tokens with RL, optimized for data extraction and agentic loops on edge devices.

by HowAIWorks Team
Tags: Liquid AI, LFM2.5-350M, AI Agents, Edge AI, Small Language Models, Machine Learning, Tool Calling, Agentic Loops

Introduction

On April 2, 2026, Liquid AI set a new benchmark for ultra-compact intelligence with the release of LFM2.5-350M. This model represents a significant leap in the "No Size Left Behind" philosophy, packing agent-grade capabilities into a footprint of less than 500MB. While most models of this size struggle with basic logic, the LFM2.5-350M is specifically engineered to handle data extraction and complex tool-calling sequences.

The release marks a turning point for Edge AI. By enabling sophisticated agentic loops on standard consumer hardware—from mobile phones to low-power CPUs—Liquid AI is making local, private, and fast AI agents a reality for developers and enterprise users alike.

Unrivaled Training and Efficiency

The secret to LFM2.5-350M's performance lies in its massive training scale and a refined alignment process. Despite its small parameter count, the model was trained on 28 trillion tokens, a dataset size typically reserved for models 10 to 20 times larger.

Key technical highlights include:

  • Massive Token-to-Parameter Ratio: Training on 28T tokens gives each of the 350M parameters far more data exposure than is typical at this scale, concentrating knowledge into a tiny footprint.
  • RL-First Alignment: The model underwent extensive Reinforcement Learning (RL) to sharpen its ability to follow complex instructions and maintain logic through multi-step tasks.
  • Sub-500MB Footprint: At less than half a gigabyte, it can be cached in RAM on almost any modern electronic device, minimizing latency and power consumption.
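The headline figures are easy to sanity-check with back-of-the-envelope arithmetic. Note that the quantization level below is our assumption for illustration; the article only states the sub-500MB footprint.

```python
# Back-of-the-envelope arithmetic for the figures above (illustrative only).

params = 350e6   # 350M parameters
tokens = 28e12   # 28T training tokens

# Token-to-parameter ratio: the common Chinchilla-optimal heuristic is
# roughly 20 tokens per parameter; LFM2.5-350M's ratio is far higher.
ratio = tokens / params
print(f"tokens per parameter: {ratio:,.0f}")  # 80,000

# Approximate weight footprint at common precisions (weights only,
# excluding runtime overhead such as the KV cache).
for bits, name in [(16, "FP16"), (8, "INT8"), (4, "INT4")]:
    mb = params * bits / 8 / 1e6
    print(f"{name}: ~{mb:,.0f} MB")
```

At 8-bit precision the weights alone come to roughly 350MB, which is consistent with the advertised sub-500MB footprint; FP16 would already exceed it.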

Agentic Capabilities on the Edge

What truly distinguishes the LFM2.5-350M is its proficiency in agentic workflows. Most small language models (SLMs) fail when asked to call tools or extract structured data from unstructured text. Liquid AI has tuned this model to excel in these specific areas:

  • Data Extraction: Seamlessly identifying and pulling key information from documents and messages.
  • Tool Calling: Executing function calls and interacting with APIs, a task usually reserved for models in the 7B+ parameter range.
  • Agentic Loops: Running iterative reasoning cycles locally on CPUs, GPUs, and mobile NPUs without the need for an internet connection.
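To make the data-extraction bullet concrete, here is a minimal sketch of the usual schema-constrained pattern: prompt the model for JSON only, then validate the result before trusting it. The prompt wording and the `run_model` callback are illustrative assumptions, not Liquid AI's documented API.

```python
import json

# Hypothetical extraction prompt; a real deployment would tune this wording.
EXTRACTION_PROMPT = """Extract the sender, date, and total amount from the
message below. Reply with JSON only, using exactly these keys:
sender, date, total.

Message:
{message}"""

REQUIRED_KEYS = {"sender", "date", "total"}

def extract_fields(message: str, run_model) -> dict:
    """Ask the model for structured JSON, then validate the keys.

    run_model is a stand-in callable (prompt -> completion string) for
    whatever local runtime actually hosts the model.
    """
    raw = run_model(EXTRACTION_PROMPT.format(message=message))
    data = json.loads(raw)  # raises ValueError if the model broke format
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"model omitted fields: {missing}")
    return data
```

Validating before use matters on-device: a small model will occasionally break format, and catching that locally is cheaper than acting on malformed output.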

These capabilities open up new use cases for local document processing and edge-deployment scenarios where privacy and offline stability are paramount.
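An agentic loop of the kind described above can be sketched in a few lines: each model reply is either a JSON tool call, which the runtime executes locally and feeds back into the transcript, or plain text, which ends the loop. The `generate_fn` callback and `TOOLS` registry are hypothetical stand-ins for whatever on-device runtime hosts the model.

```python
import json

def get_battery_level() -> str:
    """Example on-device tool the model may call."""
    return "87%"

# Tool registry: plain Python functions exposed to the model by name.
TOOLS = {"get_battery_level": get_battery_level}

def agentic_loop(user_query: str, generate_fn, tools=TOOLS,
                 max_steps: int = 5) -> str:
    """Iterate until the model produces a final plain-text answer."""
    transcript = user_query
    for _ in range(max_steps):
        reply = generate_fn(transcript)
        try:
            call = json.loads(reply)  # e.g. {"tool": "get_battery_level"}
        except json.JSONDecodeError:
            return reply              # not JSON => final answer
        if not isinstance(call, dict) or call.get("tool") not in tools:
            return reply              # JSON but not a known tool call
        result = tools[call["tool"]]()  # executed locally, no network
        transcript += f"\n[tool:{call['tool']}] returned: {result}"
    return "Step limit reached."
```

The `max_steps` cap is the standard guard against a small model looping indefinitely; everything else runs without a network connection, which is the point of doing this on the edge.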

Conclusion

The Liquid AI LFM2.5-350M proves that intelligence isn't just about parameter count; it's about training quality and specialized alignment. As the industry moves toward more decentralized AI, models like this will become the backbone of "invisible" AI—powering smart assistants and automated workflows directly on our devices.

Whether you are building a privacy-focused personal assistant or an industrial edge-sensor network, the LFM2.5-350M offers the perfect balance of portability and power.

Frequently Asked Questions

Why can such a small model handle agentic tasks?
Despite having only 350M parameters, it is trained on a massive 28T-token dataset and aligned with RL, allowing it to perform data extraction and tool-calling tasks that usually require much larger models.

Can it run on mobile devices?
Yes, its file size is less than 500MB, making it ideal for local deployment on smartphones, tablets, and other edge devices with limited hardware resources.

What use cases is it best suited for?
It excels at local document processing, light agentic workflows, data extraction, and running agentic loops without needing cloud connectivity.

What hardware does it support?
The model is designed to work efficiently on CPUs, GPUs, and specialized mobile NPUs, ensuring broad compatibility for edge-first applications.
