
Introduction
On April 2, 2026, Liquid AI set a new benchmark for ultra-compact intelligence with the release of LFM2.5-350M. This model represents a significant leap in the "No Size Left Behind" philosophy, packing agent-grade capabilities into a footprint of less than 500MB. While most models of this size struggle with basic logic, the LFM2.5-350M is specifically engineered to handle data extraction and complex tool-calling sequences.
The release marks a turning point for Edge AI. By enabling sophisticated agentic loops on standard consumer hardware—from mobile phones to low-power CPUs—Liquid AI is making local, private, and fast AI agents a reality for developers and enterprise users alike.
Unrivaled Training and Efficiency
The secret to LFM2.5-350M's performance lies in its massive training scale and a refined alignment process. Despite its small parameter count, the model was trained on 28 trillion tokens, a dataset size typically reserved for models 10 to 20 times larger.
Key technical highlights include:
- Massive Token-to-Parameter Ratio: Training on 28T tokens works out to roughly 80,000 tokens per parameter, orders of magnitude beyond the ~20 tokens per parameter suggested by compute-optimal scaling, so every parameter is saturated with information and capability.
- RL-First Alignment: The model underwent extensive Reinforcement Learning (RL) to sharpen its ability to follow complex instructions and maintain logic through multi-step tasks.
- Sub-500MB Footprint: At less than half a gigabyte, it can be cached in RAM on almost any modern electronic device, minimizing latency and power consumption.
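The token-to-parameter figure above can be verified with a one-line calculation, using the 28 trillion tokens and 350 million parameters stated in this article:

```python
# Tokens-per-parameter ratio for LFM2.5-350M (figures from the article).
tokens = 28e12   # 28 trillion training tokens
params = 350e6   # 350 million parameters

ratio = tokens / params
print(f"{ratio:,.0f} tokens per parameter")  # prints "80,000 tokens per parameter"
```

For comparison, compute-optimal ("Chinchilla-style") training would pair a 350M-parameter model with only about 7B tokens, which is what makes this ratio so unusual.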
Agentic Capabilities on the Edge
What truly distinguishes the LFM2.5-350M is its proficiency in agentic workflows. Most small language models (SLMs) fail when asked to call tools or extract structured data from unstructured text. Liquid AI has tuned this model to excel in these specific areas:
- Data Extraction: Identifying and pulling key fields from documents and messages into structured output.
- Tool Calling: Executing function calls and interacting with APIs, a task usually reserved for models in the 7B+ parameter range.
- Agentic Loops: Running iterative reasoning cycles locally on CPUs, GPUs, and mobile NPUs without the need for an internet connection.
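The loop behind these capabilities is simple to sketch. In the minimal example below, the model call is stubbed out; in a real deployment you would run LFM2.5-350M locally (for example via a llama.cpp-style runtime) and parse its tool-call output. All function and message names here are illustrative assumptions, not Liquid AI's documented API:

```python
# Minimal agentic tool-calling loop (sketch). The "model" is a stub that
# first requests a tool call, then produces a final answer from the result.
TOOLS = {
    "get_weather": lambda city: f"Sunny, 21C in {city}",
}

def fake_model(messages):
    """Stand-in for a locally hosted SLM such as LFM2.5-350M."""
    if not any(m["role"] == "tool" for m in messages):
        # No tool result yet: ask the runtime to call a tool.
        return {"tool": "get_weather", "arguments": {"city": "Boston"}}
    # Tool result is available: produce the final answer.
    return {"final": f"The weather is: {messages[-1]['content']}"}

def agent_loop(user_query, max_steps=4):
    messages = [{"role": "user", "content": user_query}]
    for _ in range(max_steps):
        reply = fake_model(messages)
        if "final" in reply:                 # model has finished reasoning
            return reply["final"]
        result = TOOLS[reply["tool"]](**reply["arguments"])  # run the tool
        messages.append({"role": "tool", "content": result})
    return "step limit reached"

print(agent_loop("What's the weather in Boston?"))
```

Because the entire loop (model, tools, and message history) lives in one local process, no request ever leaves the device, which is exactly the privacy property the edge deployment targets.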
These capabilities open up new use cases for local document processing and edge-deployment scenarios where privacy and offline stability are paramount.
Conclusion
The Liquid AI LFM2.5-350M proves that intelligence isn't just about parameter count; it's about training quality and specialized alignment. As the industry moves toward more decentralized AI, models like this will become the backbone of "invisible" AI—powering smart assistants and automated workflows directly on our devices.
Whether you are building a privacy-focused personal assistant or an industrial edge-sensor network, the LFM2.5-350M offers the perfect balance of portability and power.