Introduction
LangChain has announced DeepAgents v0.2, an update to its harness for long-running agents that doubles down on infrastructure for complex, multi-step tasks running over extended periods. The release focuses on pluggable storage backends, handling of large tool outputs, smarter context management, and safer recovery flows (blog). DeepAgents complements the LangChain framework and the LangGraph runtime to enable long-running, tool-heavy agents.
What’s new in DeepAgents v0.2
- Pluggable backends
  - New `Backend` abstraction enables different storage layers: local FS, LangGraph state-backed virtual FS, and more (a conceptual sketch follows this list).
- Large tool output offloading
  - Automatically offloads oversized tool outputs to the filesystem when token or size limits are exceeded (see the offloading sketch below).
- Conversation history summarization
  - Compresses older interaction history as context grows to keep prompts efficient and costs predictable (see the summarization sketch below).
- Interrupted tool call recovery
  - Repairs message history when tool calls are interrupted or cancelled before completion, preserving process integrity (see the recovery sketch below).
- Positioning in the stack
  - DeepAgents is an agent "harness" with built-in planning, FS tools, and sub-agents; complementary to `langchain` (framework) and `langgraph` (runtime).
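To make the backend idea concrete, here is a minimal sketch of a pluggable storage interface with interchangeable implementations. The names `StorageBackend`, `LocalFileBackend`, and `InMemoryBackend` are hypothetical, chosen for illustration; they are not the deepagents API.

```python
# Illustrative sketch of a pluggable storage backend (hypothetical names,
# not the deepagents API): one interface, interchangeable implementations.
from pathlib import Path
from typing import Protocol


class StorageBackend(Protocol):
    """Minimal interface an agent harness could program against."""

    def write(self, key: str, content: str) -> None: ...
    def read(self, key: str) -> str: ...


class LocalFileBackend:
    """Persists artifacts to the local filesystem."""

    def __init__(self, root: str) -> None:
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def write(self, key: str, content: str) -> None:
        (self.root / key).write_text(content, encoding="utf-8")

    def read(self, key: str) -> str:
        return (self.root / key).read_text(encoding="utf-8")


class InMemoryBackend:
    """Keeps artifacts in a dict, analogous to a state-backed virtual FS."""

    def __init__(self) -> None:
        self._files: dict[str, str] = {}

    def write(self, key: str, content: str) -> None:
        self._files[key] = content

    def read(self, key: str) -> str:
        return self._files[key]
```

Swapping one implementation for the other leaves the agent code untouched, which is the point of making the backend pluggable.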
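Output offloading can be pictured as a wrapper around tool results: small results are returned inline, while oversized ones are written to disk and replaced by a short pointer the agent can follow later. The threshold and file naming below are invented for illustration and do not reflect deepagents internals.

```python
# Illustrative sketch of large-output offloading (hypothetical threshold and
# file layout, not deepagents internals).
from pathlib import Path

MAX_INLINE_CHARS = 4_000  # hypothetical cut-off; a real harness would budget in tokens


def offload_if_large(tool_name: str, output: str, artifact_dir: str = "artifacts") -> str:
    """Return small outputs inline; save large ones to disk and return a pointer."""
    if len(output) <= MAX_INLINE_CHARS:
        return output
    root = Path(artifact_dir)
    root.mkdir(parents=True, exist_ok=True)
    target = root / f"{tool_name}_output.txt"
    target.write_text(output, encoding="utf-8")
    # The model sees only a short reference; it can read the file later via FS tools.
    return f"Output was {len(output)} characters; saved to '{target}'."
```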
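Summarization works the same way at the conversation level: once the transcript exceeds a budget, older turns are folded into a single summary message and only recent turns are kept verbatim. In this sketch `summarize` is a stand-in for an LLM call, and the character budget is an assumption; a real harness would count tokens.

```python
# Illustrative sketch of history compaction (summarize() is a placeholder for
# an LLM summarization call; budgets are assumptions).
def summarize(text: str) -> str:
    # Placeholder: a real implementation would ask a model for a summary.
    return text[:200] + ("..." if len(text) > 200 else "")


def compact_history(messages: list[dict], keep_recent: int = 6,
                    budget_chars: int = 8_000) -> list[dict]:
    """Replace older messages with a summary once the history exceeds the budget."""
    total = sum(len(m["content"]) for m in messages)
    if total <= budget_chars or len(messages) <= keep_recent:
        return messages
    older, recent = messages[:-keep_recent], messages[-keep_recent:]
    summary = summarize("\n".join(f"{m['role']}: {m['content']}" for m in older))
    return [{"role": "system",
             "content": f"Summary of earlier conversation: {summary}"}] + recent
```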
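Finally, recovery from interrupted tool calls can be sketched as a transcript repair pass: any assistant tool call with no matching tool result gets a synthetic "interrupted" result appended, so the history is well-formed again before the next model call. The message schema here is a generic OpenAI-style dict, used only for illustration rather than the deepagents message types.

```python
# Illustrative sketch of repairing a transcript after interrupted tool calls
# (generic message dicts, not the deepagents message types).
def repair_interrupted_tool_calls(messages: list[dict]) -> list[dict]:
    """Append placeholder results for tool calls that never completed."""
    answered = {m["tool_call_id"] for m in messages if m["role"] == "tool"}
    repaired = list(messages)
    for m in messages:
        if m["role"] != "assistant":
            continue
        for call in m.get("tool_calls", []):
            if call["id"] not in answered:
                repaired.append({
                    "role": "tool",
                    "tool_call_id": call["id"],
                    "content": "Tool call was interrupted before completion; no result is available.",
                })
    return repaired
```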
Why it matters
These capabilities make it easier to build robust, autonomous agents that run for longer, manage bigger intermediate artifacts, and maintain coherent context without blowing token budgets—key for production-grade agentic systems. For example, a long-running research agent synthesizing many sources over hours, or an ETL-style workflow generating large intermediate artifacts, benefits from offloading outputs and summarizing history while keeping the process resilient.
Availability
DeepAgents v0.2 is available now. See the official announcement and details in the LangChain blog post (blog).
Sources
- LangChain blog post announcing DeepAgents v0.2 (referenced above as "blog").