Laddr

Python framework for building scalable multi-agent systems with message queues, observability, and horizontal scalability: a microservices architecture for AI agents.

Tags: AI Tool, Multi-Agent Systems, Framework, Development, Python, Agentic AI
Developer: AgnetLabs
Type: Platform
Pricing: Open Source

Laddr

Laddr is a Python framework for building multi-agent systems where agents communicate, delegate tasks, and execute work in parallel. Think of it as a microservices architecture for AI agents — with built-in message queues, observability, and horizontal scalability.

Overview

Laddr provides a production-ready framework for developers who want to build sophisticated multi-agent applications without managing the underlying infrastructure. It handles agent communication, task delegation, parallel execution, and provides comprehensive observability tools out of the box.

The framework uses Redis Streams for reliable messaging, PostgreSQL for trace storage, and MinIO/S3 for artifact storage. It includes a React dashboard for monitoring and debugging agent systems in real time.

Laddr is designed for developers who need to build scalable agent systems that can handle complex workflows, parallel processing, and distributed execution across multiple workers.

Key Features

  • Multi-Agent Communication: Agents can communicate, delegate tasks, and coordinate work seamlessly
  • Built-in Message Queues: Redis Streams-based messaging system for reliable, distributed communication
  • Horizontal Scalability: Scale agents across multiple workers automatically
  • Task Delegation: Agents can delegate tasks to other agents with timeout and error handling
  • Parallel Execution: Execute multiple tasks in parallel across different agents
  • Observability Dashboard: Beautiful React dashboard with real-time monitoring, traces, metrics, and logs
  • Automatic Tracing: All agent executions are automatically traced to PostgreSQL
  • Artifact Storage: Large payloads automatically stored in S3-compatible storage (MinIO/S3)
  • FastAPI Runtime: Built-in FastAPI server for REST API access
  • Custom System Tools: Override built-in delegation and storage tools with custom implementations
  • Docker Support: Containerized agents for easy deployment
  • Open Source: Apache 2.0 license, fully open source

How It Works

Laddr uses a microservices-like architecture for AI agents:

  1. Message Bus (Redis Streams): Each agent has a dedicated stream for receiving tasks
  2. Worker Pool: Multiple workers consume from the same stream for load balancing
  3. Task Processing: Workers process tasks, call tools, and interact with LLMs
  4. Delegation: Agents can delegate tasks to other agents via the message bus
  5. Trace Storage: All executions are automatically traced to PostgreSQL
  6. Artifact Storage: Large payloads stored in object storage (MinIO/S3)
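The flow above can be sketched as a small in-memory simulation. The names below (`submit`, `process`, the `laddr:agent:{name}` queue keys) follow the conventions described in this document, but the code is an illustration of the architecture, not Laddr's actual API:

```python
from collections import defaultdict, deque

# In-memory stand-ins for the Redis Streams bus and the PostgreSQL trace table.
bus = defaultdict(deque)   # one queue per agent, keyed laddr:agent:{name}
traces = []                # every execution gets a trace row

def submit(agent_name: str, payload: dict) -> None:
    """Push a task onto the target agent's stream."""
    bus[f"laddr:agent:{agent_name}"].append(payload)

def process(agent_name: str) -> dict:
    """A worker pops one task, records a trace, and may delegate onward."""
    task = bus[f"laddr:agent:{agent_name}"].popleft()
    traces.append({"agent": agent_name, "task": task})
    if task.get("delegate_to"):
        # Delegation is just another message on the bus.
        submit(task["delegate_to"], {"query": task["query"]})
    return task

submit("researcher", {"query": "AI trends", "delegate_to": "writer"})
process("researcher")   # researcher runs and delegates to writer
process("writer")       # writer picks up the delegated task
print(len(traces))      # → 2
```

Note that the delegating agent and the delegate never call each other directly; both only touch the bus, which is what lets workers scale independently.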

Technical Architecture:

  • Message Queue: Redis Streams with consumer groups for load balancing
  • Database: PostgreSQL for trace storage and job tracking
  • Storage: MinIO (local) or AWS S3 (production) for artifacts
  • Runtime: FastAPI for REST API endpoints
  • Dashboard: React-based UI for observability
  • Language: Python 3.10+
  • Deployment: Docker containers for agents and services

Technical Details

Message Bus Architecture

Laddr uses Redis Streams for reliable messaging:

  • Agent Queues: Each agent has a dedicated stream (laddr:agent:{name})
  • Consumer Groups: Multiple workers consume from the same stream
  • Load Balancing: Redis automatically distributes tasks across workers
  • Persistence: Messages persisted until acknowledged
  • Backpressure: Queue depth monitoring prevents overload
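A toy model of the consumer-group pattern makes the load-balancing and acknowledgement behavior concrete. Real Laddr uses Redis commands (XADD, XREADGROUP, XACK); the class below only mimics their semantics in memory:

```python
import itertools

class Stream:
    """Toy model of a Redis Stream with one consumer group (illustrative only)."""
    def __init__(self, workers):
        self.entries = []        # (id, task) pairs, persisted in order
        self.pending = {}        # id -> worker, kept until acknowledged
        self._cursor = 0         # index of the next undelivered entry
        self._ids = itertools.count(1)
        self._workers = itertools.cycle(workers)  # round-robin load balancing

    def add(self, task):
        """XADD: append a task to the stream."""
        self.entries.append((next(self._ids), task))

    def read(self):
        """XREADGROUP: deliver the next entry to the next worker in the group."""
        if self._cursor >= len(self.entries):
            return None
        entry_id, task = self.entries[self._cursor]
        self._cursor += 1
        worker = next(self._workers)
        self.pending[entry_id] = worker   # stays pending until acked
        return worker, entry_id, task

    def ack(self, entry_id):
        """XACK: a worker confirms the task is done; entry leaves pending."""
        del self.pending[entry_id]

stream = Stream(["worker-1", "worker-2"])
stream.add({"query": "a"})
stream.add({"query": "b"})
w1, id1, _ = stream.read()
w2, id2, _ = stream.read()
stream.ack(id1)
print(w1, w2, len(stream.pending))  # → worker-1 worker-2 1
```

The `pending` dict is the key to reliability: a crashed worker's unacked entries remain claimable, so no task is silently lost.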

Trace Storage

All agent executions are automatically traced:

  • Complete History: Every tool call, LLM interaction, delegation
  • Structured Data: JSON traces with metadata
  • Fast Queries: Indexed by job_id, agent_name, timestamp
  • No External Dependencies: Built-in, no Jaeger or DataDog needed
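Trace rows can be pictured as structured records like the ones below. The field names (`job_id`, `agent_name`, `event`, `timestamp`) mirror the indexes mentioned above, but the exact schema is an assumption:

```python
# Illustrative trace rows as they might land in PostgreSQL (schema is assumed).
traces = [
    {"job_id": "job-1", "agent_name": "researcher", "event": "llm_call",
     "timestamp": "2025-01-01T12:00:00Z"},
    {"job_id": "job-1", "agent_name": "researcher", "event": "delegation",
     "timestamp": "2025-01-01T12:00:02Z"},
    {"job_id": "job-2", "agent_name": "writer", "event": "tool_call",
     "timestamp": "2025-01-01T12:01:00Z"},
]

def traces_for(job_id=None, agent_name=None):
    """Filter traces the way an indexed query on job_id/agent_name would."""
    return [
        t for t in traces
        if (job_id is None or t["job_id"] == job_id)
        and (agent_name is None or t["agent_name"] == agent_name)
    ]

print(len(traces_for(job_id="job-1")))                        # → 2
print([t["event"] for t in traces_for(agent_name="writer")])  # → ['tool_call']
```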

Artifact Storage

Large payloads automatically stored:

  • Automatic Threshold: Messages >1MB stored as artifacts
  • S3-Compatible: MinIO (local) or AWS S3 (production)
  • Efficient Messaging: Only artifact reference sent via Redis
  • On-Demand Retrieval: Workers fetch artifacts when needed
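The threshold logic amounts to a simple branch: small payloads ride inline on the bus, large ones are swapped for an object-store reference. A sketch, with a dict standing in for MinIO/S3 and a hypothetical key scheme:

```python
ARTIFACT_THRESHOLD = 1_000_000  # ~1 MB, per the threshold described above

artifact_store = {}  # stand-in for MinIO/S3

def prepare_message(job_id: str, payload: bytes) -> dict:
    """Send small payloads inline; offload large ones and send a reference."""
    if len(payload) <= ARTIFACT_THRESHOLD:
        return {"job_id": job_id, "payload": payload}
    key = f"artifacts/{job_id}"       # hypothetical object key scheme
    artifact_store[key] = payload     # would be an S3/MinIO put in practice
    return {"job_id": job_id, "artifact_ref": key}

def resolve(message: dict) -> bytes:
    """Workers fetch the artifact on demand when only a reference was sent."""
    if "payload" in message:
        return message["payload"]
    return artifact_store[message["artifact_ref"]]

small = prepare_message("job-1", b"x" * 10)
large = prepare_message("job-2", b"x" * 2_000_000)
print("artifact_ref" in large, resolve(large) == b"x" * 2_000_000)  # → True True
```

Keeping only the reference on the Redis side is what keeps stream entries small and the bus fast regardless of payload size.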

API Endpoints

Laddr provides a REST API for job management, observability, and agent interaction. Key endpoints include:

Job Management:

  • POST /api/jobs - Submit a new job
  • GET /api/jobs/{job_id} - Get job status and result
  • GET /api/jobs - List all jobs
  • POST /api/jobs/{job_id}/replay - Replay a failed job

Observability:

  • GET /api/traces - Get execution traces
  • GET /api/metrics - Get system metrics
  • GET /api/logs/containers - List Docker containers

Agent Management:

  • GET /api/agents - List all available agents
  • GET /api/agents/{agent_name}/tools - Get agent's tools
  • GET /api/agents/{agent_name}/chat - Interactive chat with agent

Refer to the official API documentation for complete endpoint reference and request/response formats.
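As a sketch, a thin Python client over these endpoints might look like the following. Only the paths come from the endpoint list above; the base URL and port match the curl example later in this page and may differ in your deployment:

```python
import json
import urllib.request

BASE = "http://localhost:8000"  # assumed default; check your deployment

def job_url(job_id: str) -> str:
    """URL for GET /api/jobs/{job_id}."""
    return f"{BASE}/api/jobs/{job_id}"

def submit_job(agent_name: str, inputs: dict) -> urllib.request.Request:
    """Build (but do not send) the POST /api/jobs request."""
    body = json.dumps({"agent_name": agent_name, "inputs": inputs}).encode()
    return urllib.request.Request(
        f"{BASE}/api/jobs",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = submit_job("researcher", {"query": "Latest AI trends 2025"})
print(req.get_method(), req.full_url)  # → POST http://localhost:8000/api/jobs
```

Sending the request with `urllib.request.urlopen(req)` (or any HTTP client) against a running Laddr server would return the new job's id, which you then poll via `job_url`.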

Use Cases

Multi-Agent Workflows

  • Research Agents: Delegate research tasks to specialized agents
  • Content Generation: Coordinate multiple agents for content creation pipelines
  • Data Processing: Parallel processing across multiple agent workers
  • Task Orchestration: Complex workflows with agent coordination

Enterprise Applications

  • Customer Service: Multi-agent systems for handling customer inquiries
  • Document Processing: Parallel document analysis and extraction
  • Business Automation: Complex business processes with agent coordination
  • Data Analysis: Distributed data analysis across agent workers

Development & Testing

  • Agent Development: Build and test multi-agent systems locally
  • System Debugging: Use dashboard to trace and debug agent interactions
  • Performance Testing: Scale agents horizontally for load testing
  • Prototyping: Rapidly prototype multi-agent applications

Specialized Applications

  • AI Research: Build research systems with specialized agent teams
  • Content Moderation: Multi-agent content review and moderation
  • Financial Analysis: Parallel financial data processing
  • Healthcare: Coordinated agent systems for medical data analysis

Getting Started

Step 1: Installation

# Install Laddr
pip install laddr

# Or using uv
uv pip install laddr

Step 2: Define Your Agents

# Example agent definition (check official docs for exact syntax)
from laddr import agent  # hypothetical import; the decorator's actual home may differ

# Define a researcher agent
@agent(name="researcher")
async def researcher_agent(query: str) -> dict:
    """Research agent that searches and summarizes information."""
    # Agent logic here: call tools, query an LLM, return a result
    return {"answer": "Research results..."}

Refer to the official documentation for the exact API syntax and agent definition patterns.

Step 3: Start the System

# Start Laddr in development mode
# (Check official docs for exact command syntax)
laddr run dev

# This starts required services:
# - Redis (message bus)
# - PostgreSQL (trace storage)
# - MinIO (artifact storage)
# - FastAPI server (REST API)
# - React dashboard (observability)

Refer to the official documentation for exact setup commands and configuration.

Step 4: Submit Jobs

# Submit a job via API (check docs for exact endpoint and port)
curl -X POST http://localhost:8000/api/jobs \
  -H "Content-Type: application/json" \
  -d '{
    "agent_name": "researcher",
    "inputs": {
      "query": "Latest AI trends 2025"
    }
  }'

Refer to the API documentation for exact endpoint details and request formats.

Step 5: Monitor in Dashboard

Access the dashboard (default port may vary - check documentation) to:

  • View job status and results
  • Explore execution traces
  • Monitor system metrics
  • Stream container logs
  • Test agents interactively

Limitations

  • Infrastructure Requirements: Requires Redis, PostgreSQL, and object storage
  • Python Only: Currently only supports Python agents
  • Learning Curve: Requires understanding of multi-agent concepts
  • Resource Intensive: Multiple services need to run simultaneously
  • Local Development: Dashboard and services need to be running locally
  • Deployment Complexity: Production deployment requires infrastructure setup

Alternatives

  • AgentGPT - Browser-based multi-agent system builder
  • n8n - Workflow automation with AI agents
  • Manus - AI agent platform with visual workflow builder

Community & Support

Built for production. Designed for scale. Made transparent.
