ZeroClaw


The lightweight mobile companion app for the OpenClaw agent framework, giving your smartphone or tablet secure access to your self-hosted OpenClaw instance, plus on-device AI for offline use.

Tags: ZeroClaw, AI Agent, Mobile AI, Edge AI, Open Source, OpenClaw, Privacy
Developer: OpenClaw Community (Open Source)
Type: Mobile Application & Edge Framework
Pricing: Free & Open Source

ZeroClaw

ZeroClaw is the mobile companion app for the OpenClaw ecosystem — a lightweight application that connects your smartphone to your self-hosted OpenClaw agent, giving you on-the-go access to your private AI without routing queries through commercial cloud services.

Overview

Once you have an OpenClaw instance running on your home server, ZeroClaw solves the "last mile" problem: how do you access it conveniently from your phone? ZeroClaw provides a polished mobile interface that communicates with your OpenClaw backend over an encrypted tunnel (WireGuard or Tailscale), making your private agent available anywhere with internet connectivity.

ZeroClaw also includes a standalone "local mode" that runs small models directly on the device, with inference optimized for mobile NPUs, for quick, fully offline tasks when your OpenClaw server is unavailable.
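The fallback decision amounts to a reachability probe against your server. A minimal sketch from a shell, assuming your instance's Tailscale IP and port (this is not ZeroClaw's actual code):

```shell
#!/usr/bin/env bash
# Sketch of the offline-fallback decision: probe the OpenClaw server and
# fall back to the local model if it is unreachable. The host/port defaults
# below are placeholders for your own instance.
check_server() {
  # Try to open a TCP connection within 3 seconds using bash's /dev/tcp.
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

if check_server "${OPENCLAW_HOST:-100.64.0.1}" "${OPENCLAW_PORT:-8080}"; then
  echo "server reachable: routing query to OpenClaw"
else
  echo "server unreachable: falling back to local model"
fi
```

The app performs this check transparently; the script is only a way to reason about (or debug) the fallback from a shell.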

Note: ZeroClaw is the mobile component of the open-source OpenClaw ecosystem. If you haven't set up OpenClaw yet, start there.

Key Features

  • OpenClaw Sync: Seamlessly connects to your self-hosted OpenClaw instance, giving mobile access to all your installed skills, conversation history, and customizations.
  • Encrypted Tunnel: Communication between ZeroClaw and your OpenClaw server uses end-to-end encryption (WireGuard/Tailscale), so your AI interactions are never exposed to third parties in transit.
  • Local Inference Mode: For small, quick tasks, ZeroClaw can run compact models directly on your phone's NPU (Neural Processing Unit) — no internet or OpenClaw server required.
  • Voice Interface: Tap-to-talk voice mode using on-device speech recognition, with responses read aloud via on-device TTS.
  • Offline Fallback: If your OpenClaw server is unreachable, ZeroClaw automatically switches to local inference mode with reduced capability.
  • Push Notifications: Receive alerts when scheduled OpenClaw tasks complete — no need to keep the app open.
  • Conversation History: Full history of all conversations synced from your OpenClaw server.

How It Works

Connection Architecture:

[Your Phone] → [ZeroClaw App]
    ↕ (WireGuard / Tailscale VPN)
[Your Home/VPS Server] → [OpenClaw Instance]
    ↕ (Ollama / Cloud API)
[AI Model]

Technical Architecture:

  • Mobile Platform: iOS (16+) and Android (11+).
  • Server Communication: WebSocket over encrypted VPN tunnel.
  • Local Inference: Optimized for mobile NPUs using quantized models (< 2GB RAM footprint).
  • On-Device Models: llama-3.2-1B and phi-3-mini (quantized builds).
  • License: MIT (open source).
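The "< 2GB" figure checks out with simple arithmetic: at 4-bit quantization, a model's weights need roughly params × 4/8 bytes (the quantization width here is an assumption; phi-3-mini has about 3.8B parameters):

```shell
# Back-of-envelope weight memory for quantized models (weights only;
# the KV cache and runtime overhead add more on top).
weights_gb() {
  # $1 = parameters in billions, $2 = bits per weight
  awk -v p="$1" -v b="$2" 'BEGIN { printf "%.2f", p * b / 8 }'
}

echo "llama-3.2-1B @ 4-bit: $(weights_gb 1 4) GB"
echo "phi-3-mini   @ 4-bit: $(weights_gb 3.8 4) GB"
```

At 4 bits per weight, the 1B model needs about 0.5 GB and phi-3-mini about 1.9 GB, which is consistent with the ~1.8GB download size quoted below.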

Use Cases

On-the-Go Agent Access

  • Query your OpenClaw agent for quick information lookups while away from your computer.
  • Trigger skills remotely: "Check my home assistant and tell me if the garage door is open."
  • Receive and respond to OpenClaw alerts and task completions on your phone.

Fully Private Mobile AI

  • Users who don't trust commercial mobile AI assistants (Siri, Google Assistant) with their data.
  • Healthcare professionals, lawyers, and executives who need AI assistance with sensitive information.

Offline AI on the Go

  • Quick, private AI queries when on a plane or in areas with poor connectivity.
  • On-device document analysis using your phone's camera (point at a document and ask questions about it).

Getting Started

Prerequisites

  • A working OpenClaw instance on your server.
  • A VPN solution for secure remote access: Tailscale (easiest) or WireGuard.

Step 1: Set Up Secure Remote Access

# On your OpenClaw server, install Tailscale (recommended for ease):
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up

# Note your Tailscale IP (e.g., 100.64.0.1):
tailscale ip -4
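Tailscale assigns addresses from the CGNAT range 100.64.0.0/10 (100.64.x.x through 100.127.x.x), so you can sanity-check the address you noted. A small illustrative helper, not part of Tailscale's tooling:

```shell
# Tailscale addresses come from 100.64.0.0/10,
# i.e. 100.64.0.0 through 100.127.255.255.
is_tailscale_ip() {
  case "$1" in
    100.*)
      second=$(printf '%s' "$1" | cut -d. -f2)
      [ "$second" -ge 64 ] && [ "$second" -le 127 ]
      ;;
    *) return 1 ;;
  esac
}

is_tailscale_ip "100.64.0.1" && echo "valid Tailscale address"
is_tailscale_ip "192.168.1.10" || echo "not a Tailscale address (LAN IP?)"
```

If the address fails this check, you probably copied a LAN IP instead of the one printed by `tailscale ip -4`.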

Step 2: Install ZeroClaw on Your Phone

  1. Download ZeroClaw from the App Store or Google Play.
  2. Open the app.
  3. Tap "Connect to OpenClaw Server".

Step 3: Connect to Your OpenClaw Instance

  1. Enter your OpenClaw server's Tailscale IP and port (e.g., 100.64.0.1:8080; the default port is 8080).
  2. Enter your OpenClaw API key (found in OpenClaw dashboard → Settings → API Keys).
  3. Tap "Connect" — ZeroClaw will verify the connection and sync your settings.
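Before typing the details into the app, you can optionally confirm from another tailnet device that the server answers. The /api/status path and Bearer auth scheme below are assumptions about the OpenClaw API, not documented behavior; substitute the real endpoint from your instance:

```shell
#!/usr/bin/env bash
# Hypothetical pre-flight check against the OpenClaw API over the tunnel.
check_api() {
  # $1 = host, $2 = port, $3 = API key
  curl -s -m 5 -H "Authorization: Bearer $3" "http://$1:$2/api/status" > /dev/null
}

if check_api "${OPENCLAW_HOST:-100.64.0.1}" "${OPENCLAW_PORT:-8080}" "your-api-key"; then
  echo "OpenClaw API reachable"
else
  echo "unreachable: check that Tailscale is up on both devices"
fi
```

If this fails, fix connectivity first (both devices on the tailnet, OpenClaw listening on the expected port) before troubleshooting the app.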

Step 4: Test Voice Mode

  1. Tap the microphone button in ZeroClaw.
  2. Say a command: "What's on my calendar today?"
  3. ZeroClaw will route the query to your OpenClaw agent (which will use its Calendar skill) and read the response aloud.

Step 5: Enable Local Inference (Optional)

  1. Go to ZeroClaw Settings → "Local Models".
  2. Download a small on-device model (Phi-3 mini, ~1.8GB).
  3. Toggle "Use Local Model when Server Unavailable" to ON.
  4. Test by turning off Tailscale — ZeroClaw should still respond using the local model.

Best Practices

  • Use Tailscale for remote access — it's the simplest zero-configuration VPN and the free tier supports up to 100 devices.
  • Download the local model even if you primarily use OpenClaw — it's invaluable when your server is unreachable.
  • Enable push notifications for long-running OpenClaw tasks so you don't have to keep checking the app.

Pricing & Access

  • ZeroClaw App: Free to download (App Store and Google Play).
  • Core Features: All free — no subscription required.
  • OpenClaw Server: Required (also free and open-source).
  • ClawHub Pro (~$5/month): Optional subscription for managed cloud sync — useful if you don't run a permanent home server.

Limitations

  • Requires OpenClaw Server: ZeroClaw's full capabilities depend on having a working OpenClaw instance. The local-only mode is useful but limited.
  • Technical Setup: Configuring a VPN for secure remote access requires some technical knowledge.
  • Local Model Capability: On-device models (1-3B parameters) are significantly less capable than full OpenClaw with a cloud API or large local model.
  • Battery Impact: Running local inference on-device consumes significant battery — keep sessions short when using local mode.

Community & Support

For the full-scale desktop and server version, see the OpenClaw page.
