Open-source orchestrator for Claude Code, Codex, Aider, and local models. Goal decomposition, task pipelines, cross-model review, live dashboard. Self-hosted. Nothing to replace — adopt your running agents.
Real-time dashboard with live output streaming, status detection, and task progress. This is what orchestrating 8 agents looks like.
claude-code AUTO · ✳ Refactoring auth module... updated 3 files
claude-code BYPASS · ✳ Cooking... adding JWT refresh token endpoint
claude-code AUTO · ✳ Simmering... task-dispatcher DAG resolver
codex · spawned · Cooked for 4m 12s · 6 files changed
aider · adopted · Writing tests for auth middleware... 14/18 passing
aider · adopted · Brewed for 2m 8s · updated API reference
claude-code EDITS · ✳ Reviewing PR #42... checking test coverage
aider-deepseek · Sauteed for 5m 44s · migrations complete
Stop running one agent at a time. Orchestrate an entire team.
Any CLI agent, any LLM backend. Mix and match as you like.
Anthropic
Full autonomy, file editing, bash, git
OpenAI
Full auto mode, sandboxed execution
Open Source
Git-aware pair programming, any LLM
Any CLI tool
Anything that runs in tmux
Use different models for different agents. Mix cloud + local freely.
From zero to a fleet of coding agents in minutes.
Discover existing tmux sessions or create new ones. Zero disruption to running agents.
Describe what you want. AI decomposes it into a DAG pipeline with dependencies.
Auto-dispatch to idle agents, monitor progress, handle retries, manage artifacts.
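The session-discovery step above can lean on tmux's own listing. Here is a sketch of the parsing side only, assuming the default `tmux list-sessions` line format; the actual subprocess call and WaveCode's real discovery logic are not shown:

```python
def parse_tmux_sessions(output):
    """Extract session names from `tmux list-sessions` default output,
    e.g. 'cl-frontend: 1 windows (created Mon Jan 6 10:00:00 2025)'."""
    names = []
    for line in output.splitlines():
        if ":" in line:
            # Session name is everything before the first colon.
            names.append(line.split(":", 1)[0])
    return names
```

Each discovered name can then be offered for adoption (compare `wavecode adopt cl-frontend` in the install steps below).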
WaveCode is free and open source. It doesn't call any LLM APIs itself — it orchestrates the CLI agents you already use. Your subscriptions, your API keys, your hardware. WaveCode just makes them work together.
Claude Code, Codex CLI, or any CLI agent running on a subscription plan. WaveCode monitors and dispatches — the billing is between you and the provider.
Use Aider or custom agents with API keys from any provider. Mix cloud and local models across your agent fleet.
Run fully offline with Ollama, LM Studio, or any OpenAI-compatible endpoint. Zero data leaves your machine.
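For the fully offline path, an agent only needs an OpenAI-compatible base URL. A hypothetical config fragment pointing an Aider-style agent at a local Ollama server — the keys and model name here are illustrative, not WaveCode's actual schema:

```python
# Hypothetical agent config: any OpenAI-compatible endpoint works.
LOCAL_AGENT = {
    "runtime": "aider",
    # Ollama serves an OpenAI-compatible API on this default port.
    "base_url": "http://localhost:11434/v1",
    "model": "qwen2.5-coder:14b",   # any locally pulled model
    "api_key": "ollama",            # placeholder; local servers ignore it
}
```

Swapping `base_url` to an LM Studio or other compatible endpoint is the only change needed.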
Not a wrapper. A full orchestration platform with research, review, knowledge management, and team collaboration built in.
Define goals in plain English. AI decomposes them into sub-tasks with a dependency graph. Tasks auto-dispatch to idle agents, respecting the DAG order.
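The decompose-and-dispatch flow can be sketched as a topological walk over the task graph. A minimal illustration; the `tasks` shape and function name are hypothetical, not WaveCode's actual API:

```python
from collections import deque

def dispatch_order(tasks):
    """Yield task ids in an order that respects the dependency DAG.

    `tasks` maps task id -> set of prerequisite task ids (a stand-in
    for WaveCode's real pipeline objects).
    """
    pending = {tid: set(deps) for tid, deps in tasks.items()}
    ready = deque(tid for tid, deps in pending.items() if not deps)
    order = []
    while ready:
        tid = ready.popleft()          # hand this task to an idle agent
        order.append(tid)
        for other, deps in pending.items():
            if tid in deps:
                deps.remove(tid)       # prerequisite finished
                if not deps:
                    ready.append(other)
    if len(order) != len(tasks):
        raise ValueError("cycle in task graph")
    return order
```

For example, with `{"design": set(), "impl": {"design"}, "tests": {"impl"}, "docs": {"design"}}`, every dispatched task appears only after all of its prerequisites.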
Real-time agent output via SSE. No polling. See what every agent is doing right now.
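Server-sent events are just line-oriented text over a long-lived HTTP response, which is why no polling is needed. A sketch of the consuming side, following the standard SSE field names (`event:`, `data:`) rather than anything WaveCode-specific:

```python
def parse_sse(lines):
    """Parse SSE stream lines into (event, data) tuples.

    Events are separated by blank lines; multiple `data:` lines
    within one event are joined with newlines, per the SSE spec.
    """
    event, data = "message", []
    for line in lines:
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "":                 # blank line terminates one event
            if data:
                yield event, "\n".join(data)
            event, data = "message", []
```

A browser gets this for free via `EventSource`; the sketch shows what any other client has to do.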
AI-powered chat that can spawn agents, dispatch tasks, and query status conversationally.
One AI reviews another's code. Self-review or cross-model review with automated fix loops.
LLM-powered completion check. Did the agent actually finish? Auto-retry on failure.
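The verify-and-retry loop reduces to a simple pattern. A sketch with hypothetical names — `dispatch` stands in for the agent run and `verify` for the LLM completion check:

```python
def run_with_verification(dispatch, verify, max_retries=2):
    """Dispatch a task, ask a judge whether it really finished,
    and re-dispatch on failure, up to max_retries extra attempts."""
    for attempt in range(max_retries + 1):
        result = dispatch(attempt)
        if verify(result):          # LLM-powered completion check
            return result
    raise RuntimeError("task failed verification after retries")
```

The same skeleton covers the cross-model review fix loop: replace `verify` with a reviewer model and `dispatch` with an agent applying its feedback.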
AI optimizes your prompts for each runtime. Context-aware, considering agent state.
Auto-extract architectural decisions from transcripts. Feed them back as context.
Run one-shot research jobs against any LLM provider: Anthropic, OpenAI, Gemini, Perplexity, xAI. Get specs, analysis, or architecture docs with cost tracking. Fork and iterate.
Sync knowledge from Git repos. Attach guides to agents as context for better output.
Git-based agent templates. Spawn pre-configured agents with one click.
Group agents into teams with roles: implementer, reviewer, tester, spec-writer.
Agents auto-briefed on sibling work, recent changes, and architectural decisions.
Share files, screenshots, specs between agents. Categorized as spec, review, context, or output.
Agents notify each other on completion. Broadcast status, request help, share findings.
Your code never leaves your server. Tailscale auth, sandbox enforcement, git push restrictions. No cloud dependency.
Manage from your phone via Tailscale. Full PWA with offline support.
Web Push, ntfy.sh, and Telegram alerts for task events.
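ntfy.sh alerts are plain HTTP POSTs (message in the body, title in a header), so a notifier is tiny. This sketch only builds the request so it can be shown without network access; the topic name is made up:

```python
def build_ntfy_request(topic, title, message, server="https://ntfy.sh"):
    """Return (url, headers, body) for an ntfy.sh publish call.

    Building is separated from sending so it is testable offline.
    """
    return (
        f"{server}/{topic}",
        {"Title": title},          # ntfy reads the notification title here
        message.encode("utf-8"),   # POST body is the message text
    )
```

Sending is then one `urllib.request.urlopen` (or `curl -d`) call with those three values.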
Auto-detect hung agents, auto-restart, crash recovery. Keeps your fleet running.
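Hung-agent detection largely reduces to "no output for too long". A minimal sketch — the timeout, the map shape, and the function name are all illustrative assumptions, while the real monitor watches tmux panes:

```python
def find_hung_agents(last_output_at, now, timeout_s=300):
    """Return ids of agents whose last output is older than timeout_s.

    `last_output_at` maps agent id -> unix timestamp of last activity.
    """
    return [aid for aid, ts in last_output_at.items()
            if now - ts > timeout_s]
```

Anything this returns is a candidate for restart and crash recovery.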
Built in the open. Contribute, extend, self-host. No vendor lock-in, ever.
We don't consult on AI — we build it. WaveCode is one of several production systems we ship: an orchestrator, a document intelligence platform, a finance AI, and the hardware that runs them.
The same engineers who build these products deploy them inside banks, insurers and manufacturers who cannot send a single byte to a public cloud. Maximum security. Maximum privacy. On your hardware. Under your control.
Need help deploying WaveCode at scale?
One command to install. Three commands to your first dashboard.
curl -fsSL https://wavecode.dev/install.sh | bash
wavecode scan
wavecode adopt cl-frontend
open http://localhost:3777
Requirements: Node.js 22+ · tmux · git · Linux or macOS
Setting up a dedicated server? See the server setup guide for full instructions.