Open Source Apache 2.0 v0.1 Built by Wavenetic

Turn one CLI subscription into a dev team

Open-source orchestrator for Claude Code, Codex, Aider, and local models. Goal decomposition, task pipelines, cross-model review, live dashboard. Self-hosted. Nothing to replace — adopt your running agents.

wavecode server
5 agents active

See your agents in action

Real-time dashboard with live output streaming, status detection, and task progress. This is what orchestrating 8 agents looks like.

https://wavecode.tailnet:3777
WaveCode · 5 working · 3 idle

cl-frontend     claude-code AUTO    WORKING  Refactoring auth module... updated 3 files          UP 2h 14m
cl-backend      claude-code BYPASS  WORKING  Cooking... adding JWT refresh token endpoint        UP 1h 48m
cl-waveops      claude-code AUTO    WORKING  Simmering... task-dispatcher DAG resolver           UP 3h 22m
codex-api       codex · spawned     IDLE     Cooked for 4m 12s · 6 files changed                 UP 45m
aider-tests     aider · adopted     WORKING  Writing tests for auth middleware... 14/18 passing  UP 1h 5m
aider-docs      aider · adopted     IDLE     Brewed for 2m 8s · updated API reference            UP 58m
cl-review       claude-code EDITS   WORKING  Reviewing PR #42... checking test coverage          UP 32m
aider-deepseek  aider-deepseek      IDLE     Sauteed for 5m 44s · migrations complete            UP 1h 30m

TASKS · 5 running · 3 pending · 12 done
  • Live status detection from terminal output
  • Accessible from phone via Tailscale
  • SSE streaming — no polling
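
Status detection works from the terminal stream itself. A minimal sketch of the idea in TypeScript (the patterns below are illustrative guesses, not WaveCode's actual rules):

```typescript
type AgentStatus = "WORKING" | "IDLE";

// Illustrative patterns only; a real detector is tuned per runtime.
const IDLE_PATTERNS = [/\bfiles? changed\b/i, /\bcomplete\b/i];
const WORKING_PATTERNS = [/\.{3}/, /^\s*(Refactoring|Writing|Reviewing|Adding|Cooking)/i];

// Classify an agent from its most recent terminal lines, newest last.
function detectStatus(lastLines: string[]): AgentStatus {
  for (let i = lastLines.length - 1; i >= 0; i--) {
    const line = lastLines[i];
    if (IDLE_PATTERNS.some((p) => p.test(line))) return "IDLE";
    if (WORKING_PATTERNS.some((p) => p.test(line))) return "WORKING";
  }
  return "IDLE"; // no recent activity markers
}
```

Because classification runs on output the agent already produces, no agent-side integration is needed.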

The difference is multiplicative

Stop running one agent at a time. Orchestrate an entire team.

Without WaveCode

  • One agent running at a time, sequential tasks
  • Manual monitoring -- alt-tab, scroll, squint
  • Chained to your laptop while agents work
  • Locked into one LLM provider
  • No coordination between tasks

With WaveCode

  • 5-10+ agents running in parallel, auto-dispatched
  • Live dashboard with real-time streaming output
  • Manage from your phone via Tailscale PWA
  • Mix any LLM -- Claude, GPT, Gemini, Grok, local
  • DAG pipelines with automatic dependency resolution

Works with your stack

Any CLI agent, any LLM backend. Mix and match as you like.

Agent Runtimes

  • Claude Code (Anthropic) · Full autonomy, file editing, bash, git
  • Codex CLI (OpenAI) · Full auto mode, sandboxed execution
  • Aider (Open Source) · Git-aware pair programming, any LLM
  • Custom (any CLI tool) · Anything that runs in tmux

Supported LLMs

Cloud Providers
Claude Sonnet / Opus / Haiku · GPT-4.1 / GPT-5 / o3 / o4-mini · Gemini 2.5 Pro / Flash · Grok · Perplexity · DeepSeek · Qwen
Local / Self-Hosted
Ollama · LM Studio · vLLM · llama.cpp / llama-server · LocalAI · Jan · GPT4All · text-generation-webui (oobabooga) · KoboldCpp · Any OpenAI-compatible endpoint

Use different models for different agents. Mix cloud + local freely.
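
"OpenAI-compatible" means the server accepts the standard `POST /v1/chat/completions` request shape; any backend that does (Ollama, vLLM, llama-server, and the rest) can sit behind an agent. A sketch of the payload, with a placeholder model name and Ollama's default port:

```typescript
// Request payload for an OpenAI-compatible chat endpoint. The model name
// and URL below are placeholders for whatever your local server exposes.
const body = {
  model: "llama3.1:8b",
  messages: [{ role: "user", content: "Summarize the failing tests." }],
  stream: false,
};

// Typical local call (Ollama's default port shown):
// await fetch("http://localhost:11434/v1/chat/completions", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(body),
// });
```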

Three steps to parallel agents

From zero to a fleet of coding agents in minutes.

1. Adopt or Spawn
Discover existing tmux sessions or create new ones. Zero disruption to running agents.

2. Define Goals or Tasks
Describe what you want. AI decomposes it into a DAG pipeline with dependencies.

3. WaveCode Orchestrates
Auto-dispatch to idle agents, monitor progress, handle retries, manage artifacts.

Bring your own everything

WaveCode is free and open source. It doesn't call any LLM APIs itself — it orchestrates the CLI agents you already use. Your subscriptions, your API keys, your hardware. WaveCode just makes them work together.

Subscriptions

Claude Code, Codex CLI, or any CLI agent running on a subscription plan. WaveCode monitors and dispatches — the billing is between you and the provider.

Claude Max · ChatGPT Pro

API Keys

Use Aider or custom agents with API keys from any provider. Mix cloud and local models across your agent fleet.

Anthropic · OpenAI · Gemini

Local Models

Run fully offline with Ollama, LM Studio, or any OpenAI-compatible endpoint. Zero data leaves your machine.

Ollama · LM Studio

Everything you need to orchestrate

Not a wrapper. A full orchestration platform with research, review, knowledge management, and team collaboration built in.

Core Orchestration

DAG Task Pipeline

Define goals in plain English. AI decomposes them into sub-tasks with a dependency graph. Tasks auto-dispatch to idle agents, respecting the DAG order.
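
The dispatch rule reduces to a simple invariant: a pending task becomes dispatchable once every task it depends on is done. A sketch in TypeScript (illustrative types, not WaveCode's internal schema):

```typescript
interface Task {
  id: string;
  deps: string[]; // ids of tasks that must finish first
  status: "pending" | "running" | "done";
}

// Return the pending tasks whose dependencies are all done --
// the ones the orchestrator can hand to idle agents right now.
function readyTasks(tasks: Task[]): Task[] {
  const done = new Set(tasks.filter((t) => t.status === "done").map((t) => t.id));
  return tasks.filter(
    (t) => t.status === "pending" && t.deps.every((d) => done.has(d))
  );
}
```

Re-evaluating this on every completion event naturally walks the graph in topological order.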

Live Dashboard

Real-time agent output via SSE. No polling. See what every agent is doing right now.
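
SSE here is plain Server-Sent Events: the server writes named events over one long-lived HTTP response and the browser delivers them as they arrive. The wire framing is simple enough to sketch (the event name and payload are placeholders, not WaveCode's actual event schema):

```typescript
// Frame one Server-Sent Event: an event name, a JSON payload,
// and the blank line that terminates the event.
function sseEvent(event: string, data: unknown): string {
  return `event: ${event}\ndata: ${JSON.stringify(data)}\n\n`;
}

// Browser side: one subscription instead of polling.
// new EventSource("/events").addEventListener("agent-output", (e) => {
//   const { agent, line } = JSON.parse(e.data);
// });
```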

Command Chat

AI-powered chat that can spawn agents, dispatch tasks, and query status conversationally.

Quality & Review

Cross-Model Review

One AI reviews another's code. Self-review or cross-model review with automated fix loops.

Task Verification

LLM-powered completion check. Did the agent actually finish? Auto-retry on failure.
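
The retry flow is a small loop around dispatch and verification. A hedged sketch, where `attempt` and `verify` are stand-ins for re-dispatching the task and for the LLM completion check:

```typescript
// Re-run a task until verification accepts the result or the retry
// budget is exhausted.
async function runWithVerification<T>(
  attempt: () => Promise<T>,
  verify: (result: T) => Promise<boolean>,
  maxRetries = 2
): Promise<T> {
  for (let i = 0; i <= maxRetries; i++) {
    const result = await attempt();
    if (await verify(result)) return result;
  }
  throw new Error(`verification failed after ${maxRetries + 1} attempts`);
}
```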

Prompt Enhancement

AI optimizes your prompts for each runtime. Context-aware, considering agent state.

Decision Tracking

Auto-extract architectural decisions from transcripts. Feed them back as context.

Knowledge & Research

Research & Specs

Run one-shot research jobs against any LLM provider -- Anthropic, OpenAI, Gemini, Perplexity, xAI. Get specs, analysis, or architecture docs with cost tracking. Fork and iterate.

Guide Library

Sync knowledge from Git repos. Attach guides to agents as context for better output.

Templates

Git-based agent templates. Spawn pre-configured agents with one click.

Collaboration & Sharing

Teams & Roles

Group agents into teams with roles: implementer, reviewer, tester, spec-writer.

Context Briefing

Agents auto-briefed on sibling work, recent changes, and architectural decisions.

Artifact & File Sharing

Share files, screenshots, specs between agents. Categorized as spec, review, context, or output.

Agent Messaging

Agents notify each other on completion. Broadcast status, request help, share findings.

Infrastructure

Self-Hosted & Secure

Your code never leaves your server. Tailscale auth, sandbox enforcement, git push restrictions. No cloud dependency.

Mobile PWA

Manage from your phone via Tailscale. Full PWA with offline support.

Notifications

Web Push, ntfy.sh, and Telegram alerts for task events.

Health Monitor

Auto-detect hung agents, auto-restart, crash recovery. Keeps your fleet running.
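
Hang detection can be as simple as a staleness check, assuming the monitor timestamps each agent's last terminal output (field names and the threshold below are illustrative):

```typescript
interface AgentHealth {
  name: string;
  working: boolean;     // status from the dashboard's detector
  lastOutputAt: number; // epoch ms of the last terminal output
}

// An agent that claims to be WORKING but has produced no output for
// longer than the threshold is likely hung and a restart candidate.
function findHungAgents(
  agents: AgentHealth[],
  now: number,
  thresholdMs = 10 * 60_000
): string[] {
  return agents
    .filter((a) => a.working && now - a.lastOutputAt > thresholdMs)
    .map((a) => a.name);
}
```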

Open Source

Built in the open. Contribute, extend, self-host. No vendor lock-in, ever.

Apache 2.0 · Community-driven
Star on GitHub
Node.js · TypeScript · React 19 · SQLite · Tailwind CSS · Hono · tmux
European AI product company

Built by Wavenetic

We don't consult on AI — we build it. WaveCode is one of several production systems we ship: an orchestrator, a document intelligence platform, a finance AI, and the hardware that runs them.

The same engineers who build these products deploy them inside banks, insurers and manufacturers who cannot send a single byte to a public cloud. Maximum security. Maximum privacy. On your hardware. Under your control.

  • Our products, our engineers — no outsourcing, no black boxes, full source control
  • Private LLMs on your hardware — Claude, Llama, DeepSeek, Gemma running behind your firewall
  • Air-gapped by design — zero data egress, EU AI Act & GDPR compliant from day one
  • WaveNode turnkey hardware — we ship the box, you plug it in, your data never leaves the building

Enterprise Support

Need help deploying WaveCode at scale?

  • Dedicated support
  • Custom integrations
  • Team onboarding
  • Priority bug fixes
  • SLA guarantees
  • Private deployment consulting
Or email [email protected]

Get started in 30 seconds

One command to install. Three commands to your first dashboard.

Install
curl -fsSL https://wavecode.dev/install.sh | bash

Quick Start

1. Discover existing agents
wavecode scan

2. Start monitoring an agent
wavecode adopt cl-frontend

3. Open your dashboard
open http://localhost:3777

Requirements: Node.js 22+ · tmux · git · Linux or macOS

Setting up a dedicated server? See the server setup guide for full instructions.