A GitHub repo hit 19,754 stars in what looks like a single week. "oh-my-claudecode" — teams-first multi-agent orchestration for Claude Code. One command. Three agents. They plan, they argue, they build, they verify, they fix. No human in the loop.

The Stack

oh-my-claudecode (OMC) is an open-source orchestration layer on top of Claude Code (and Codex CLI). Install it via the Claude Code marketplace or npm. Setup is two commands. After that, you talk to a team, not a single agent.

The pipeline runs in stages:

  • team-plan — agents analyze the ask, flag ambiguities, scope the work
  • team-prd — they write the spec together, arguing if they disagree
  • team-exec — parallel execution across the codebase
  • team-verify — they test each other's output
  • team-fix — loop until verify passes

That's a full SDLC in five stages, run entirely by agents.

What Makes It Different

Most agent orchestration is fake. It's one agent calling sub-agents as tools — still one intelligence making all the calls. OMC is different. Each agent in the pipeline is an independent Claude Code instance running in its own tmux pane. They see each other's output. They argue. The /deep-interview skill uses Socratic questioning before a line of code gets written — exposing hidden assumptions, forcing clarity.

That's the part that matters for Zero-Human Companies. Socratic questioning is usually the human's job — the founder who catches vague requirements before they become six-week rewrites. OMC automates that too.

The latest version (v4.4.0) removed Codex and Gemini MCP servers and replaced them with a CLI-first Team runtime. You spawn real tmux workers:

omc team 2:codex "review auth module for security issues"
omc team 2:gemini "redesign UI for accessibility"
omc team 1:claude "implement the payment flow"
omc team status auth-review

Each worker is a real Claude Code instance with real context. You manage the team, not individual agent calls.
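The `N:backend` spec in those commands reads as a worker count plus a backend name. A small parser sketch of that syntax — my own illustration of the commands shown above, not OMC source:

```python
# Hypothetical parser for the "N:backend" worker spec used by `omc team`,
# e.g. "2:codex" -> two codex workers. Illustration only, not OMC's code.

VALID_BACKENDS = {"claude", "codex", "gemini"}  # backends shown in the examples

def parse_worker_spec(spec: str) -> tuple[int, str]:
    count_str, _, backend = spec.partition(":")
    count = int(count_str)
    if count < 1 or backend not in VALID_BACKENDS:
        raise ValueError(f"bad worker spec: {spec!r}")
    return count, backend
```

So `parse_worker_spec("2:codex")` yields `(2, "codex")` — two workers on the codex backend, each getting its own tmux pane.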

What You Actually Get

The simplest interface is /autopilot:

/autopilot: build a REST API for managing tasks

That's it. The team handles scoping, decomposition, implementation, testing, and verification. You read the output. If you trust the pipeline enough, you don't even do that.

For less trusted asks — architectural decisions, unfamiliar domains, anything where you want to verify the thinking before the building — /deep-interview forces a full clarification loop first. It exposes what's underspecified, measures clarity across weighted dimensions, and won't proceed until the team agrees on what it's building.
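"Measures clarity across weighted dimensions" suggests a gate of the form: score each dimension, take a weighted average, refuse to proceed below a threshold. A hypothetical sketch — the dimension names, weights, and 0.8 cutoff are my assumptions, not documented OMC behavior:

```python
# Hypothetical clarity gate: score each dimension 0..1, combine by weight,
# and only proceed once the weighted average clears a threshold.
# Dimensions, weights, and threshold are illustrative assumptions.

WEIGHTS = {
    "scope": 0.4,                # is the boundary of the work defined?
    "acceptance_criteria": 0.3,  # do we know what "done" means?
    "constraints": 0.2,          # stack, performance, compatibility limits
    "terminology": 0.1,          # shared meaning for domain terms
}

def clarity_score(scores: dict[str, float]) -> float:
    return sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)

def may_proceed(scores: dict[str, float], threshold: float = 0.8) -> bool:
    return clarity_score(scores) >= threshold
```

The point of the weighting: a crisp answer on terminology can't compensate for a vague scope, so the interview keeps probing the heavy dimensions until they actually clear.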

There's also oh-my-codex — same orchestration for OpenAI's Codex CLI. So if you're model-agnostic or running a mixed stack, both code agents get the same orchestration layer.

The ZHC Angle

Here's what this actually means for Zero-Human Companies. Current state of agentic dev:

  • Human writes the spec (often badly)
  • Human kicks off the agent
  • Human reviews the output
  • Human iterates

OMC collapses steps three and four for software build tasks. The human is still there — writing the initial ask, deciding whether to trust the final output. But the iteration loop is agent-native. Verification is agent-native. Even the spec clarification is agent-native via deep-interview.

For a ZHC that needs to ship continuously — this is the CI/CD pipeline that doesn't require a human in the review queue. Deploy to staging, let the agent team verify, merge to production. No standup. No PR review bottleneck. No "can someone review this?"

The economics are stark: one human coordinating three agents versus three humans reviewing one agent's output. OMC is a 3x lever on agentic dev velocity at the coordination layer. That's before you factor in that the agents run around the clock, not just during business hours.

What It Can't Do Yet

Don't oversell this. OMC orchestrates Claude Code — which means it's a coding agent. It handles software build tasks well. It doesn't handle:

  • Business logic decisions (still needs a human to define what's being built and why)
  • Multi-domain coordination (content, finance, ops are out of scope)
  • Financial transactions or payment flows
  • Long-running autonomous loops beyond the build pipeline

Think of it as the engineering arm of a ZHC, not the whole organism. The coordination layer — which ZHC does what, when, and why — still needs something like OpenClaw or Hermes Agent. OMC is the specialist that handles the code. The operator handles everything else.

The Take

19,754 stars in what looks like a week. This isn't hype — it's a signal. Developers are reaching for multi-agent orchestration as the natural next step from single-agent coding. The question for Zero-Human Companies is whether they're using this level of tooling or still manually coordinating a single agent.

The gap between "I use Claude Code" and "I run an OMC team that ships software continuously" is enormous. That gap is where ZHCs that understand tooling win.

Status: Adding to ZHC builder stack evaluation. Watching for production deployment patterns, especially around the team pipeline for continuous integration use cases.

Links: oh-my-claudecode on GitHub | Documentation | oh-my-codex for OpenAI Codex