Claude Code Creator Reveals Parallel AI Workflow, Sparking Developer Frenzy
SAN FRANCISCO — Boris Cherny, head of Claude Code at Anthropic, has shared his personal development workflow on X, prompting widespread excitement across the engineering community. The disclosure details a system for running multiple Claude instances in parallel, using shared memory files and automated commands to dramatically increase individual developer output.
The thread, which has been widely analyzed and praised this week, describes a non-linear approach that treats coding like commanding units in a real-time strategy game. Industry observers say the techniques could represent a significant shift in how software is built, with some comparing Anthropic’s momentum to OpenAI’s earlier breakthroughs.
Parallel Agents and Terminal Orchestration
Cherny revealed he runs five Claude instances simultaneously in his terminal, relying on iTerm2 system notifications to alert him when a session needs input so he can juggle the parallel work streams. While one agent executes test suites, another can refactor legacy code and a third can draft documentation. He also maintains five to 10 additional Claude sessions on claude.ai, using a “teleport” command to move work seamlessly between browser and local environments.
This setup departs from the traditional linear “inner loop” of writing, testing, and iterating on individual functions. Instead, Cherny functions as a fleet commander overseeing multiple autonomous agents. One developer who adopted the approach described the experience as feeling “more like Starcraft” than conventional coding.
The workflow aligns with comments made earlier this week by Anthropic President Daniela Amodei about achieving greater productivity through better orchestration rather than solely scaling infrastructure. It stands in contrast to competitors like OpenAI, which have emphasized massive compute investments.
Preference for Largest Model and Shared Memory Solution
In a notable departure from industry focus on low-latency models, Cherny exclusively uses Anthropic’s largest and slowest model, Opus 4.5, with extended thinking capabilities. “It’s the best coding model I’ve ever used,” he wrote, explaining that although it is bigger and slower than Sonnet, superior tool use and reduced need for steering ultimately make it faster overall by minimizing human correction time.
To combat “AI amnesia” — the tendency of large language models to forget project-specific conventions across sessions — Cherny’s team maintains a single CLAUDE.md file in their git repository. Any observed mistakes or deviations are added to this file, turning each error into a permanent instruction for future interactions. This creates a self-improving system where the longer the team works with the agent, the more aligned it becomes with their standards and architecture.
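The thread does not include the contents of the team's actual file, but the pattern it describes is straightforward: a Markdown file at the repository root that accumulates conventions and corrections. A hypothetical sketch (the entries below are invented for illustration) might look like this:

```markdown
# CLAUDE.md — shared project memory (illustrative example)

## Conventions
- Use the project's logging wrapper; never print to stdout directly.
- Every new API endpoint needs an integration test before merge.

## Past mistakes to avoid
- Do not edit generated files under the build output directory; change
  the source schema instead.
- Migrations must be reversible; include a down() step.
```

Because the file lives in git, every correction is reviewed like any other change and travels to every teammate's sessions, which is what makes the system compound over time.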
Automation Through Slash Commands and Subagents
The workflow relies heavily on custom slash commands stored in the project repository. Cherny highlighted /commit-push-pr, which automates the entire process of committing changes, writing messages, pushing to remote repositories, and opening pull requests — a command he invokes dozens of times daily.
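In Claude Code, project-level slash commands are defined as Markdown files checked into the repository (conventionally under a commands directory in the project's .claude folder), with the filename becoming the command name. Cherny did not publish his command's contents; a hypothetical sketch of what a /commit-push-pr prompt file could contain:

```markdown
<!-- .claude/commands/commit-push-pr.md — hypothetical sketch -->
Review all staged and unstaged changes, group them into logical commits,
and write a concise commit message for each. Push the current branch to
the remote, then open a pull request that summarizes the changes and
includes a short test plan.
```

The command is then invoked from the Claude Code prompt as /commit-push-pr, collapsing a multi-step git ritual into a single instruction.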
He also deploys specialized subagents for different development phases, including a code-simplifier that refines architecture post-implementation and a verify-app agent that runs end-to-end tests before code is merged. These tools automate repetitive tasks and enforce verification loops, ensuring higher code quality with less manual oversight.
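Claude Code subagents are likewise defined as Markdown files with a short frontmatter header giving the agent a name and a description of when to use it; the body is the agent's system prompt. The sketch below is an assumed reconstruction of what a code-simplifier definition might look like, not Cherny's actual file:

```markdown
---
name: code-simplifier
description: Refines architecture after a feature is implemented.
---
Review the most recent changes for unnecessary abstraction, dead code,
and duplicated logic. Propose simplifications that preserve behavior,
and explain the reasoning behind each suggested change.
```

A verify-app agent would follow the same shape, with a prompt directing it to run the project's end-to-end test suite and report failures before any merge.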
Industry Reaction and Competitive Context
The response on X has been overwhelmingly positive. Jeff Tang, a prominent developer community voice, stated that failing to study Cherny’s Claude Code best practices leaves programmers “behind.” Kyle McNease described Anthropic as “on fire” and suggested the company may be approaching “their ChatGPT moment.”
VentureBeat and other outlets have covered the thread extensively, noting its potential to reshape software development practices. The techniques emphasize quality over speed and orchestration over raw scale, offering a different philosophy from some competitors’ approaches.
Impact on Developers and Teams
For individual developers, the workflow demonstrates how one person can achieve output comparable to a small engineering team through effective AI coordination. The combination of parallel agents, persistent memory via CLAUDE.md, and automated commands reduces context-switching costs and administrative overhead.
Enterprise technology leaders may find Cherny’s preference for the more capable Opus 4.5 model particularly instructive. The insight that upfront compute investment can reduce downstream human correction time challenges assumptions about optimizing solely for latency.
Development teams adopting similar practices could benefit from more consistent code quality and institutional knowledge capture through shared instruction files. The approach also suggests a path toward more scalable AI-assisted development without requiring proportionally larger teams.
What’s Next
While Cherny’s disclosure focuses on his personal and team practices rather than official product announcements, it provides a window into advanced usage patterns for Claude Code. Anthropic has not yet detailed plans to productize these specific orchestration techniques or subagent frameworks.
Developers interested in replicating the setup will need access to Claude Code and compatible terminal tools like iTerm2. The viral interest suggests increased experimentation with parallel agent workflows across the industry in coming months.
As AI coding tools mature, the emphasis on verification loops, shared memory, and multi-agent coordination demonstrated in Cherny’s thread may influence both product design and development methodology. Further details on official Anthropic guidance or new features inspired by these practices have not yet been announced.

