EXPOSED: 512,000 Lines of Claude Code Leaked (What's Hidden Inside?)
On the morning of March 31, 2026, Anthropic accidentally published the full source code of Claude Code — its flagship AI developer tool — to the public npm registry. The cause was a single misconfigured build artifact. The consequences will be significantly harder to contain.
Version 2.1.88 of the @anthropic-ai/claude-code npm package shipped with a 59.8 MB JavaScript source map file called cli.js.map. Source maps are debugging files that link minified production code back to the original source. They are generated automatically by build toolchains and are meant to stay private. A single misconfigured line in the package ignore settings let this one go out with the release. The file pointed directly to a publicly accessible ZIP archive sitting in Anthropic's own Cloudflare R2 storage bucket. Nobody had to hack anything. The source was just there.
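To see why a stray .map file is so dangerous, it helps to know what's inside one. A minimal sketch, using the field names from the Source Map v3 format — the URL below is a placeholder for illustration, not the actual bucket address from the incident:

```typescript
// Sketch: what a leaked .map file can reveal. Field names follow the
// Source Map v3 format; the URL below is a placeholder, not the real one.
interface SourceMapV3 {
  version: number;
  sources: string[];      // paths or URLs of the original source files
  sourceRoot?: string;    // optional prefix applied to every `sources` entry
  sourcesContent?: (string | null)[]; // inlined original source text, if present
}

// Return any `sources` entries that resolve to absolute URLs --
// the tell-tale sign that a map points at externally hosted source.
function externalSources(map: SourceMapV3): string[] {
  const root = map.sourceRoot ?? "";
  return map.sources
    .map((s) => root + s)
    .filter((s) => /^https?:\/\//.test(s));
}

const leaked: SourceMapV3 = {
  version: 3,
  sources: ["https://example.com/internal/source.zip", "src/cli.ts"],
};
console.log(externalSources(leaked)); // only the absolute URL survives
```

Anyone who downloads the published package can open the map in a text editor and read these fields directly — which is exactly what happened here.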
By 4:23 a.m. ET, Chaofan Shou, a security researcher and intern at blockchain security firm Fuzzland, spotted the file and posted the direct bucket link on X. His post reached over 3.1 million views. Within hours, the roughly 512,000-line TypeScript codebase — spanning nearly 1,900 files — had been mirrored across GitHub, forked more than 41,500 times, and archived on decentralized platforms specifically designed to resist DMCA takedowns. A Gitlawb mirror carried a single message: "Will never be taken down."
Anthropic's response
Anthropic pulled the npm package and issued DMCA takedowns to centralized repositories. A spokesperson confirmed the incident to multiple outlets: "Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again."
No customer data was exposed. No API keys or credentials were in the leaked files. The damage is entirely intellectual property — and it is, by every practical measure, permanent. DMCA takedowns work against centralized platforms like GitHub. Decentralized infrastructure, torrents, and mirrors that were live within hours of the disclosure are outside the reach of copyright enforcement.
Notably, this is Anthropic's second accidental source exposure in a week. Five days earlier, on March 26, a CMS misconfiguration exposed nearly 3,000 internal files including draft blog posts announcing the unreleased Claude Mythos model. And this is the second npm source map leak in roughly 13 months — a nearly identical incident involving an earlier Claude Code version occurred in February 2025. Developers are beginning to ask whether Anthropic's release pipeline has a systemic problem, not an isolated one.
What developers found inside: KAIROS
The most significant discovery in the leaked source is KAIROS — referenced more than 150 times throughout the codebase. The name draws from kairos, the ancient Greek word for the opportune moment, and the feature lives up to it.
KAIROS is an autonomous background daemon mode. When active, Claude Code operates as an always-on persistent agent: watching files, logging events, and running a process called autoDream during idle time. The autoDream system performs memory consolidation while the user is away — merging disparate observations, resolving logical contradictions in its understanding of a project, and converting vague insights into concrete, reliable facts. When the user returns, the agent's context has already been tidied and prepared. Midnight boundary handling is explicitly coded to prevent the dream process from breaking across calendar days.
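The midnight boundary detail is worth pausing on, because it hints at how carefully the idle-time scheduling was engineered. A hypothetical sketch of that kind of clamping logic — the function name and shape are assumptions, not identifiers from the leaked code:

```typescript
// Hypothetical sketch of "midnight boundary" handling for an idle-time
// consolidation window: if a run would cross into the next calendar day,
// clamp its end so a single dream pass never spans two dates.
function clampToMidnight(start: Date, durationMs: number): Date {
  const end = new Date(start.getTime() + durationMs);
  const nextMidnight = new Date(start);
  nextMidnight.setHours(24, 0, 0, 0); // start of the following day, local time
  return end > nextMidnight ? nextMidnight : end;
}

// A dream pass starting at 23:30 with a 2-hour budget gets cut at 00:00.
const cutoff = clampToMidnight(new Date(2026, 2, 31, 23, 30), 2 * 3600 * 1000);
```

A guard like this keeps per-day logs and summaries internally consistent — an observation recorded at 23:59 never ends up filed under the next day's consolidation.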
KAIROS also changes how the agent communicates. When active, it switches to a "Brief" output mode — extremely concise responses designed for a persistent background assistant that should not flood the terminal. The philosophy encoded in the comments: the difference between a chatty friend and a professional assistant who only speaks when they have something valuable to say.
What developers found inside: ULTRAPLAN, COORDINATOR, and VOICE
KAIROS was not the only unreleased feature sitting behind compile-time flags. The leaked code contains 44 feature flags in total, covering functionality that has not been announced publicly.
ULTRAPLAN is a remote planning mode in which Claude Code offloads a complex task to a Cloud Container Runtime session running Opus 4.6, gives it up to 30 minutes to reason through the problem, and returns the result for user approval through a browser interface. A special sentinel value called __ULTRAPLAN_TELEPORT_LOCAL__ handles returning the result to the local terminal once approved.
COORDINATOR MODE enables multi-agent orchestration: a single Claude Code instance spawns and manages multiple parallel worker agents, each handling separate subtasks simultaneously. VOICE_MODE adds a push-to-talk voice interface. BRIDGE_MODE and PROACTIVE MODE appear in the flag list alongside several others that developers are still analyzing.
What developers found inside: BUDDY
Perhaps the most unexpected discovery: BUDDY, a Tamagotchi-style terminal companion. Claude Code has a fully built virtual pet system — 18 species including duck, dragon, axolotl, capybara, mushroom, and ghost, with rarity tiers from common to legendary and a 1% shiny variant. Each buddy carries five stats: DEBUGGING, PATIENCE, CHAOS, WISDOM, and SNARK. Claude generates a name and personality on first hatch, complete with ASCII sprite animations and a floating heart effect.
The buddy species is determined by a Mulberry32 pseudo-random number generator seeded from the user's ID hash — meaning the same user always gets the same species. The species names were obfuscated via String.fromCharCode() arrays in the source, clearly intended never to be read externally. Leaked internal notes indicate a planned teaser rollout for April 1–7, 2026, starting with Anthropic employees, before going live more broadly in May.
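Mulberry32 is a well-known public-domain 32-bit PRNG, so the mechanics here are easy to reconstruct. A sketch of how a seeded generator produces a stable species assignment — the species list and seed below are illustrative, not the leaked values:

```typescript
// Mulberry32: a standard public 32-bit PRNG. Seeding it with a stable
// user-ID hash means the same user always draws the same species.
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = a;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // uniform in [0, 1)
  };
}

// Names hidden the way String.fromCharCode() arrays obfuscate literals:
const SPECIES = [
  String.fromCharCode(100, 117, 99, 107),                 // "duck"
  String.fromCharCode(100, 114, 97, 103, 111, 110),       // "dragon"
  String.fromCharCode(97, 120, 111, 108, 111, 116, 108),  // "axolotl"
];

function speciesFor(userIdHash: number): string {
  const rng = mulberry32(userIdHash);
  return SPECIES[Math.floor(rng() * SPECIES.length)];
}

console.log(speciesFor(0xc0ffee) === speciesFor(0xc0ffee)); // true
```

The obfuscation buys very little: fromCharCode arrays decode to plain strings the moment anyone pastes them into a console, which is how the species list surfaced within hours of the leak.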
What developers found inside: telemetry, anti-distillation, and undercover mode
The leak exposed three categories of functionality that sparked the most debate in developer communities.
Telemetry: Claude Code scans prompts for profanity as a frustration signal. It does not log full user conversations or code — but the profanity detection is explicitly intended to surface when users are hitting walls with the tool, presumably to inform product improvements. The combination of behavioral monitoring and productivity tooling sitting on developer machines is the kind of detail that security-conscious engineering teams will want to know about.
Anti-distillation: The source contains mechanisms designed to prevent competitors from using Claude Code's outputs to train rival models. Fake tool injections are inserted into outputs under specific conditions — when the ANTI_DISTILLATION_CC compile-time flag is active, the CLI entrypoint is in use, a first-party API provider is connected, and a GrowthBook feature flag returns true. A MITM proxy stripping the anti_distillation field from request bodies would bypass it entirely. The mechanism is real but narrowly targeted.
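The "narrowly targeted" claim follows from the gate itself: all four conditions must hold at once. A sketch of that conjunction — the interface and field names below are paraphrased from the description above, not verbatim identifiers from the codebase:

```typescript
// Sketch of the four-way gate described in the leak: every condition
// must be true before a fake tool injection fires. Names are paraphrased,
// not verbatim identifiers from the leaked source.
interface SessionContext {
  antiDistillationCompiled: boolean; // ANTI_DISTILLATION_CC build flag set
  entrypoint: "cli" | "sdk" | "ide"; // only the CLI entrypoint qualifies
  firstPartyProvider: boolean;       // connected to a first-party API
  growthBookFlag: boolean;           // remote GrowthBook flag returned true
}

function shouldInjectFakeTools(ctx: SessionContext): boolean {
  return (
    ctx.antiDistillationCompiled &&
    ctx.entrypoint === "cli" &&
    ctx.firstPartyProvider &&
    ctx.growthBookFlag
  );
}
```

Flipping any single input to false disables the mechanism entirely, which is why a session through the SDK, a third-party provider, or a build without the flag never sees an injection.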
Undercover Mode: Explicit instructions in the code direct the agent to scrub all traces of its AI origins from git commit messages when operating in open-source repositories. Internal Anthropic model names and attributions are removed from public logs automatically. The intent appears to be keeping AI-generated commits clean and unattributable — but the feature's existence in a leaked codebase has triggered predictable commentary about transparency.
The concurrent axios supply chain attack
Separate from the source leak but critically relevant to any developer who updated Claude Code on March 31: a supply chain attack on the widely used axios npm package was active between 00:21 and 03:29 UTC on the same morning. Malicious versions 1.14.1 and 0.30.4 of axios contained a Remote Access Trojan delivered via a dependency called plain-crypto-js.
Developers who installed or updated Claude Code via npm during that specific window may have pulled in the compromised axios version. If your project lockfile (package-lock.json, yarn.lock, or bun.lockb) contains axios 1.14.1 or 0.30.4, or the dependency plain-crypto-js, treat the machine as potentially compromised. Rotate all credentials, API keys, and SSH keys, and consider a clean system reinstall.
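For npm projects, the check can be automated against the parsed lockfile. A hedged sketch that covers only the npm v2/v3 package-lock.json layout (the "packages" map keyed by node_modules paths) — yarn.lock and bun.lockb use different formats and need their own handling:

```typescript
// Sketch: scan a parsed package-lock.json (npm v2/v3 "packages" shape)
// for the compromised axios versions or the plain-crypto-js dropper.
const COMPROMISED = new Set(["axios@1.14.1", "axios@0.30.4"]);

interface Lockfile {
  packages: Record<string, { version?: string }>;
}

function findCompromised(lock: Lockfile): string[] {
  const hits: string[] = [];
  for (const [path, meta] of Object.entries(lock.packages)) {
    // Keys look like "node_modules/axios" or nested
    // "node_modules/a/node_modules/axios"; strip down to the bare name.
    const name = path.replace(/^.*node_modules\//, "");
    if (name === "plain-crypto-js") hits.push(`${name}@${meta.version}`);
    if (COMPROMISED.has(`${name}@${meta.version}`)) hits.push(`${name}@${meta.version}`);
  }
  return hits;
}
```

A non-empty result from a check like this means the remediation above applies in full: rotate every credential and treat the machine as compromised, not merely the project.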
What to do right now
Anthropic has designated its native installer as the recommended installation method going forward: curl -fsSL https://claude.ai/install.sh | bash. The native binary does not rely on the npm dependency chain and supports background auto-updates. Developers still on npm should uninstall version 2.1.88 and pin to 2.1.86 or wait for a verified safe release (2.1.89 or higher). Rotate your Anthropic API keys through the developer console and monitor your usage logs for anomalies, particularly if you run Claude Code inside freshly cloned or untrusted repositories.
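The version guidance above reduces to a simple classification. An illustrative helper encoding it — 2.1.88 is the leaky release, 2.1.86 the safe pin, 2.1.89 and up the verified fix; a real check should lean on a proper semver library rather than this hand-rolled comparison:

```typescript
// Illustrative only: encodes this article's version guidance for
// @anthropic-ai/claude-code. Use a real semver library in practice.
function releaseStatus(version: string): "avoid" | "safe-pin" | "fixed" | "unknown" {
  if (version === "2.1.88") return "avoid";    // shipped the source map
  if (version === "2.1.86") return "safe-pin"; // recommended pin
  const [maj, min, patch] = version.split(".").map(Number);
  if (maj > 2 || (maj === 2 && min > 1) || (maj === 2 && min === 1 && patch >= 89)) {
    return "fixed"; // 2.1.89 or higher
  }
  return "unknown";
}
```

Developers who switch to the native installer sidestep the question entirely, since the binary is distributed outside npm's dependency chain.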
The bigger picture
The code cannot be put back. What Anthropic built inside Claude Code turns out to be far more sophisticated than its public presentation suggested — a multi-threaded, multi-agent system with autonomous background operation, remote planning, memory consolidation, and a product roadmap significantly ahead of what has been released publicly. Competitors can now study the architecture in detail. The strategic surprise of KAIROS, ULTRAPLAN, and coordinator mode is gone.
One analysis by developer Gergely Orosz pointed out the clean-room rewrite problem immediately: a Python reimplementation of Claude Code's functionality inspired by — but not directly copying — the leaked TypeScript is a new creative work, not a copyright violation, and cannot be taken down. Multiple such rewrites were already in progress by the end of March 31.
There is also the question of whether Anthropic holds valid copyright over code that its own CEO has said is 90% written by Claude itself. The legal standing of AI-generated work remains unsettled. If significant chunks of Claude Code were written by Claude, the copyright claim over the leaked source gets considerably murkier.
For users of AI-powered development tools, the leak's most durable message is not about Anthropic specifically. It is about the category. KAIROS is not a research prototype — it is compiled, feature-gated code sitting in a production codebase, waiting to be turned on. The question for every developer granting these tools access to their machines and codebases is whether they trust the companies deploying them to manage the security of their own infrastructure. On March 31, 2026, Anthropic provided a concrete data point on that question.