Anthropic's Costly Slip

While OpenAI was scaling up, Anthropic was dealing with an unexpected misstep. Its coding agent, Claude Code, had its entire source code exposed via a public npm source map. Not a hack. Not a breach. Just a packaging mistake.

The leak included:

- Over 500,000 lines of code
- Internal tools and system architecture
- Multi-agent workflows
- Even hints of unreleased features
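The mechanism behind a leak like this is worth pausing on. A JavaScript source map (the standard Source Map v3 format) can embed the full original source text in its `sourcesContent` field so debuggers can show un-minified code. If such a `.map` file is accidentally included in a published npm package, anyone who downloads the package can read the original sources directly. The sketch below illustrates the general mechanism with made-up file names and contents; it is not Anthropic's actual map file.

```python
import json

# Hypothetical, minimal Source Map v3 document. Real .map files ship as
# JSON alongside the minified bundle; "sourcesContent" (when present)
# holds the complete pre-minification source of each file in "sources".
example_map_json = json.dumps({
    "version": 3,
    "file": "bundle.min.js",
    "sources": ["src/agent.ts", "src/planner.ts"],          # invented paths
    "sourcesContent": [
        "export function runAgent() { /* original TypeScript */ }",
        "export function plan() { /* original TypeScript */ }",
    ],
    "mappings": "AAAA",
})

def recover_sources(map_text: str) -> dict:
    """Return {original_path: original_source} pairs from a v3 source map."""
    source_map = json.loads(map_text)
    paths = source_map.get("sources", [])
    contents = source_map.get("sourcesContent") or []
    return dict(zip(paths, contents))

recovered = recover_sources(example_map_json)
for path, text in recovered.items():
    print(f"{path}: {len(text)} chars recovered")
```

The point is that no exploit is needed: recovering the code is a plain JSON read, which is why a single packaging slip is enough to expose an entire codebase.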
No user data or model weights were leaked, which is reassuring. Anthropic described the incident as a simple mistake. "This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again," an Anthropic spokesperson said in a statement. Claude Code lead Boris Cherny said the deployment process has a few manual steps, and that one of them wasn't executed correctly.

Still, the slip gave outsiders a rare look into how a top-tier AI system is actually built. It exposed an unreleased roadmap, including KAIROS (an always-on proactive assistant), ULTRAPLAN (30-minute deep planning in cloud containers), BUDDY (a gamified AI companion system), and Coordinator Mode (multi-agent orchestration). The leaked codebase was rapidly mirrored on a public GitHub repository, drawing nearly 22,000 stars within hours.

The Bigger Picture

OpenAI's story is one of confidence, cementing its position as one of the most valuable private companies. Anthropic's, by contrast, highlights fragility, showing how costly human mistakes can be. Both realities exist at the same time. Money is pouring in, and products are maturing. But the margin for error is shrinking.