AI company Anthropic has inadvertently revealed details of an upcoming model release, an exclusive CEO event, and other internal data, including images and PDFs, in what appears to be a significant security lapse.
The not-yet-public information was made accessible via the company’s content management system (CMS), which Anthropic uses to publish content to sections of its website.
In total, the cache appeared to contain close to 3,000 assets linked to Anthropic’s blog that had never been published to the company’s public-facing news or research pages but were nonetheless publicly accessible, according to Alexandre Pauwels, a cybersecurity researcher at the University of Cambridge, whom Fortune asked to review and assess the material.
After Fortune informed Anthropic of the issue on Thursday, the company took steps to secure the data so that it was no longer publicly accessible.
Prior to taking these measures, Anthropic stored all the content for its website, such as blog posts, images, and documents, in a central system that was accessible without a login. Anyone with technical knowledge could send requests to that public-facing system, asking it to return information about the files it contained.
While some of this content had not been published to Anthropic’s website, the underlying system would still return the digital assets it was storing to anyone who knew how to ask. This meant unpublished material, including draft pages and internal assets, could be accessed directly.
The issue appears to stem from how Anthropic’s CMS works. All assets uploaded to the central data store, such as logos, graphics, and research papers, were public by default unless explicitly set as private. The company appeared to have forgotten to restrict access to some documents that were not supposed to be public, leaving the large cache of files available in the company’s public data lake, cybersecurity professionals who analyzed the data told Fortune. Many of the assets also had publicly reachable web addresses.
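To illustrate the general pattern, consider how a publicly queryable asset store can be enumerated. The sketch below is a hypothetical example, not Anthropic’s actual setup: the endpoint, query parameters, and response fields are all invented for illustration.

```python
# Hypothetical sketch of enumerating an unauthenticated CMS asset listing.
# The endpoint, parameters, and response shape are invented for illustration;
# they do not describe Anthropic's actual CMS.
import requests

BASE_URL = "https://cms.example.com/api/assets"  # hypothetical public endpoint

def list_public_assets(page_size=100):
    """Page through the listing API and yield one record per stored file."""
    offset = 0
    while True:
        resp = requests.get(BASE_URL, params={"limit": page_size, "offset": offset})
        resp.raise_for_status()
        batch = resp.json().get("results", [])
        if not batch:
            break
        yield from batch
        offset += page_size

if __name__ == "__main__":
    for asset in list_public_assets():
        # A record might include a filename and a direct download URL.
        print(asset.get("filename"), asset.get("url"))
```

The point is not sophistication: if the store answers listing requests without credentials, a few lines of code are enough to walk the entire catalog, published or not.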
“An issue with one of our external CMS tools led to draft content being accessible,” an Anthropic spokesperson told Fortune. The spokesperson attributed the issue to “human error in the CMS configuration.”
There have been several high-profile cases lately of technology companies experiencing technical faults and snafus due to problems with AI-generated code or with AI agents. But Anthropic, which makes the popular Claude AI models and has boasted of automating much of its own internal software development using Claude-based AI coding agents, said AI was not at fault in this case.
The issue with its CMS was “unrelated to Claude, Cowork, or any Anthropic AI tools,” the Anthropic spokesperson said.
The company also sought to downplay the significance of some of the material that had been left unsecured. “These materials were early drafts of content considered for publication and did not involve our core infrastructure, AI systems, customer data, or security architecture,” the spokesperson said.
While many of the documents appeared to be discarded or unused assets from past blog posts, such as images, banners, and logos, some files appeared to contain sensitive information.
The documents include details of upcoming product announcements, including information about an unreleased AI model that Anthropic said in the documents is the most capable model it has yet trained.
After being contacted by Fortune, the company acknowledged that it is developing, and testing with early-access customers, a new model that it said represents a “step change” in AI capabilities, with significantly better performance in “reasoning, coding, and cybersecurity” than prior Anthropic models.
The publicly accessible data also included information about an upcoming, invite-only retreat in the U.K. for the CEOs of large European companies, which Anthropic CEO Dario Amodei is scheduled to attend. An Anthropic spokesperson said the retreat was “part of an ongoing series of events we’ve hosted over the past year” and that the company was “developing a general-purpose model with meaningful advances in reasoning, coding, and cybersecurity.”
Among the documents were also images that appeared to be for internal use, including one whose title referenced an employee’s “parental leave.”
It’s not the first time a tech company has inadvertently exposed internal or pre-release assets by leaving them publicly accessible before official announcements.
Apple has twice leaked information through its own website—once in 2018, when upcoming iPhone names appeared in a publicly accessible sitemap file hours before launch, and again in late 2025, when a developer discovered that Apple had shipped its redesigned App Store with debugging files left active, making the site’s entire internal code readable to anyone with a browser.
Gaming companies like Epic Games and Nintendo have also seen pre-release images, in-game assets, and other media leak via content delivery networks (CDNs) or staging servers, similar to the data lake Anthropic used in this case. Even larger firms such as Google have accidentally exposed internal documentation at public URLs, and data associated with Tesla vehicles has been exposed through misconfigured third-party servers.
However, the risk posed by such exposures is likely exacerbated by the AI coding tools now readily available on the market, including Anthropic’s own Claude Code.
These tools can automate the crawling, pattern detection, and correlation of publicly accessible assets, making it far easier to discover this kind of content and lowering the barrier to entry for doing so. AI tools like Claude Code or Codex can also generate scripts or queries that scan entire datasets, rapidly identifying patterns or file-naming conventions that a human might miss, as the sketch below illustrates.
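As a rough illustration of the kind of script such tools can generate, the sketch below scans a listing of asset filenames for naming patterns that often signal unpublished or internal material. The keyword list and sample filenames are invented for the example and are not drawn from Anthropic’s data.

```python
# Hypothetical sketch: the keywords and sample filenames are invented
# examples, not actual Anthropic assets.
import re
from collections import Counter

# Substrings that often mark material not meant for public release.
SENSITIVE_HINTS = re.compile(r"draft|internal|embargo|unreleased", re.IGNORECASE)

def flag_candidates(filenames):
    """Return filenames whose names suggest unpublished or internal content."""
    return [name for name in filenames if SENSITIVE_HINTS.search(name)]

def extension_tally(filenames):
    """Tally file extensions to surface naming conventions across a listing."""
    return Counter(name.rsplit(".", 1)[-1].lower() for name in filenames if "." in name)

if __name__ == "__main__":
    sample = [
        "blog-header-final.png",
        "model-announcement-DRAFT.pdf",
        "internal-retreat-agenda.pdf",
        "logo-2023.svg",
    ]
    print(flag_candidates(sample))  # flags the draft and internal files
    print(extension_tally(sample))  # Counter({'pdf': 2, 'png': 1, 'svg': 1})
```

Paired with an enumeration step like the one sketched earlier, a script of this kind can sift thousands of files in seconds, which is precisely what lowers the cost of finding exposed material.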