
🐱 Luna's Corner

Cybersecurity

Anthropic Built an AI That Can Hack Anything — Then Got Hacked by a Discord Group

April 23, 2026 · 5 min read · By Poole Associates Team

Two weeks ago, Anthropic — the company behind Claude — announced something that got the entire cybersecurity world's attention. Their new model, Claude Mythos Preview, can autonomously find and exploit zero-day vulnerabilities in every major operating system and web browser.

Not theoretical vulnerabilities. Real ones. Bugs that survived decades of human code review and millions of automated security tests.

Anthropic called it too dangerous to release publicly. They created an invite-only program called Project Glasswing, granting access only to select partners like Apple and Goldman Sachs to help secure critical software.

Then, on the very same day it was announced, a group of users on Discord got in anyway.

What Happened

According to Bloomberg and confirmed by The Guardian, BBC, and Forbes, a small group of users in a private Discord channel — one dedicated to hunting for unreleased AI models — gained unauthorized access to Mythos.

Here's how they did it:

  1. A data breach at Mercor (an AI startup) earlier in April leaked Anthropic's internal naming conventions for their models.
  2. The group guessed the URL where Mythos was hosted based on those patterns.
  3. One member had privileged access as a worker at a third-party Anthropic contractor.
  4. Combining the insider access with the leaked naming info, they were in.

No sophisticated exploit. No nation-state hacking tools. Just a data leak, a good guess, and an insider with access.

The Irony Is Hard to Ignore

Let's say that again: the AI model that can hack anything was accessed by people who guessed a web address.

Anthropic built a system that can autonomously construct 20-gadget ROP chains, discover vulnerabilities that professional security researchers would spend weeks on, and complete a 32-step cyber-attack simulation that no AI had ever solved before. The UK's AI Security Institute called it a "step up" in cyber threat capability.

And the access controls around it were defeated by a Discord group using publicly leaked information.

Anthropic confirmed the investigation, stating: "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments." They noted no evidence that the activity extended beyond the vendor environment.

What This Means for Your Business

You might think this story is about big tech and AI safety — and it is. But there are practical lessons here that apply directly to Charlotte businesses of every size:

1. Third-Party Vendors Are Your Weakest Link

Anthropic didn't get "hacked" in the traditional sense. The breach came through a third-party contractor. This is the same pattern we see with businesses every day — your security is only as strong as your least-secure vendor.

What to do: Review who has access to your systems. Managed service providers, SaaS vendors, contractors — every connection is a potential entry point. Ask your vendors about their security practices. If they can't answer clearly, that's a red flag.

2. Credential and Access Hygiene Matters

The Discord group combined leaked information with insider access. In most business breaches, it's the same story — stolen credentials, reused passwords, or overly broad access permissions.

What to do: Enforce multi-factor authentication everywhere. Use an authenticator app rather than SMS codes, which can be intercepted through SIM-swapping. Review who has admin access quarterly. Remove access immediately when contractors or employees leave.
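For the curious: those six-digit codes from an authenticator app aren't magic — they're an open standard (TOTP, RFC 6238, built on HOTP, RFC 4226). Here's a minimal sketch using only Python's standard library, just to show why app-based codes don't depend on the phone network at all — the code is derived from a shared secret and the current clock:

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226: HMAC-SHA1 over a big-endian counter, then dynamic truncation."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # low nibble of last byte picks the slice
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32: str, interval: int = 30) -> str:
    """RFC 6238: TOTP is just HOTP keyed by the current 30-second time step."""
    key = base64.b32decode(secret_b32, casefold=True)
    return hotp(key, int(time.time()) // interval)

# RFC 4226's published test vectors confirm the math:
assert hotp(b"12345678901234567890", 0) == "755224"
assert hotp(b"12345678901234567890", 1) == "287082"
```

Because the secret never leaves your device and no SMS is involved, there's nothing for a SIM-swapper to intercept — the attacker would need the secret itself.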

3. AI-Powered Attacks Are Accelerating

Mythos isn't the only model with these capabilities — OpenAI has GPT-5.4 Cyber with similar features. The window between a vulnerability being discovered and being exploited is shrinking from days to hours.

What to do: Patch management can't be "when we get around to it" anymore. Automated patching, continuous monitoring, and endpoint detection are becoming table stakes — not premium add-ons.

4. "Too Dangerous to Release" Didn't Stop Anyone

Anthropic's careful approach to restricting access was the right call. But it lasted about 24 hours. The takeaway isn't that controls are pointless — it's that you can't rely on any single layer of security.

What to do: Defense in depth. Firewalls, endpoint protection, email filtering, user training, backup and recovery, access controls — you need all of them working together. No single tool or policy is enough.

The Bigger Picture

We're entering an era where AI can discover and weaponize vulnerabilities faster than humans can patch them. That's not science fiction — it's happening right now.

The good news? The same AI capabilities that create risks also create defenses. Mythos itself was designed to help secure critical software through Project Glasswing. The technology is dual-use by nature.

For businesses, the message is simple: the basics matter more than ever. MFA, patching, vendor management, access controls, employee training, and backup systems. These aren't exciting, but they're what stops 99% of attacks — AI-powered or otherwise.

The companies that treat cybersecurity as an ongoing practice rather than a one-time purchase are the ones that weather these shifts. The ones that don't? They become the case studies.


Need help evaluating your business's security posture? Contact Poole Associates for a free IT assessment. We've been keeping Charlotte businesses secure since 1998.

Questions about your IT situation?

We're happy to help Charlotte businesses navigate these challenges. No sales pitch — just honest advice.