
Anthropic Alleges Industrial-Scale AI Data Extraction Attack via Chat-Based Model Siphoning

In the rapidly evolving landscape of artificial intelligence, a significant event recently unfolded that highlights a growing tension within the industry. In February 2026, the American artificial intelligence startup Anthropic made headlines by accusing three prominent Chinese AI laboratories—DeepSeek, Moonshot AI, and MiniMax—of conducting industrial-scale campaigns to siphon capabilities from its flagship model, Claude.

The allegations involve the use of over 24,000 fraudulent accounts and more than 16 million interactions designed to effectively “copy” the intelligence of the Claude model. This event serves as a stark reminder that in the digital age, intellectual property is no longer just about code or documents; it is about the very “reasoning” that these models provide.

The Irony of the Data Cycle

There is a profound irony in these recent accusations that has not gone unnoticed by industry observers. For years, the major players in the artificial intelligence sector have faced intense criticism and legal challenges regarding how they built their own systems. These companies were often accused of “scraping” or siphoning vast amounts of data from across the internet—including Wikipedia, GitHub, and private websites—without explicit permission or compensation for the original creators.

Now that these companies have refined that raw data into highly valuable proprietary models, they are finding themselves on the receiving end of similar tactics. The very methods of data extraction that fueled the birth of the industry are now being used by competitors to bypass the immense costs of original development. It is a cycle that raises fundamental questions about data ownership and the ethics of digital “harvesting.”

Understanding the “Distillation” Attack

The technique at the heart of this controversy is known as “model distillation.” In legitimate cybersecurity and machine learning contexts, distillation is a common practice. It involves a “teacher” model (a large, powerful system) providing outputs that are used to train a smaller “student” model. This allows businesses to create more efficient, faster versions of their own products for specific tasks.
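In rough terms, legitimate distillation trains the student to match the teacher's temperature-softened output distribution rather than hard labels. The following is a minimal, framework-free sketch of that core loss; the logit values are invented purely for demonstration:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: a higher temperature flattens the
    distribution, exposing the teacher's relative preferences."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy between the teacher's softened distribution and the
    student's: minimised when the student's distribution matches the
    teacher's, which is how the student absorbs the teacher's behaviour."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return -sum(p * math.log(q) for p, q in zip(p_teacher, p_student))

# A student whose outputs track the teacher incurs a lower loss than one
# that disagrees, so gradient descent pulls it toward the teacher.
teacher = [3.0, 1.0, 0.2]
close_student = [2.9, 1.1, 0.3]
far_student = [0.2, 1.0, 3.0]
assert distillation_loss(teacher, close_student) < distillation_loss(teacher, far_student)
```

In practice the teacher and student are full neural networks and the loss above is backpropagated through the student; the sketch only illustrates the objective being optimised.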

However, a “distillation attack” occurs when a competitor uses the outputs of another company’s model without authorisation. By sending millions of highly specific queries to a target AI, an adversary can map out the underlying logic and capabilities of the teacher model. This allows them to build a rival system at a fraction of the time and cost, effectively “free-riding” on the billions of dollars spent on research and development by the original firm.
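To make the mechanics concrete, the harvesting side of such an attack is conceptually little more than turning query/response pairs into a supervised training set. The sketch below is purely illustrative; `query_target_model` is a hypothetical placeholder, not a real API client:

```python
def query_target_model(prompt):
    """Hypothetical stand-in for a call to the target vendor's chat API.
    A real campaign would route millions of such calls through many
    accounts and proxies, which is what defenders look for."""
    return f"(response to: {prompt})"

def harvest(prompts):
    """Collect prompt/completion pairs in the shape of a standard
    supervised fine-tuning dataset for a student model."""
    dataset = []
    for prompt in prompts:
        completion = query_target_model(prompt)
        dataset.append({"prompt": prompt, "completion": completion})
    return dataset

pairs = harvest(["Explain TLS 1.3 handshakes", "Summarise incident triage steps"])
assert len(pairs) == 2
```

The simplicity of this loop is the point: the cost of the attack scales with API fees, not with the research and compute the original model required.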

The Risks to National and Business Security

Anthropic has framed these siphoning campaigns as more than just a matter of competitive advantage; they have explicitly identified them as a national security risk. One of the concerns is that when a model is distilled in this manner, the safety “guardrails” carefully built into the original system are often lost.

Original models include protections to prevent users from generating malicious code, developing biological weapons, or conducting offensive cyber operations. Illicitly distilled models may lack these essential filters, allowing sophisticated capabilities to be weaponised by authoritarian regimes or malicious actors without any oversight.

For the average business, the risk is equally real. If your organisation relies on proprietary AI agents or domain-specific models, your unique intellectual property could be vulnerable to similar extraction techniques if your API security is not robust.

How to Enhance Your Defences

As AI becomes more integrated into business operations, protecting your digital assets requires a new layer of security. Consider the following strategies to help safeguard your systems:

  • Implement Advanced Rate Limiting: Restricting the number of queries a single user or account can make in a given timeframe can significantly slow down data harvesting efforts.
  • Behavioural Monitoring: Monitor API usage for patterns that do not match human behaviour. Distillation attacks often involve repetitive, highly structured queries across a vast range of topics.
  • Identity Verification: Strengthening the requirements for account creation can help prevent the mass deployment of fraudulent “bot” accounts.
  • Analyse Metadata: Identifying traffic routed through commercial proxy services or “hydra clusters” can help pinpoint coordinated extraction campaigns.
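As one concrete example of the first point, a per-account token bucket is a common way to implement rate limiting. The sketch below is a minimal single-process version (a production deployment would typically keep the bucket state in shared storage and key it by account or API key):

```python
import time

class TokenBucket:
    """Per-account token bucket: each request spends one token, and
    tokens refill at `rate` per second up to `capacity`. Sustained
    bulk querying drains the bucket and gets throttled."""

    def __init__(self, rate, capacity):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=5)
results = [bucket.allow() for _ in range(10)]
# A burst of 10 rapid requests: the first 5 pass, the rest are throttled
# until tokens refill.
assert results[:5] == [True] * 5
assert results.count(False) >= 1
```

The same bucket state also feeds behavioural monitoring: accounts that constantly sit at the throttle ceiling across broad topic ranges are exactly the pattern a distillation campaign produces.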

How Vertex Can Assist

Navigating the complexities of AI security and protecting your intellectual property is a continuous challenge. At Vertex, our experts are trained to identify emerging threats and help you implement tailored security solutions that protect your business from modern risks.

If you are concerned about the security of your digital infrastructure or wish to learn more about protecting your proprietary data, we encourage you to visit the Vertex website or contact our team for further information.

CATEGORIES

AI

TAGS

AI security - Anthropic - Cyber Security Best Practices - Data Privacy - Emerging Threats - Intellectual Property



Cyber Security by Vertex, Sydney Australia

Your partner in Cyber Security.

Terms of Use | Privacy Policy

Accreditations & Certifications

  • 1300 229 237
  • Suite 10 30 Atchison Street St Leonards NSW 2065
  • 477 Pitt Street Sydney NSW 2000
  • 121 King St, Melbourne VIC 3000
  • Lot Fourteen, North Terrace, Adelaide SA 5000
  • Level 2/315 Brunswick St, Fortitude Valley QLD 4006

(c) 2026 Vertex Technologies Pty Ltd (ABN: 67 611 787 029). Vertex is a private company (beneficially owned by the Boyd Family Trust).


We acknowledge Aboriginal and Torres Strait Islander peoples as the traditional custodians of this land and pay our respects to their Ancestors and Elders, past, present and future. We acknowledge and respect the continuing culture of the Gadigal people of the Eora nation and their unique cultural and spiritual relationships to the land, waters and seas.

We acknowledge that sovereignty of this land was never ceded. Always was, always will be Aboriginal land.