
Are You Relying on General AI for Your Cyber Security? Why That Could Be a Major Mistake

Artificial Intelligence (AI) is rapidly being integrated into every corner of our professional lives. It can write code, draft marketing copy, and summarise complex reports. It is tempting to believe we can now simply ask a chatbot, “How do I secure my business?” and receive a comprehensive, expert-level plan.

Unfortunately, this approach is dangerously flawed.

Cyber security is not a static task; it is a high-stakes, adversarial game. On one side, you have your business’s defences. On the other, you have active, creative, and relentless attackers working around the clock to find a way in.

Relying on a general-purpose AI chatbot to manage your security is like asking a casual chess player to compete in a world championship. Worse, it is like asking a machine that can only predict the next word to play that game. The results are not just below average; they can be chaotic and catastrophic.

The AI Chess Match: A Lesson in Failure

To understand this risk, we only need to look at how general AI models play actual chess. In a recent, widely publicised event, chess Grandmasters like Magnus Carlsen played blindfolded against ChatGPT. The AI did not just lose; it failed spectacularly.

As detailed in an article by Chess.com, the AI, which is a Large Language Model (LLM) and not a dedicated chess engine, fundamentally misunderstood the game. It “forgot” where pieces were, attempted to make numerous illegal moves, and its strategy quickly descended into chaos.

Why? Because the AI was not “playing chess.” It was statistically predicting the most likely text to follow in a conversation about chess. It was generating an “average” move, not the “best” move. In an adversarial game, “average” is a guaranteed loss.
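To make that failure mode concrete, here is a minimal Python sketch (using the open-source python-chess library; the example reply strings are invented for illustration) of the check an LLM never performs on its own output: whether the confidently worded “move” it produced is even legal on the current board.

```python
# Minimal sketch, assuming python-chess is installed (pip install python-chess).
# The chatbot reply strings below are invented for illustration only.
import chess

def is_legal_reply(board: chess.Board, llm_reply: str) -> bool:
    """Return True only if the chatbot's text parses as a legal move in this position."""
    try:
        board.parse_san(llm_reply.strip())  # raises ValueError for garbled or illegal moves
        return True
    except ValueError:
        return False

board = chess.Board()
board.push_san("e4")   # 1. e4
board.push_san("e5")   # 1... e5

print(is_legal_reply(board, "Nf3"))   # True  -- a legal developing move
print(is_legal_reply(board, "Qxe5"))  # False -- plausible-sounding text, but illegal here
```

The point of the sketch is that legality lives in the board state, not in the text: a language model generates statistically likely words, so something outside the model has to hold the actual rules.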

Why General AI Fails at Cyber Security

This same flaw applies directly to cyber security. When you ask a general AI for security advice, you are not getting an expert strategy. You are getting an “average” of all the security-related text it was trained on.

As we discussed in our previous blog, “Why Your AI Chatbot Sounds So… Average”, these AI models are “averaging machines.” They are a form of “lossy compression,” where specific, granular, and expert-level details are smoothed out and lost.

This “averaging error” is dangerous in cyber security for several reasons:

  • It Provides Outdated Information: An AI model’s knowledge is frozen in time. It might recommend a security measure that was considered best practice two years ago but is now known to have a critical vulnerability (a concrete sketch of this follows the list below).
  • It “Hallucinates” Plausible-Sounding Nonsense: An AI might confidently invent a security process or a line of code that “looks” correct but is completely false or, worse, insecure. This is the cyber equivalent of the AI making an illegal move in chess.
  • It Lacks Context: An AI does not understand your specific business, your risk appetite, or your operational needs. It might suggest “Fort Knox” security that is so restrictive it “breaks the business,” making it impossible for your staff to be agile, efficient, and flexible.
  • It Can Be Poisoned: Attackers know that people are turning to AI. They can “poison” the well by flooding the internet with incorrect security advice, which the AI then learns and repeats as fact.
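As a concrete illustration of the first point above, here is a short Python sketch (standard library only; the iteration count is an assumption broadly in line with current published guidance, not a value taken from this article) contrasting the kind of password-hashing snippet an “average” chatbot may still reproduce from old tutorials with a modern, salted key-derivation approach.

```python
# Minimal sketch, Python standard library only. The PBKDF2 iteration count is an
# assumption in line with current published guidance, not advice from this article.
import hashlib
import hmac
import os

def hash_password_outdated(password: str) -> str:
    # The kind of snippet an "average" chatbot may still reproduce from
    # years-old tutorials: fast, unsalted MD5 -- trivial to crack today.
    return hashlib.md5(password.encode()).hexdigest()

def hash_password_current(password: str) -> tuple[bytes, bytes]:
    # A salted, deliberately slow key-derivation function instead.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password_current("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

Both functions “look” like security code, which is exactly the problem: only current, contextual expertise tells you which one belongs in production.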

The “Better Than Nothing” Fallacy

A common argument is that if a business has no cyber security, using AI to get some starting steps is better than nothing. While this is technically true, it creates a deep and dangerous false sense of security.

You may feel protected because you have implemented an AI’s advice, but you are likely protected only against “average” attacks. Your “AI-generated” defence is predictable, generic, and precisely what a skilled adversary expects and knows how to bypass.

From “Average” to Expert Defence

Do not outsource your entire security strategy to a general chatbot. It is a generalist, an “averaging machine,” and it is guaranteed to fail in a specific, adversarial fight against a determined human attacker.

The most effective cyber security posture combines the right modern tools with the critical thinking and contextual understanding of human experts.

Instead of asking an “average” machine for a plan, we recommend starting with cyber experts who can understand your unique business. At Vertex Cyber Security, we can help you build a robust security strategy that uses the right, modern tools effectively—without wasting your budget or breaking your business operations.

Contact Vertex Cyber Security today to move beyond “average” and implement an expert-driven cyber security strategy.

CATEGORIES

AI - Cyber Security

TAGS

AI - Artificial Intelligence - Averaging Machine - chatgpt - cyber security - Data Poisoning - LLM - risk - Threat Detection
