Artificial Intelligence (AI) is rapidly integrating into every corner of our professional lives. It can write code, draft marketing copy, and summarise complex reports. It is tempting to believe we can now simply ask a chatbot, “How do I secure my business?” and receive a comprehensive, expert-level plan.
Unfortunately, this approach is dangerously flawed.
Cyber security is not a static task; it is a high-stakes, adversarial game. On one side, you have your business’s defences. On the other, you have active, creative, and relentless attackers working around the clock to find a way in.
Relying on a general-purpose AI chatbot to manage your security is like sending a casual chess player to a world championship. In fact, it is worse: it is like sending in a machine that can only predict the next word in a sentence. The results are not just below average; they can be chaotic and catastrophic.
The AI Chess Match: A Lesson in Failure
To understand this risk, we only need to look at how general AI models play actual chess. In a recent, widely publicised event, chess Grandmasters like Magnus Carlsen played blindfolded against ChatGPT. The AI did not just lose; it failed spectacularly.
As detailed in an article by Chess.com, the AI, which is a Large Language Model (LLM) and not a dedicated chess engine, fundamentally misunderstood the game. It “forgot” where pieces were, attempted to make numerous illegal moves, and its strategy quickly descended into chaos.
Why? Because the AI was not “playing chess.” It was statistically predicting the most likely text to follow in a conversation about chess. It was generating an “average” move, not the “best” move. In an adversarial game, “average” is a guaranteed loss.
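To see the difference in miniature, here is a deliberately simplified Python sketch, not a description of how ChatGPT or any chess engine actually works: one function picks the statistically most common continuation from a made-up corpus, while the other evaluates each legal move in a made-up position. Every move and score below is invented for illustration.

```python
# A toy contrast between "most likely" and "best". Purely illustrative;
# nothing here reflects how a real chatbot or chess engine is built.
from collections import Counter

# Hypothetical chess commentary the "model" has absorbed.
seen_continuations = ["e4", "e4", "e4", "d4", "d4", "c4"]

def most_likely_continuation(corpus):
    """Language-model style: return the most common next move seen in text,
    with no check that it is legal or any good in the current position."""
    return Counter(corpus).most_common(1)[0][0]

def best_move(legal_moves, evaluate):
    """Engine style: score the consequences of every legal move and pick the best."""
    return max(legal_moves, key=evaluate)

# Made-up position where only one move avoids disaster.
legal_moves = ["g3", "e4", "d4"]
toy_scores = {"g3": 10, "e4": -100, "d4": -100}  # invented evaluations

print(most_likely_continuation(seen_continuations))    # "e4" -- popular, but loses here
print(best_move(legal_moves, lambda m: toy_scores[m])) # "g3" -- chosen by evaluation
```

The "average" answer and the "best" answer only coincide by luck, and an adversary is working to make sure they do not.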
Why General AI Fails at Cyber Security
This same flaw applies directly to cyber security. When you ask a general AI for security advice, you are not getting an expert strategy. You are getting an “average” of all the security-related text it was trained on.
As we discussed in our previous blog, “Why Your AI Chatbot Sounds So… Average”, these AI models are “averaging machines.” They are a form of “lossy compression,” where specific, granular, and expert-level details are smoothed out and lost.
This “averaging error” is dangerous in cyber security for several reasons:
- It Provides Outdated Information: An AI model’s knowledge is frozen at the point its training data was collected. It might recommend a security measure that was considered best practice two years ago but is now known to have a critical vulnerability.
- It “Hallucinates” Plausible-Sounding Nonsense: An AI might confidently invent a security process or a line of code that “looks” correct but is completely false or, worse, insecure (see the illustrative sketch after this list). This is the cyber equivalent of the AI making an illegal move in chess.
- It Lacks Context: An AI does not understand your specific business, your risk appetite, or your operational needs. It might suggest “Fort Knox” security that is so restrictive it “breaks the business,” making it impossible for your staff to be agile, efficient, and flexible.
- It Can Be Poisoned: Attackers know that people are turning to AI. They can “poison” the well by flooding the internet with incorrect security advice, which the AI then learns and repeats as fact.
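To make the “hallucination” risk concrete, here is a hypothetical example of the kind of answer a chatbot can give to “how should I store user passwords?”. The code is invented for illustration: it runs, it looks like security code, and it is still unsafe.

```python
# Hypothetical AI-style answer: runs without errors and looks plausible,
# but unsalted MD5 has been unsuitable for password storage for many years
# because it is extremely fast to brute-force.
import hashlib

def store_password(password: str) -> str:
    # Looks like security code; in practice this is insecure.
    return hashlib.md5(password.encode("utf-8")).hexdigest()

print(store_password("example-password"))
```

A human expert would instead reach for a purpose-built password hashing scheme such as bcrypt, scrypt or Argon2, with a unique salt per user. The chatbot’s version simply reads as correct, which is exactly what makes it dangerous.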
The “Better Than Nothing” Fallacy
A common argument is that if a business has no cyber security, using AI to get some starting steps is better than nothing. There is a grain of truth in this, but it creates a deep and dangerous false sense of security.
You may feel protected because you have implemented an AI’s advice, but you are likely protected only against “average” attacks. Your “AI-generated” defence is predictable, generic, and precisely what a skilled adversary expects and knows how to bypass.
From “Average” to Expert Defence
Do not outsource your entire security strategy to a general chatbot. It is a generalist, an “averaging machine,” and it is guaranteed to fail in a specific, adversarial fight against a determined human attacker.
The most effective cyber security posture combines the right tools with the critical thinking and contextual understanding of human experts.
Instead of asking an “average” machine for a plan, we recommend starting with cyber experts who can understand your unique business. At Vertex Cyber Security, we can help you build a robust security strategy that uses the right, modern tools effectively—without wasting your budget or breaking your business operations.
Contact Vertex Cyber Security today to move beyond “average” and implement an expert-driven cyber security strategy.
