The Exponential Problem: Why More Billions Won’t Magic Up General AI

The current drive towards Artificial General Intelligence (AGI), a system with human-level intelligence across all domains, is fuelled by enormous capital investment. Yet the narrative often feels disconnected from the tangible progress being made. We are spending billions, hiring thousands of experts, and scaling models to previously unimaginable sizes, but the incremental improvements hardly justify the exponential resources consumed.

This raises a crucial question: are investors funding genuine scientific advancement, or merely a “false prediction of magic” that is no more accurate than the decades-old promise of flying cars from science fiction like Back to the Future or The Jetsons? The persistent extension of the AGI timeline by its own prophets suggests the problem is far harder than anticipated.

A Telling Sign: The Infrastructure Arms Race

Perhaps the clearest indication that AGI is nowhere near is the colossal investment AI companies are making in outdated, costly, and resource-intensive infrastructure. Major tech firms are spending billions on capital expenditure, primarily to secure power and land for vast data centres.

  • Acquiring Power and Land: Hyperscalers are actively purchasing thousands of acres of land and committing to gigawatt-scale data centre campuses, requiring a massive build-out of new power grid infrastructure. Some of these new data centres are expected to consume power equivalent to large cities.
  • Betting on Current Energy Solutions: Tech giants are signing 20-year deals for existing nuclear fission power, and investing in nascent technologies like Small Modular Reactors (SMRs) and nuclear fusion.
  • The AGI Irony: If these companies genuinely believed AGI was imminent, their investment strategy would be fundamentally different. They would expect AGI to solve the current engineering bottlenecks—optimising power consumption, creating radically better chip designs, perfecting fusion, or inventing vastly more efficient batteries—thereby making these multi-billion-dollar investments in today’s technology obsolete within a few years. The fact that they are building massive, power-hungry data centres and securing current energy supply for decades is an implicit acknowledgment that the path to AGI is long, hard, and relies on brute-force scaling of existing technology.

AI Models are Data Compression, with Mathematical Limits

A core insight from information theory is that all intelligence is fundamentally related to data compression. Large Language Models (LLMs) and other neural networks are sophisticated, highly effective forms of lossy compression, converting vast training data (text, images, code) into a compact, weighted graph of parameters.

The theoretical basis for compression has mathematical limits, defined by Shannon’s Source Coding Theorem and the concept of Kolmogorov complexity. You simply cannot compress random data, and there is an inherent maximum to how much you can compress even highly structured data.
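To make that limit concrete, here is a minimal Python sketch of our own (using only the standard library's zlib and os modules; the sample text and sizes are arbitrary) comparing how well highly repetitive text and random bytes of the same length compress:

```python
import os
import zlib

# Illustrative sketch: compress structured text versus random bytes of equal length.
structured = b"the quick brown fox jumps over the lazy dog. " * 1000
random_bytes = os.urandom(len(structured))

for label, data in [("structured", structured), ("random", random_bytes)]:
    compressed = zlib.compress(data, level=9)
    ratio = len(compressed) / len(data)
    print(f"{label:>10}: {len(data)} -> {len(compressed)} bytes (ratio {ratio:.3f})")

# Typical result: the repetitive text shrinks to a small fraction of its size,
# while the random bytes barely compress at all (ratio at or slightly above 1.0),
# just as Shannon's source coding theorem predicts.
```

No amount of extra hardware changes that second result; the incompressibility of random data is a property of the data, not of the compressor.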

While current AI is exceptionally good at finding statistical redundancy and patterns to achieve impressive compression rates, it faces a profound constraint:

  • The Problem is Computational Complexity: Building a model that can perfectly compress the entire complexity of the human world and reason effectively may be an extremely hard mathematical problem, likely falling into the realm of NP-hard problems. Such problems can be partially improved through smart algorithms (heuristics), but no efficient shortcut is known to guarantee the absolute best solution. Scaling up computation does not magically negate this mathematical limit.
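As a toy illustration of why more compute does not dissolve this kind of limit, here is a hypothetical Python sketch of our own using subset-sum, a classic NP-hard problem (the values, target, and helper functions are invented for illustration): the exact search must consider up to 2^n subsets, so doubling the compute budget buys roughly one extra item, while a cheap greedy heuristic is fast but comes with no guarantee.

```python
from itertools import combinations

def exact_subset_sum(values, target):
    """Check every subset: always correct, but O(2**n) work."""
    for r in range(len(values) + 1):
        for combo in combinations(values, r):
            if sum(combo) == target:
                return combo
    return None

def greedy_subset_sum(values, target):
    """A cheap heuristic: fast, often close, never guaranteed to find a solution."""
    chosen, total = [], 0
    for v in sorted(values, reverse=True):
        if total + v <= target:
            chosen.append(v)
            total += v
    return chosen, total

values = [8, 6, 5, 3]
print(exact_subset_sum(values, 9))   # (6, 3): exact, but the search space doubles with each extra item
print(greedy_subset_sum(values, 9))  # ([8], 8): fast, but misses the exact answer
```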

Why Brute Force Fails: The Wi-Fi and Shakespeare Analogies

The strategy of simply throwing exponentially more money, chips, and data at the current models is akin to ignoring the laws of physics, the reality of diminishing returns, and basic probability:

  • The Wi-Fi Analogy: Imagine trying to build a global communications network by constantly upgrading a single Wi-Fi hotspot, expecting it to connect everyone on Earth. You can make the signal stronger and the hardware faster, but the physical limitations of signal decay, interference, and power consumption eventually make it practically impossible to scale that one device to serve the entire planet simultaneously. True global connectivity required new architectures and co-operation, not just a bigger router.
  • The Infinite Monkey Theorem: The current approach of funding hundreds of people to make random, incremental changes to the core AI algorithm is not a guaranteed path to a breakthrough. It resembles the famous Infinite Monkey Theorem, which posits that a monkey randomly hitting keys on a typewriter for an infinite amount of time will eventually produce the works of William Shakespeare. While mathematically possible in an infinite sense, in a finite universe, the probability is unfathomably small. We are spending billions on “monkeys” when a fundamental, revolutionary scientific insight is what is truly required.
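To put a rough number on that improbability, here is a back-of-the-envelope Python calculation of our own (assuming a 27-key keyboard of 26 letters plus a space, and uniformly random keystrokes) for the odds of typing even one short Shakespearean phrase in a single attempt:

```python
# Back-of-the-envelope odds for a monkey typing one short phrase at random.
phrase = "to be or not to be"
keys = 27  # 26 letters plus a space bar, all equally likely
attempts_needed = keys ** len(phrase)

print(f"{len(phrase)} keystrokes -> odds of 1 in {attempts_needed:.2e}")
# Roughly 1 in 5.8e25 for just 18 keystrokes. The complete works of Shakespeare
# run to millions of characters, so the exponent becomes astronomically large
# long before "infinite time" offers any consolation.
```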

A Call for Cooperation: The AI Consortium Model

Instead of an expensive, secretive arms race that duplicates effort and is limited by diminishing returns, the industry could benefit significantly from adopting a co-operative model, much like the consortiums that drove global standards for Wi-Fi, Ethernet, and USB:

  • Standardising Trust and Ethics: A global AI consortium could establish horizontal standards for trustworthiness, transparency, and explainability. This would allow different proprietary systems to interoperate reliably and be subjected to a common set of audit practices.
  • Focus on Foundational Science: By pooling resources, a consortium could fund pure research into AI improvements that are collectively shared.
  • Resource Efficiency: It would move focus away from building constantly outdated, power-hungry data centres based on today’s chips, and instead direct resources toward collaboratively improving the foundational science, which ultimately yields greater efficiency and longevity.

For businesses, the lesson is clear: do not bet your security on a distant, speculative AGI breakthrough.

At Vertex Cyber Security, we focus on high-quality, professional implementation of established, mathematically sound security practices. True security is achieved through effective, practical controls, not hopeful predictions of technological magic.

Contact the expert team at Vertex Cyber Security for tailored, high-quality Cyber Security that prioritises genuine protection.
