
Silicon Valley’s Biggest Lie: Superintelligence Isn’t Coming—Ever

EDITOR'S NOTES

Beneath the roaring hype machine of Silicon Valley lies a reality almost no one in the financial press wants to confront: artificial intelligence is neither omniscient nor omnipotent. It’s an impressive tool that can connect dots faster than a human—but it is still just a tool. Those who believe otherwise are often the same voices who sold the public on perpetual housing booms, invincible banks, and endless central bank bailouts. With Bill Brocius’ mentorship, the patterns are impossible to ignore: technology bubbles, like financial bubbles, thrive on exaggeration, concealment of risks, and promises of salvation that never arrive.

The AI Bubble Inflating the Market

Artificial intelligence has become the latest prop holding up an overextended equity market. For the past three years, AI mania has driven stocks to record highs despite occasional drawdowns. The frenzy has all the hallmarks of a super-bubble: uncritical media adulation, sky-high valuations, and a general belief that “this time it’s different.”

None of this guarantees an imminent crash. Bubbles can persist far longer than skeptics expect. Yet when the unraveling does arrive, it could easily erase 50% or more of current market value. Investors who assume that a handful of AI darlings will float the entire index indefinitely should recall how quickly sentiment collapsed in 2000.

Reducing exposure to equity and increasing cash allocations is a sensible strategy to mitigate the eventual fallout. This is not an argument for shorting indices outright; timing a bubble’s apex remains a fool’s errand. But prudence demands recognizing that unsustainable speculation rarely ends with a soft landing.

AI Will Transform Work—But Not Human Nature

Artificial intelligence will certainly displace some jobs, particularly those reliant on routine analysis or predictable tasks. But like every technological leap before it, AI will also create new roles demanding skills that algorithms cannot replicate.

Teachers, for instance, are unlikely to vanish. Their focus will evolve—from drilling students on rote knowledge to fostering critical thinking and problem-solving skills. Such transitions are significant, but they are incremental, not apocalyptic. Change is coming, but it will still be human-driven at its core.

The Hard Limits of Processing Power and Energy

Much of the optimism around AI glosses over the constraints that make superintelligence a mirage. Scaling large models requires vast arrays of semiconductor chips, each consuming enormous amounts of electricity. The race to improve performance is quickly turning into an energy race.

Nuclear power, including small modular reactors, is being championed as a solution. Yet demand does not scale linearly with capability. Under observed neural scaling laws, model error falls only as a small power of compute, so each incremental gain in performance requires a multiplicative jump in chips and electricity. This dynamic all but guarantees diminishing returns as AI systems grow larger.
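The diminishing-returns arithmetic can be sketched under an assumed power-law scaling relationship. The exponent below is illustrative, chosen in the spirit of published neural scaling laws; it is not a measured constant, and the function name is our own:

```python
# Illustrative sketch of power-law scaling: loss falls as compute**(-alpha).
# alpha ~ 0.05 is an assumed figure for illustration only.

def compute_multiplier(gain: float, alpha: float = 0.05) -> float:
    """Compute factor needed to reduce loss by the factor `gain`.

    If loss L(C) = k * C**(-alpha), then L(C1)/L(C2) = (C2/C1)**alpha,
    so cutting loss by `gain` requires C2/C1 = gain**(1/alpha).
    """
    return gain ** (1.0 / alpha)

if __name__ == "__main__":
    # Each further 10% loss reduction costs the same large multiplier again.
    for step in range(1, 4):
        mult = compute_multiplier(1.10) ** step
        print(f"{step} x 10% improvement -> ~{mult:,.0f}x the compute/energy")
```

Under these toy numbers, every additional 10% improvement multiplies the compute (and hence energy) bill by roughly the same large factor, which is the "energy race" dynamic described above.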

Ironically, geopolitical shifts may reshape the landscape. Russia’s abundance of natural gas—and Europe’s dependence on it—positions Moscow as a quiet beneficiary of AI’s hunger for energy. The unintended consequences of sanctions and trade barriers will echo in ways few Western policymakers grasp.

The Law of Conservation of Information in Search

One critical limitation rarely mentioned in mainstream coverage is the conservation of information in search. This principle, rooted in the no-free-lunch theorems for search and optimization, holds that a search algorithm cannot extract more information from a problem than was built into the algorithm and its data. On this view, AI recombines existing knowledge rather than generating genuinely new knowledge.
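The mathematical backbone this principle rests on is the no-free-lunch theorem for search (Wolpert and Macready, 1997). In their notation, where $d_m^y$ is the sequence of objective values an algorithm observes after $m$ evaluations of an objective function $f$, the theorem states that for any two algorithms $a_1$ and $a_2$:

```latex
\sum_{f} P\left(d_m^{y} \mid f, m, a_1\right) \;=\; \sum_{f} P\left(d_m^{y} \mid f, m, a_2\right)
```

Summed over all possible objectives, every algorithm performs exactly as well as blind search; superior performance on a specific problem comes only from information about that problem already built into the algorithm.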

Algorithms can synthesize existing data faster and highlight relationships humans might overlook. But they cannot invent. True innovation—art, creativity, intuition—remains the exclusive domain of the human mind.

This reality explains why AI’s much-touted breakthroughs often amount to glorified pattern recognition. Any notion that superintelligence will surpass human reasoning belongs in the same category as perpetual-motion machines.

The Pollution of Training Sets

Another overlooked flaw is the degradation of training data. As more AI output circulates online, it inevitably contaminates future training sets. The result is a feedback loop where errors, hallucinations, and confabulations are recycled and amplified.

The only solution is meticulous human curation—hardly the hallmark of a self-sustaining superintelligence. If subject matter experts must validate every output, the productivity gains promised by AI are largely illusory.
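The feedback loop described above can be sketched as a toy simulation. Every number here (the human-data error rate, the synthetic share of the corpus, the amplification factor) is an illustrative assumption, not a measurement:

```python
# Toy model of training-set pollution: each generation, a fraction of the
# corpus is replaced by synthetic text that inherits the model's current
# error rate, amplified. All parameters are illustrative assumptions.

def error_after_generations(
    generations: int,
    base_error: float = 0.02,      # error rate of human-written data
    synthetic_share: float = 0.5,  # fraction of new corpus that is AI output
    amplification: float = 2.5,    # errors compound when models learn errors
) -> float:
    """Return the corpus error rate after repeated self-training rounds."""
    error = base_error
    for _ in range(generations):
        synthetic_error = min(1.0, error * amplification)
        error = (1 - synthetic_share) * base_error + synthetic_share * synthetic_error
    return error

if __name__ == "__main__":
    for gen in (1, 3, 5, 10):
        rate = error_after_generations(gen)
        print(f"after {gen:2d} generations: corpus error rate {rate:.1%}")
```

With these toy parameters the error rate climbs generation after generation until it stabilizes far above the human baseline, which is the recycling-and-amplification dynamic the paragraph describes.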

Machines Without Common Sense

Computers, for all their speed, lack abductive reasoning: the inference to the best explanation that underlies common sense and intuitive leaps. One telling experiment pitted an AI model against a group of preschool children.

Asked to draw a circle using everyday objects (a ruler, a teapot, and a stove), the AI failed spectacularly: it fixated on the ruler as the drawing tool and tried to trace a circle along its straight edge. The children, by contrast, simply turned the teapot over and traced its round base.

This is not an outlier. No amount of processing power can replace instinct or empathy. AI doesn’t think. It computes.

The Business Model That Can’t Last

High-flying AI firms now face competitive threats from leaner upstarts that train models cheaply by distilling the outputs of expensive frontier systems. Establishment players call this intellectual property theft, though their own training data often relies on unlicensed content.

These shortcuts will compress margins, accelerate commoditization, and undermine the extravagant profit projections driving current valuations. Superintelligence won’t appear. Super-profits likely won’t either.

Sam Altman and the Cult of AI Salvation

Sam Altman of OpenAI has emerged as the leading prophet of artificial general intelligence—the notion that machines will soon surpass all human cognition. His statements—like claiming ChatGPT is “more powerful than any human who ever lived”—are breathless marketing, not grounded analysis.

Even Apple's own researchers, in their 2025 paper "The Illusion of Thinking," documented "complete accuracy collapse" in large reasoning models once problem complexity crosses a threshold. Brute-force scaling cannot overcome these logical ceilings.

No developer has cracked abductive reasoning, and no model will conjure creativity ex nihilo. The vision of superintelligence remains a speculative narrative designed to dazzle investors, distract regulators, and justify astronomical valuations.

Preparing for the Reality—Not the Hype

While AI is a valuable tool, it is no panacea—and certainly no replacement for human judgment. Those who bet everything on a coming singularity may find themselves holding little more than overvalued shares when the mania ends.

To shield wealth from the inevitable corrections ahead, start by securing cash reserves and diversifying into assets not dependent on speculative narratives. Bill Brocius’ 7 Steps to Protect Your Account from Bank Failure offers a proven blueprint for safeguarding your capital when illusions finally shatter.

Download the guide here

For ongoing analysis that cuts through the marketing smoke, subscribe to Bill’s Inner Circle newsletter for $19.95. In an era of hype and hysteria, nothing is more valuable than clear-eyed truth.