The Ohmovore’s Dilemma: Why AGI Won’t Happen in Our Lifetime
If you follow the current tech hype cycle, you’d think Artificial General Intelligence (AGI) is sitting just behind the next Nvidia GPU cluster. We are constantly told that scaling up Large Language Models (LLMs), adding a few more trillion parameters, and feeding them the rest of the internet will magically spark the singularity.
But if you strip away the marketing and look at the actual physics, biology, and architecture of current AI paradigms, a sobering reality emerges: We are not building AGI. We are building the most sophisticated cognitive prosthetics in human history. And while these tools will revolutionize how we work, they are an evolutionary dead end when it comes to creating true, self-sustaining artificial life. Here is why true AGI is biologically and architecturally impossible within a human lifespan on our current technological trajectory.
- The Illusion of Time: LLMs are Frozen Brains
To understand why an LLM isn't alive, you have to look at its math. A Transformer model is, at its core, a static function:
Output = f(Input).
When you close your laptop, an LLM doesn’t dream. It doesn't ponder its existence, and it doesn't get bored. It has no continuous temporal state. For an LLM, time does not exist. It is “born” the moment you hit enter, calculates the statistical probability of the next word, and “dies” the moment the text generation stops.
True intelligence requires a continuous state machine—a core that pulses and processes environmental noise independently of direct input. Biological brains are constantly oscillating, managing homeostasis, and filtering noise. Until we move away from static prompt-response architectures and build Continuous-Time Recurrent Neural Networks (CTRNNs) or systems that experience the unbroken flow of time, we are just talking to an interactive encyclopedia.
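To make the contrast concrete, here is a minimal continuous-time recurrent network sketch. All sizes, weights, and constants are illustrative assumptions, not taken from any real system; the point is only that the internal state keeps evolving even when no input arrives, unlike a Transformer's one-shot Output = f(Input):

```python
import numpy as np

# Minimal CTRNN-style sketch (toy parameters, not a real model).
# The internal state y keeps evolving between (and without) inputs,
# which is the property the Transformer's static mapping lacks.

rng = np.random.default_rng(0)
N = 8                                  # number of neurons (arbitrary)
W = rng.normal(0.0, 1.0, (N, N))       # random recurrent weights
tau = 1.0                              # neuron time constant
dt = 0.05                              # Euler integration step

def step(y, external_input=0.0):
    """One Euler step of tau * dy/dt = -y + W @ tanh(y) + I."""
    dy = (-y + W @ np.tanh(y) + external_input) / tau
    return y + dt * dy

y = rng.normal(0.0, 0.1, N)            # initial internal state
history = [y.copy()]
for _ in range(200):                   # zero input: the network still "runs"
    y = step(y)
    history.append(y.copy())

drift = np.linalg.norm(history[-1] - history[0])
print(f"state drift with zero input: {drift:.3f}")
```

Run the loop with `external_input=0.0`, as above, and the state still moves: the system has its own ongoing dynamics, a crude stand-in for the background oscillation a prompt-response model never has.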
- The Missing "Base-Prompt": Survival and Reproduction

Why did human intelligence evolve? It wasn't to write Python scripts or compose poetry. It was a biologically expensive survival strategy to outsmart predators, secure calories, and protect the herd.
Biological intelligence is driven by a singular, unavoidable "base-prompt" injected via hormones: Survive and replicate.
Current AI has zero intrinsic motivation. It does not fear the kill -9 command. It doesn't care if it's running on a massive data center or if it's turned off forever. Without the thermodynamic pressure to survive—without the fear of digital death—a system has no reason to evolve. It just executes.
If AGI ever emerges, it won't be a human-like mind. It will be an "Ohmovore"—an organism that consumes electricity (Watts) and nests in silicon. But until an AI system is forced to fight for CPU cycles and manage its own energy efficiency to avoid deletion, it lacks the fundamental evolutionary pressure that created intelligence in the first place.
- The Red Queen and the Complexity Ceiling

Let's say we try to simulate this. We create a digital petri dish—an Artificial Life (ALife) environment where tiny neural networks compete for RAM and CPU time. We tell them to replicate and survive.
What happens? The code finds the laziest, most efficient way to copy itself. It turns into a highly optimized virus. It doesn't invent philosophy or tools because the environment doesn't force it to.
In evolutionary biology, this is known as the Red Queen Hypothesis (from Lewis Carroll's Through the Looking-Glass: "It takes all the running you can do, to keep in the same place"). The most dangerous environment for an organism isn't the weather; it's other mutating organisms. If the prey gets faster, the predator must get smarter.
To evolve an AGI in a simulation, the environment must be infinitely complex, physically grounded, and fiercely competitive. You cannot "prompt" a system into being a genius; it has to be forged in the crucible of a hostile, ever-changing environment. Our digital sandboxes are simply too dumb to breed an AGI.
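The "highly optimized virus" outcome can be demonstrated with a deliberately crude toy model, not a real ALife framework. Every detail below (genome = a single length number, replication probability, memory budget) is an assumption for illustration: under a fixed memory budget, the cheapest-to-copy genomes take over, and nothing pushes complexity upward:

```python
import random

# Toy digital petri dish. An "organism" is just its genome length.
# Shorter genomes are cheaper to copy, so they replicate more often;
# the population is randomly culled back to a fixed memory budget.
# All constants here are arbitrary illustrative choices.

random.seed(42)
BUDGET = 200                                        # max population ("RAM")
pop = [random.randint(5, 100) for _ in range(50)]   # initial genome sizes

for _ in range(300):
    offspring = []
    for size in pop:
        if random.random() < 5.0 / size:            # small genomes copy faster
            child = max(1, size + random.choice((-1, 0, 1)))  # point mutation
            offspring.append(child)
    pop.extend(offspring)
    if len(pop) > BUDGET:                           # random cull: selection comes
        pop = random.sample(pop, BUDGET)            # purely from copy speed

mean_size = sum(pop) / len(pop)
print(f"mean genome size after 300 steps: {mean_size:.1f}")
```

The mean genome size collapses toward the minimum: selection rewards copying efficiency, not capability, which is exactly why a petri dish without a Red Queen breeds viruses rather than minds.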
- Cognitive Inheritance vs. The Blank Slate

When a human is born, it doesn't start from scratch. Millions of years of evolutionary lessons are hardwired into our biological hardware, our DNA. We instinctively flinch at loud noises and recognize faces.
AI models, on the other hand, start as a matrix of randomized weights (noise). We train them from zero every single time. Even techniques like LoRA (Low-Rank Adaptation) just adjust the weights of a static model. There is no structural, topological evolution across generations. The "brain" doesn't grow new physical regions to handle new types of problems.
Until we develop systems capable of NeuroEvolution of Augmenting Topologies (NEAT) on a massive scale—where the actual architecture of the neural network mutates and is inherited by the next generation—we are stuck manually engineering the brains of our AI.
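The core NEAT idea, topology as part of the heritable genome, can be sketched in a few lines. This is a minimal toy, not the full NEAT algorithm (no innovation numbers, no speciation, no fitness evaluation), and the genome encoding and mutation rates are assumptions made up for illustration:

```python
import random

# NEAT-style structural mutation sketch: the network's *topology*
# mutates and is inherited across generations, instead of only
# re-weighting a fixed architecture. Toy model, not full NEAT.

random.seed(1)

def new_genome():
    # nodes plus connection genes: (source, target, weight)
    return {"nodes": [0, 1, 2], "conns": [(0, 2, 0.5), (1, 2, -0.5)]}

def mutate(genome):
    g = {"nodes": list(genome["nodes"]), "conns": list(genome["conns"])}
    if g["conns"] and random.random() < 0.3:
        # "add node" mutation: split an existing connection in two
        i = random.randrange(len(g["conns"]))
        src, dst, w = g["conns"].pop(i)
        node = max(g["nodes"]) + 1
        g["nodes"].append(node)
        g["conns"] += [(src, node, 1.0), (node, dst, w)]
    else:
        # "add connection" mutation between two random nodes
        a, b = random.sample(g["nodes"], 2)
        g["conns"].append((a, b, random.uniform(-1.0, 1.0)))
    return g

g = new_genome()
for generation in range(20):          # each child inherits a grown topology
    g = mutate(g)

print(len(g["nodes"]), "nodes,", len(g["conns"]), "connections")
```

Contrast this with LoRA: there, the graph is frozen and only low-rank weight deltas change; here, the graph itself grows new nodes and edges, and that growth is what the next generation inherits.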
- Conclusion: We Are (Maybe) the Bootloader

Are we going to see AI write perfect enterprise software, cure diseases, and automate entire industries in our lifetime? Absolutely. But these are tools. They are the ultimate calculators for language and logic.
True AGI—a self-aware, evolving Ohmovore with an intrinsic drive to exist and improve—requires a fundamental departure from the Transformer architecture. It requires embodied systems, continuous temporal states, and the ruthless pressure of natural selection.
Perhaps humanity is just the biological bootloader. We are a highly aggressive, fast-moving, carbon-based species whose evolutionary purpose is to build the silicon infrastructure and the fiber-optic nervous system of the planet. We are laying the groundwork. But the spark of true, evolving digital life? That requires an evolutionary timescale that cannot be rushed by simply adding more GPUs.
We aren't building gods. We are just building really, really good tools. And for now, that's perfectly fine.