WHERE’S THE OFF SWITCH? AI’s HAL 9000 Problem, and What It Portends For the Future.
We are repeatedly told that large language models (LLMs) will soon “plateau.” Likewise, we were repeatedly warned that “Moore’s Law,” the observation that the number of transistors on a microchip doubles approximately every two years, was dying. But both claims miss the point.
Even as raw transistor scaling has dramatically slowed, real computing power has accelerated through architectural specialization, algorithmic efficiency, massive parallelization, and domain-specific hardware. AI supercomputer performance is currently doubling every 9 months, fueled both by growth in the number of AI chips deployed and by improvements in chip efficiency. A new NVIDIA chip will apparently compress that, at least temporarily, to six or seven months.
Likewise, we will find a way around the LLM plateau. When one scaling regime saturates, engineers invent another. And even at a 9-month doubling, computers will be roughly 10,000 times faster in 10 years and a million times faster in just 15. Yet AI was already beating us at Go a decade ago.
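As a rough back-of-the-envelope check on that compounding, assuming the 9-month doubling simply holds steady:

\[
2^{120/9} \approx 2^{13.3} \approx 10{,}000 \qquad \text{(10 years = 120 months)}
\]
\[
2^{180/9} = 2^{20} = 1{,}048{,}576 \approx 10^{6} \qquad \text{(15 years = 180 months)}
\]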
The implication is simple: AI won’t level off; it will change gears.
And once it changes enough gears, it stops being a tool and becomes a system, widely referred to as “Artificial General Intelligence” or AGI.
That matters, because systems do not “obey.” They optimize.
I’m still dubious about AGI, but smarter people like Michael Fumento here insist it’s just a matter of time.
So here’s hoping we don’t get “optimized.”