The Pattern We Never Learn
I have a confession: I get a strange sense of déjà vu whenever I hear bold tech predictions. It’s like Groundhog Day for nerds. One of my favorite (possibly apocryphal) examples is the famous quote from an IBM chairman who supposedly declared, “I think there is a world market for maybe five computers.”
Oops. Whether or not he really said it, the lesson stands: even the smartest folks often underestimate how fast technology can evolve. We make grand declarations, underestimate the curve of improvement, get blindsided by reality, and then… repeat the cycle with the next tech revolution.
If you’ve been around the tech block a few times, you’ve seen this pattern. Today’s impossibility is tomorrow’s mundane reality. Experts scoff, “That’ll never scale,” or “Users won’t need more than X,” only to eat their hats a few years later. (Anyone remember the old belief that 640KB of memory was plenty? Yeah, that aged well.) It’s a humble reminder that in technology, never say never – especially when exponential growth is in play.
So, why do we keep falling into this trap? Part of it is human nature – we assume the future will look like a slightly improved version of the present. Linear thinking in a world of exponential change is a recipe for surprise. I’ve fallen for it myself, despite swearing each time that I won’t get fooled again. Spoiler: I always do. And I’m in good company. This brings me to one of my favorite “I told you so” chapters in tech history: the early days of modern encryption. We’ve seen this movie before, and it’s playing again with AI as the star.
Encryption’s Early Days
Back in the day, encryption was the domain of math wizards and government spooks. The algorithms were elegant, the math rock-solid, and many believed that a strong cipher was practically unbreakable without insane amounts of computing power. And that was the key phrase: “without insane amounts of computing power.” In the 1970s and 80s, that qualifier seemed like a safe bet. After all, who’s going to muster the sheer number-crunching machines needed to brute-force something like DES or RSA? Computers were big, expensive, and about as powerful as a modern greeting card (slight exaggeration, but you get me).
So encryption experts set key sizes they thought would remain secure for decades. “56-bit keys should be fine,” they said about the Data Encryption Standard (DES). “It would take thousands of years to try all the combinations!” Well, fast forward to the late 90s, and guess what – those thousands of years turned into just a couple of days. In 1998, a civil liberties group (the EFF) built a specialized computer that took only 56 hours to decipher a 56-bit DES-encrypted message. That’s right: an “unbreakable” standard was cracked over a long weekend. All it took was some clever engineering, $250,000 in hardware, and a lot of determination (sprinkled with a bit of “told ya so” from cryptographers). Suddenly, 56-bit encryption went from fortress to flimsy. The message the EFF cracked even cheekily read, “It’s time for those 128-, 192- and 256-bit keys.” Indeed.
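If you want to see why the cryptographers' math wasn't wrong so much as outpaced, here's a back-of-the-envelope sketch. (The ~90 billion keys per second rate is my rough assumption for a Deep-Crack-class machine; the point is the arithmetic, not the exact figure.)

```python
# Back-of-the-envelope brute-force math. The key-search rate below is an
# assumption in the ballpark of the EFF's 1998 "Deep Crack" machine; on
# average an attacker finds the key after searching half the key space.

KEYS_PER_SECOND = 90e9                # assumed search rate (~Deep Crack class)
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def average_crack_time_years(key_bits: int,
                             keys_per_second: float = KEYS_PER_SECOND) -> float:
    """Expected years to brute-force a key (half the key space on average)."""
    return (2 ** key_bits / 2) / keys_per_second / SECONDS_PER_YEAR

for bits in (56, 128, 256):
    print(f"{bits:>3}-bit key: ~{average_crack_time_years(bits):.3g} years on average")

# 56-bit: a few days at this rate -- exactly what the EFF demonstrated.
# 128-bit and beyond: numbers so large that "thousands of years" undersells it.
```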
The encryption old-timers had their “oh crap” moment. All those confident assurances that the math was unassailable ran head-first into Moore’s Law and its sidekick, cheaper hardware. We learned (the hard way) that security is a moving target. As computers got exponentially faster, our cryptography had to keep pace. The smart play was never assuming a cipher would remain safe forever, but rather planning for when it wouldn’t. Key lengths increased, new algorithms were adopted, and we all begrudgingly acknowledged that today’s unbreakable code might be tomorrow’s cracked puzzle.
And this is where things get funny. While all the math geniuses were scrambling to update key lengths and iterate on stronger encryption methods, along came a guy from the non-tech security world – let’s call him “the safe guy”. He’s been designing physical safes for decades, the kind that banks use. He takes one look at all these high-IQ encryption folks panicking and just starts laughing. Why? Because he already knows something they’re just figuring out: security is a function of time.
The safe guy tells them, “Yeah, we’ve had time-based security ratings for nearly a century. That’s why UL (Underwriters Laboratories) started testing safes in 1923. They rate how long it takes a professional thief to crack one, not whether it’s unbreakable. Given enough time, anything can be broken into. You’re just now realizing this?” And suddenly, all the high-minded theoretical encryption debates sound a lot like safecracking discussions. The key isn’t designing something impossible to break; it’s designing something that takes so long to break that it ceases to be worth the effort.
This pattern – smart people underestimating compounding technological progress – wasn’t unique to encryption. It’s happening again, right now, in a different arena. If the 90s crypto wars were the warm-up act, the AI revolution is the headliner stealing the show.
The AI Parallel
Look around at today’s hot tech debates, and it’s like history repeating. Switch out “encryption” for “artificial intelligence” and you’ll feel the eerie similarity. Pundits and experts galore are out there confidently proclaiming the limits of AI: “Neural nets can’t truly be creative,” or “AI will never replace task X, it just can’t handle the complexity/nuance.” We’ve all heard some flavor of this. And much like those early encryption prognosticators, many of these folks are brilliant – but possibly missing the forest for the trees.
Here’s the thing: while the talk-show experts are busy drawing lines in the sand, AI developers are quietly (well, sometimes not so quietly) leaping over those lines. The progress in AI over the last decade has been astonishing. Every time someone says “AI can’t do that,” a research lab or company seems to treat it as a personal challenge. Can’t understand context or ambiguity? Meet GPT-style language models that can carry on eerily human-like conversations. Can’t do creative work? There are AIs composing music, designing graphics, and writing code. (As a programmer, having an AI help write code is both amazing and a tad humbling. One of these days I expect an AI to leave comments on my pull requests, pointing out my sloppy function names.)
The parallels to encryption’s evolution are striking. Just as better hardware and algorithms made a mockery of “unbreakable” ciphers, better hardware (hello GPUs and TPUs) and smarter algorithms are making a mockery of what we thought AI couldn’t do. And here’s the kicker – AI is now helping improve AI, but not in the way most people assume. The DeepSeek R1 story adds a new layer: AI advancements don’t just come from throwing more compute at the problem. Instead, we’re now seeing compounding innovation—where even the process of model training itself becomes exponentially more efficient.
Some speculate DeepSeek R1’s success came from training with the help of existing LLMs (whether legally or not is another debate), but the real takeaway is this: every breakthrough builds on the last, driving compounding acceleration. This isn’t just Moore’s Law on steroids—it’s a fundamental rewrite of how progress happens.
Researchers have already built systems where neural nets evolve new neural nets, and those systems started yielding designs that outperformed some human-crafted ones. In other words, we created AI that can build AI, and it’s already pretty good at it. (Cue the “we need to go deeper” memes.) This kind of recursive self-improvement accelerates progress even more. It’s as if the tech is beginning to remove its own limitations.
Watching the AI field right now, I can’t help feeling the same vibe as the early encryption days. Back then, every year brought a “wow, we broke that faster than expected” moment. But AI isn’t just about breakthroughs—it’s about stacked, compounding advancements. Unlike encryption, which depended on hardware leaps, AI’s evolution isn’t just about better chips; it’s about leveraging everything that came before it. Models aren’t starting from scratch; they are absorbing, repurposing, and iterating on the past. DeepSeek R1 might be the most vivid example yet—trained with the help of other LLMs, and possibly even derived from proprietary data, it embodies how AI progress is accelerating in ways most people aren’t even considering.
The cycle of underestimation continues, just with new players and higher stakes. And it’s all building toward a big crescendo – an inflection point where all this steady progress suddenly feels like it’s everywhere, all at once.
The Inflection Point is Coming
There’s a famous saying about how tech revolutions happen “gradually, then suddenly.” We’re in the gradual phase with AI, but I suspect the “suddenly” phase is not far off. And yet, even with every past example of exponential acceleration, there are still skeptics betting against it. Just like encryption experts once assumed certain key sizes would last for decades, today’s AI doubters believe there are natural ceilings we won’t surpass. But history suggests otherwise. Those forecasting a compute wall assume we’ll keep solving problems in the same way, forgetting that AI isn’t just improving at solving problems—it’s improving the way it solves problems. DeepSeek R1 is a perfect example: a breakthrough that didn’t just push limits, but redefined the playing field. The real mistake isn’t underestimating AI’s current power—it’s underestimating its ability to change the rules entirely.
One day in the near future, we’ll wake up and realize AI has become indispensable virtually overnight. Perhaps it’ll be a breakthrough app that everyone installs, or a business use-case so compelling that every company must adopt AI or perish. Whatever it is, it’ll prompt a collective “Oh, crap, this is real and we need it!” moment across society. Kind of like when smartphones went from a luxury gadget to the thing no human can function without, in the blink of an eye.
Why am I so confident this inflection point is coming? Because I’ve peeked at the charts. The trend lines for AI improvements are exponential – and if there’s one thing we humans are consistently bad at, it’s appreciating exponential growth. We’re basically linear creatures living in an exponential world. Consider the quintessential exponential story: Moore’s Law, which states that the number of transistors on a microchip doubles about every two years. That relentless doubling gave us decades of fast-improving computers and a lot of defunct predictions. But here’s the wild part: AI’s progress makes Moore’s Law look quaint. AI’s growth is outpacing Moore’s Law, but the narrative often gets it wrong. Many assume we’re on the brink of hitting a compute wall – that the growth can’t be sustained, or that we’ll simply run out of raw resources.
But history suggests otherwise. Take DeepSeek R1: instead of hitting a ceiling, it shattered expectations by showing that AI model generation itself could become dramatically more efficient. Those predicting an imminent limit forget that technological breakthroughs don’t just push against constraints—they change the entire equation. It’s not about running out of room; it’s about discovering whole new floors.
AI compute growth has rapidly outpaced Moore’s Law, doubling every 3.4 months instead of every two years. But here’s a key nuance – the charts depicting this growth usually use log scales, which flatten exponential curves into tidy straight lines and undersell how steep the acceleration really is. Plot the same data on a linear scale and the trajectory looks even more dramatic, reinforcing that we’re not merely following an old trend – we’re witnessing an entirely new velocity of advancement.
Transistors vs AI Graph: Moore’s Law defined decades of predictable computing progress, but AI has rewritten the rulebook. This chart overlays transistor growth with AI compute expansion, revealing how AI’s insatiable need for power has left traditional chip scaling in the dust.
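To put rough numbers on that gap, here’s a tiny worked comparison. I’m assuming a ~24-month doubling for Moore’s Law and the widely cited ~3.4-month doubling for AI training compute; treat it as illustrative arithmetic, not a forecast.

```python
# Illustrative growth arithmetic, assuming ~24-month (Moore's Law) and
# ~3.4-month (AI training compute) doubling periods. Not a forecast.

def growth_factor(years: float, doubling_period_months: float) -> float:
    """How many times a quantity multiplies over `years` at a given doubling period."""
    doublings = (years * 12) / doubling_period_months
    return 2 ** doublings

for years in (2, 5, 10):
    moore = growth_factor(years, doubling_period_months=24)
    ai = growth_factor(years, doubling_period_months=3.4)
    print(f"{years:>2} years: Moore's Law ~{moore:,.0f}x vs. AI compute ~{ai:.2e}x")

# Five years out, that's roughly 6x versus ~200,000x -- the gap a log-scale
# chart quietly flattens into two tidy-looking lines.
```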
What happens when you have that kind of growth? You hit the knee of the curve and things go vertical. All the incremental advances (which, let’s be honest, already don’t feel so incremental) compound into something transformative. I suspect we’ll reach a tipping point where AI systems become so capable, and so integrated into everything, that not using them isn’t an option. Just like you can’t imagine running a company today without the internet or without smartphones, soon not leveraging AI will seem equally absurd.
Will we be ready for that moment? History suggests some will and some won’t. There will be early adopters riding the wave (and possibly saying “I told you so”), and there will be those scrambling, saying “nobody saw this coming” (even though, ahem, plenty of us did). It’s the classic cycle: inflection points always catch somebody off guard. The difference this time is the sheer speed. If you thought the encryption folks were caught off guard in the 90s, the AI revolution might knock our socks off even faster. Buckle up, because the roller coaster is cresting the big hill.
CloudZero’s Role in AI’s Future
Now, at this point you might be thinking, “Okay, Erik, we get it – AI is exploding, history repeats, yada yada – but what does that mean for those of us in the trenches?” Glad you asked. CloudZero lives at the intersection of cloud computing and cost efficiency. And let me tell you, the AI boom is about to send shockwaves through cloud infrastructure like nothing we’ve seen.
Think about it: all these advanced AI models and continuous improvements don’t run on magic. They run on servers – lots of servers, crunching lots of data. Whether it’s training a new model with billions of parameters or deploying AI-powered features to millions of users, it all happens in data centers, probably in the cloud. When AI becomes indispensable, compute demands will skyrocket. We’ll see massive increases in cloud computing usage – and along with that, massive cloud bills. (Those “doubles every 3.4 months” stats aren’t just academic; they translate to real dollars spent on AWS, Azure, GCP, you name it.)
I often joke that nothing drives digital transformation faster than an executive seeing a shocking cloud bill. The efficiency demands are going to go through the roof. Companies will ask: How can we do more with less? How do we optimize our AI/ML workloads so we’re not burning cash for trivial improvements? When AI is critical to your business, cost efficiency isn’t just a nice-to-have — it’s survival. Every inefficiency in your ML pipeline, every wasted GPU-hour, directly hits the bottom line (and the planet, for that matter). The era of “move fast and break things” will meet the reality of “move fast and watch your cloud costs, too.”
This is where I put on my CloudZero hat (figuratively, although we do have actual hats). At CloudZero, our whole mission is to help you understand and control your cloud spend – to make cost an actionable metric for engineering, not just an after-the-fact accounting line. As AI takes off, we see our role as the folks handing out the maps and compasses in a gold rush. Sure, go mine that AI gold, but know where to dig and how not to blow your budget on dynamite. We’re already working with companies building heavy AI/ML systems, helping them track the cost per training run, per prediction, per widget – so they know what’s driving spend and can optimize it. It’s not about putting the brakes on innovation; it’s about fueling it more efficiently.
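To make “cost per training run, per prediction” concrete, here’s a minimal sketch of the unit-economics idea. The numbers and names are hypothetical, and this isn’t CloudZero’s actual API – just the shape of the calculation.

```python
# A minimal unit-economics sketch with hypothetical numbers. The idea: divide
# cloud spend attributed to an ML workload by the output it produced, so cost
# becomes a per-unit engineering metric rather than a monthly surprise.

from dataclasses import dataclass

@dataclass
class MLWorkloadCosts:
    training_spend_usd: float      # spend attributed to training runs
    inference_spend_usd: float     # spend attributed to serving predictions
    training_runs: int
    predictions_served: int

    def cost_per_training_run(self) -> float:
        return self.training_spend_usd / self.training_runs

    def cost_per_prediction(self) -> float:
        return self.inference_spend_usd / self.predictions_served

# One hypothetical month for an AI-powered feature:
month = MLWorkloadCosts(
    training_spend_usd=42_000,
    inference_spend_usd=18_000,
    training_runs=6,
    predictions_served=30_000_000,
)

print(f"Cost per training run: ${month.cost_per_training_run():,.0f}")
print(f"Cost per prediction:   ${month.cost_per_prediction():.5f}")
```

Once numbers like these are visible per feature and per team, “optimize the ML pipeline” stops being a vague aspiration and becomes a concrete engineering target.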
I truly believe the companies that win in the AI-driven world will be those that execute efficiently, not just innovate recklessly. When the “oh, crap” AI inflection hits, those who have tamed their cloud costs will be poised to scale and adapt, while others panic about their margins. (If this sounds like a plug, well, it sorta is – but hey, can you blame a CTO for being passionate about solving the coming problems? 😀)
In conclusion, we’re on the cusp of something huge with AI, just like we were with encryption decades ago. The characters and context differ, but the storyline feels oddly familiar. Bold claims, underestimations, exponential growth, sudden comeuppance – the tech world loves a good cycle. My hope is that by recognizing the pattern, we can navigate this one a bit more gracefully. Let’s learn from the encryption saga: be humble about what we don’t know, plan for a future that’s arriving faster than we think, and invest in the boring stuff (like cost optimization and security) before they become five-alarm fires.
I’ll be here, excited and amused in equal measure. The AI revolution is here – gradually, then suddenly – and I, for one, am stocking up on popcorn 🍿 to watch how it all unfolds. Just don’t be the person betting against exponential curves. We know how that movie ends. Cheers to not repeating history… at least not exactly the same way this time!