Category: Rants

The Luddite Stack: or How to Outlast the AI Ice Age

Tech monopolies have a playbook: subsidize a costly service, kill off competition, then lock the world into an overpriced, bloated mess. They did this to our digital infrastructure, then to our e-commerce platforms, then to our social platforms and social infrastructure, and now they’re trying to extend it to everything else with AI and machine learning, particularly LLMs.

It’s a predatory land grab. The costs of training and running these models are astronomical, yet somehow, AI services are being handed out for almost nothing. Who pays? Governments, taxpayers, cheap overseas labor, and an environment being strip-mined for energy. The end goal is simple: kill competition, make AI dependence inevitable, then jack up the prices when there’s nowhere else to go.

Even so-called “open” AI alternatives like DeepSeek, or the OSI-sanctioned ones, often touted as a step toward democratizing LLMs, still require vast computational resources, specialized hardware, and immense data stores. Billions of dollars will be sunk into efforts to make “AI” more accessible, but in reality they still rely on the same unsustainable infrastructure that only well-funded entities can afford. We can pretend to compete, but the scale of compute, energy, and data hoarding required ensures that only the tech giants can afford to play.

And the worst part? This is going to set us back in terms of actual technological progress. Since we’ve abandoned the scientific method and decided to focus on hype, on what will make a few people a lot of money rather than what’s in all of our interests, we will enter an AI Ice Age of technology. Investment that could go into alternatives that outperform AI in both function and cost will dry up, simply because those alternatives are a bit harder for the hyperscalers to monetize.

By alternatives here I don’t just mean code and tech; I also mean humans, experts in their domains who will be forced out of their jobs and replaced by expensive guessing token dispensers. Take journalists, copyeditors, and fact-checkers to start, and extrapolate to every other job they will try to replace next.

But sometimes it is tech that we need to maintain. A worrying trend is the proliferation of AI coding assistants. The reviews I’ve seen are mixed, and the most generous praise from developers I respect was “it might be good for the repetitive parts.” It’s not as if LLMs were such a revolution here.

Before LLMs, we had code templates, IDEs, and frameworks like Rails, Django, and React—all improving developer efficiency without introducing AI’s unpredictability. Instead of refining tools and frameworks that make coding smarter and cleaner, we’re now outsourcing logic to models that produce hard-to-debug, unreliable code. It’s regression masquerading as progress.
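
To be concrete about the kind of deterministic tooling I mean, here is a deliberately tiny sketch in the spirit of rails generate or Django’s startapp. The template and field spec are invented for illustration; the point is that the same input always yields the same code, character for character.

```python
from string import Template

# A deterministic scaffold: the same spec always produces the same code.
# No hallucinated methods, nothing to double-check afterwards.
MODEL_TEMPLATE = Template(
    "class $name:\n"
    "    def __init__(self, $args):\n"
    "$assigns\n"
)

def scaffold_model(name, fields):
    """Generate a plain data class from a list of field names."""
    args = ", ".join(fields)
    assigns = "\n".join(f"        self.{field} = {field}" for field in fields)
    return MODEL_TEMPLATE.substitute(name=name, args=args, assigns=assigns)

print(scaffold_model("Article", ["title", "body", "published_at"]))
```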

Another example is something I’ve written about in a previous blog post: the Semantic Web. The internet wasn’t supposed to be this dumb. The Semantic Web promised a structured, meaning-driven network of linked data—an intelligent web where information was inherently machine-readable. But instead of building on that foundation, we are scrapping it in favor of brute-force AI models that generate mountains of meaningless, black-box text.
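
For readers who never ran into it, here is a toy illustration of what “inherently machine-readable” means. The vocabulary terms are real schema.org ones, but the document itself is made up; answering a question about this data is a lookup, not a prompt to a black box.

```python
import json

# A tiny JSON-LD-style document: every fact is explicit and typed,
# so a program can read it without any statistical guessing.
doc = json.loads("""
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "name": "The Luddite Stack",
  "author": {"@type": "Person", "name": "Example Author"},
  "datePublished": "2025-02-01"
}
""")

# "Who wrote this, and when?" is a dictionary lookup, not an inference.
print(doc["author"]["name"], doc["datePublished"])
```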

What are we to do then? If I were a smart person with a lot of money (I am zero of those things), I would be investing in what I call the Luddite Stack: the set of technologies and humans I referred to earlier that do a much better job at a fraction of the actual cost. LLMs are unpredictable, inefficient, prone to wrong outputs, and insanely costly; it shouldn’t be difficult to compete with them in the long term.

Meanwhile, deterministic computing offers precision, stability, and efficiency. Well-written algorithms, optimized software, and proven engineering principles outperform AI in almost every practical application. And for everything else, we need human expertise, understanding, creativity, and innovation. We don’t need AI to guess at solutions when properly designed systems can just get it right.
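
A trivial example of “just getting it right” (my own toy snippet, with made-up line items): totalling an invoice is exact, auditable, and practically free with deterministic code, while an LLM can only offer a plausible-looking guess at the arithmetic.

```python
from decimal import Decimal, ROUND_HALF_UP

def invoice_total(line_items, tax_rate):
    """Sum (unit price, quantity) pairs and apply tax with exact decimal arithmetic."""
    subtotal = sum(Decimal(price) * qty for price, qty in line_items)
    total = subtotal * (Decimal("1") + Decimal(tax_rate))
    # Round to cents deterministically; the same input always gives the same output.
    return total.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

# Hypothetical line items: (unit price, quantity), with a 20% tax rate.
items = [("19.99", 3), ("4.50", 2), ("120.00", 1)]
print(invoice_total(items, "0.20"))  # 226.76, exactly, every time
```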

The AI Ice Age will eventually thaw, and it’s important that we survive it. The unsustainable costs will catch up with the industry. When the subsidies dry up and the electricity bills skyrocket, it will downsize, leaving behind a vacuum. The winners won’t be the ones clinging to the tail of the hype cycle; they’ll be the ones who never bought into it in the first place. The Luddite Stack isn’t a rebellion; it’s the contingency plan for the post-AI world.

Hopefully it will only be a metaphorical ice age, and we will still have a planet by then. Hit me up if you have ideas on how to build up the Luddite Stack with reasonable, deterministic, and human-centered solutions.

I Was Wrong About the Open Source Bubble

This is a follow-up to my previous post, Is the Open Source Bubble about to Burst?, where I discussed some factors indicating an imbalance in the open source ecosystem. I was very happy to see the engagement with the post, even if some people seemed not to have read past the title and were offended by my characterizing open source as a bubble, or assumed that, simply because I’m talking about the current state of FOSS or how some companies use it, this somehow reflects my position on free software vs. open source.

Now, I wasn’t even the first or only person to suggest an open source bubble might exist. The earliest mention of the concept I could find was by Simon Phipps, who asked “Is the Open Source bubble over?” all the way back in 2010. I believe it was an insightful framing for its time, one we now see culminating in all the pressures I alluded to in my post.

The second mention I could find is from Baldur Bjarnason, who wrote about open source software and compared it to the blogging bubble. It’s a great blog post, and Baldur even wrote a newer article in response to mine about an “Open Source surplus”, which is a framing I like a lot. I would recommend reading both, and I’m very thankful for the thoughtful response.

Also last week, Elastic announced it’s returning to open source, reversing one of the trends I talked about. Obviously, they didn’t want to admit they were wrong, saying it was the right move at the time. I have some thoughts about that, but I’ll keep them to myself; if that’s the excuse they need to tell themselves to end up open source again, I won’t look a gift horse in the mouth. I hope more “source-open” projects follow.

Finally, the article was mentioned in my least favorite tech tabloid, The Register. Needless to say, there is no open source AI war, and there won’t be one, since there won’t be AI to worry about for much longer. An industry that loses billions of dollars a year and is so energy-intensive that it accelerates our climate doom won’t last. OSI has a decision to make: either protect the open source definition and its reputation, or risk both.

P.S. I will continue to ignore any AI copium, so save us both some time.