Category: Rants

FOSS is more than just Licences

Open Knowledge Foundation Germany has just released a new report titled “From Software to Society: Openness in a Changing World” by Dr. Henriette Litta and Peter Bihr (I was also interviewed for it). The report examines what openness means in our digital age, tracing the history of the concept and evaluating current challenges.

One of the report’s key insights is that “Openness is not neutral”, a point that resonates deeply with me. I often find myself frustrated by the limited imagination around what free and open source software is, what it should look like, and what it should accomplish.

The recent “open source AI” definition debacle has made this painfully clear. Watching the Open Source Initiative contort itself to legitimize technologies that rely on extractive labor and environmental gluttony, in a desperate bid for relevance, shows how hollow these older definitions have become: even the organisation that claims to defend the open source definition will ignore a key tenet because it’s not “practical” (read: profitable).

This is why we need a better definition of what makes a technology truly open, one that goes beyond licensing or making source code available. Source availability doesn’t automatically create ethical practices or sustainable communities, and a permissive license doesn’t prevent maintainer burnout, toxic communities, or corporate capture of standards.

I’m not proposing we throw the existing definition away; I still believe firmly in the four freedoms. But we need a more holistic definition, and there is still room for improvement in the licensing realm itself. The OKFN report, for example, refers to the need for “protective mechanisms such as fair licences and share-back models”.

That said, I have some further thoughts on how to evaluate and improve the openness of the FOSS ecosystem holistically. I’m not about to propose a full definition here, but here are some aspects I think should be considered:

  • Open standards and interoperability. True openness requires genuinely open standards and meaningful interoperability, not just open source licenses. We’ve seen how open protocols and formats can enable entire ecosystems to flourish, especially in internet technologies. Yet market concentration undermines even the most open standards when monopolies can embrace, extend, and extinguish at will.

    As this recent research by Clement Perarnaud and Francesca Musiani on QUIC’s standardization shows, even “open” standards processes can become vehicles for corporate control when dominant players leverage their resources to reshape fundamental Internet architecture. Google’s QUIC development demonstrates how a company can mobilize superior “human resources, technical means, and strategic vision” to effectively capture standards bodies while maintaining the appearance of openness.
  • Fair work practices, not free labor. The maintainer crisis won’t be solved by better licenses but by sustainable funding models, reasonable expectations, and treating the labor that builds our digital commons with dignity. As the report emphasizes, we need “targeted investment in innovation for the common good”, and that must include investing in the people who maintain our infrastructure.
  • Democratic governance structures. Our critical infrastructure shouldn’t depend on benevolent dictators or corporate whims. We need transparent, accountable governance that serves communities, not shareholders.
  • Worker organization. We’re stronger together than as atomized individual contributors. Other industries have learned this lesson; FOSS developers can too.
  • Inclusive communities. Codes of conduct aren’t just theater; they’re about creating spaces where everyone can contribute without fear or harassment. A loud contingent of developers in FOSS communities seems to believe that diversity is a zero-sum game, but it isn’t. We need more contributors and maintainers, and the only way to grow is to remove the barriers that have historically excluded marginalised communities from participating.

Ultimately, I think we need to build new structures and institutions, ones that understand openness as a holistic practice, not just a licensing strategy or a vehicle for keeping up with hype technologies. Organizations that speak for workers, not just code or capital.

This blog post won’t resonate with everyone, and I’m not writing it to provoke a reaction or start an argument; if you find yourself at odds with what I wrote, you have my permission to let it go and get on with your day. If it did resonate with you, however, I would love to talk more about how we can build the structures and institutions that make FOSS more holistically open: through the communities we build, the standards we protect, the labor we organize, and the way we treat each other.

Two Visions: Digital Sovereignty Between Reform and Transformation

Last night, I attended an insightful and well-organized Bits & Bäume Policy Lab event at the Weizenbaum Institute for the Networked Society.

Cecilia Rikap delivered an expert breakdown of Big Tech’s dominance and how its control over our digital world extends far beyond mere ownership. She concluded with an inspiring call to resist and circumvent that dominance, emphasizing public procurement as a key lever for change. More details can be found in the report she co-authored here.

I’ve recently shared my reflections on the Eurostack proposal, and while a superficial comparison might pit the two proposals against each other, that would not be fair to either. What I find most valuable in both reports is the vision each offers: one a European, reformist, and strategic vision; the other a global, democratic, and ecological one. While tensions exist between them, they are not inherently incompatible. We live in a world with an imagination deficit, and I welcome having more visions.

Another similarity between the two reports is that their proposed solutions are constrained by the very qualities that make their initial analyses compelling. For the Eurostack report, it’s the pragmatism that limits its transformative potential. For the Reclaiming Digital Sovereignty report, it’s the uncompromising quality that challenges its feasibility.

The discussion at the end of the event tied everything together, with Alexandra Geese, Member of the European Parliament, shedding light on upcoming challenges at the European level—particularly the alarming push to dismantle regulations across the board, including in the digital space.

Adriana Groh, CEO of the Sovereign Tech Agency, emphasized the urgent need to translate policy into action and to protect the open building blocks of our digital world—elements that will serve as the foundation for lasting, cumulative change.

And that, I think, is crucial. We cannot allow our regulations and institutions to be dismantled in the name of some vague, ill-defined notion of innovation. At the same time, we must start turning words into action. I’d love to see elements of both of these proposals come to life.

The Luddite Stack: or How to Outlast the AI Ice Age

Tech monopolies have a playbook: subsidize a costly service, kill off the competition, then lock the world into an overpriced, bloated mess. They did this to our digital infrastructure, then to our e-commerce platforms, then to our social platforms and social infrastructure, and now they’re trying to extend it to everything else with AI and machine learning, particularly LLMs.

It’s a predatory land grab. The costs of training and running these models are astronomical, yet somehow, AI services are being handed out for almost nothing. Who pays? Governments, taxpayers, cheap overseas labor, and an environment being strip-mined for energy. The end goal is simple: kill competition, make AI dependence inevitable, then jack up the prices when there’s nowhere else to go.

Even so-called “open” AI alternatives like DeepSeek, or the OSI-sanctioned ones, often touted as a step toward democratizing LLMs, still require vast computational resources, specialized hardware, and immense data stores. Billions of dollars will be sunk into efforts to make “AI” more accessible, but those efforts still rely on the same unsustainable infrastructure that only well-funded entities can afford. We can pretend to compete, but the scale of compute, energy, and data hoarding required ensures that only the tech giants can afford to play.

And the worst part? This will set us back in terms of actual technological progress. Since we’ve abandoned the scientific method and decided to chase hype, or whatever will make a few people a lot of money, rather than what’s in all of our interests, we will enter an AI Ice Age of technology, starving investment that could instead go into alternatives that outperform AI in function and cost, albeit alternatives that are a bit harder for the hyperscalers to monetize.

By alternatives here I don’t just mean code and tech; I also mean humans, experts in their domains who will be forced out of their jobs and replaced by expensive guessing token dispensers. Take journalists, copyeditors, and fact checkers to start, and extrapolate to every other job they will try to replace next.

But sometimes it is tech that we need to maintain. A worrying trend is the proliferation of AI coding assistants. Reviews are mixed, and the most generous praise I’ve seen from developers I respect was “it might be good for repetitive parts.” But it’s not as if LLMs were such a revolution here.

Before LLMs, we had code templates, IDEs, and frameworks like Rails, Django, and React, all of which improved developer efficiency without introducing AI’s unpredictability. Instead of refining tools and frameworks that make coding smarter and cleaner, we’re now outsourcing logic to models that produce hard-to-debug, unreliable code. It’s regression masquerading as progress.
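
As a deliberately simple illustration of that point, here is a minimal Python sketch of deterministic code generation, the kind of thing scaffolding tools and frameworks have done for decades. The template and the Django-style output are hypothetical, chosen only to show the property that matters: the same inputs always produce the same output, so the generator can be reviewed and tested once and trusted afterwards.

```python
from string import Template

# A deterministic code template: identical inputs always yield
# identical output, with no probabilistic guessing involved.
MODEL_TEMPLATE = Template("""\
class $name(models.Model):
$fields

    def __str__(self):
        return self.$label
""")

def render_model(name: str, fields: dict[str, str], label: str) -> str:
    # Render one field definition per line, indented for the class body.
    body = "\n".join(f"    {field} = models.{kind}" for field, kind in fields.items())
    return MODEL_TEMPLATE.substitute(name=name, fields=body, label=label)

print(render_model(
    "Article",
    {"title": "CharField(max_length=200)", "body": "TextField()"},
    "title",
))
```

Nothing here needs to be audited twice: if the template is correct today, it is correct tomorrow, which is precisely the guarantee an LLM assistant cannot offer.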

Another example is something I’ve written about in a previous blog post: the Semantic Web. The internet wasn’t supposed to be this dumb. The Semantic Web promised a structured, meaning-driven network of linked data, an intelligent web where information was inherently machine-readable. But instead of building on that foundation, we are scrapping it in favor of brute-force AI models that generate mountains of meaningless, black-box text.
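
For a small taste of what “inherently machine-readable” meant in practice, here is a minimal sketch using Python and the rdflib library (the resources and facts are made up for illustration). Knowledge is stated as unambiguous subject-predicate-object triples, and any SPARQL-capable client can answer precise questions about it, with no statistical guessing in the loop.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF, RDF

g = Graph()
ex = Namespace("https://example.org/")  # hypothetical namespace

# Facts as triples: subject, predicate, object.
g.add((ex.alice, RDF.type, FOAF.Person))
g.add((ex.alice, FOAF.name, Literal("Alice")))
g.add((ex.alice, FOAF.knows, ex.bob))

# A precise, deterministic query over the linked data.
query = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?name WHERE { ?person a foaf:Person ; foaf:name ?name }
"""
for row in g.query(query):
    print(row.name)  # -> Alice
```

And because the vocabularies are shared, this graph can be merged with anyone else’s data that uses them, which was the whole point of linked data.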

What are we to do, then? If I were a smart person with a lot of money (I am zero of those things), I would be investing in what I call the Luddite Stack: the set of technologies and humans I referred to earlier that do a much better job at a fraction of the actual cost. LLMs are unpredictable, inefficient, prone to wrong outputs, and insanely costly; it shouldn’t be difficult to compete with them in the long term.

Meanwhile, deterministic computing offers precision, stability, and efficiency. Well-written algorithms, optimized software, and proven engineering principles outperform AI in almost every practical application. And for everything else, we need human expertise, understanding, creativity, and innovation. We don’t need AI to guess at solutions when properly designed systems can just get it right.
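
A deliberately tiny example of “just getting it right”: questions with exact answers deserve exact tools. The sketch below (plain Python, with a hypothetical question) is the kind of computation LLMs are regularly caught fumbling, and deterministic code answers it correctly on every run, for microseconds of CPU rather than a datacenter of GPUs.

```python
from datetime import date

def days_between(start: date, end: date) -> int:
    """Exact number of days from start to end, leap years included."""
    return (end - start).days

# Always 9133, on every run, with no prompt engineering required.
print(days_between(date(1999, 12, 31), date(2025, 1, 1)))
```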

The AI Ice Age will eventually thaw, and it’s important that we survive it. The unsustainable costs will catch up with the industry; when the subsidies dry up and the electricity bills skyrocket, it will downsize, leaving behind a vacuum. The winners won’t be the ones clinging to the tail of the hype cycle; they’ll be the ones who never bought into it in the first place. The Luddite Stack isn’t a rebellion; it’s the contingency plan for the post-AI world.

Hopefully it will be only a metaphorical ice age, and we will still have a planet by then. Hit me up if you have ideas on how to build up the Luddite Stack with reasonable, deterministic, and human-centered solutions.

I Was Wrong About the Open Source Bubble

This is a follow-up to my previous post, Is the Open Source Bubble about to Burst?, where I discussed some factors indicating an imbalance in the open source ecosystem. I was very happy to see some of the engagement with the blog post, even if some people seemed not to have read past the title: some were offended by the characterization of open source as a bubble, and others assumed that, simply because I’m talking about the current state of FOSS or how some companies use it, this somehow reflects my position on free software vs. open source.

Now, I wasn’t even the first or only person to suggest an open source bubble might exist. The first mention of the concept that I could find was by Simon Phipps, who similarly asked “Is the Open Source bubble over?” all the way back in 2010, and I believe it was an insightful framing for its time, one we now see culminating in all the pressures I alluded to in my post.

The second mention I could find is from Baldur Bjarnason, who wrote about Open Source Software and compared it to the blogging bubble. It’s a great blog post, and Baldur even wrote a newer article in response to mine about an “Open Source surplus”, a framing I like a lot. I recommend reading both, and I’m very thankful for the thoughtful response.

Last week as well, Elastic announced it’s returning to open source, reversing one of the trends I talked about. Obviously, they didn’t want to admit they were wrong, saying it was the right move at the time. I have some thoughts about that, but I’ll keep them to myself; if that’s the excuse they need to tell themselves to end up open source again, I won’t look a gift horse in the mouth. I hope more “source-open” projects follow.

Finally, the article was mentioned in my least favorite tech tabloid, The Register. Needless to say, there aren’t and won’t be any open source AI wars, since there soon won’t be an AI industry to worry about. An industry that loses billions of dollars a year and is so energy-intensive that it accelerates our climate doom won’t last. OSI has a decision to make: either protect the open source definition and their reputation, or risk both.

P.S. I will continue to ignore any AI copium, so save us both some time.