The Inventory I Should Have Done Years Ago

🖼️
"Ulysses at the Table of Circe (The Odyssey of Homer)." Line engraving after John Flaxman by James Parker, 1805. Metropolitan Museum of Art, public domain. Flaxman developed this outline style in Italy, inspired by Greek vase painting. His outline illustrations were so popular they were "pirated" (read: copied) across Europe.

A couple of months ago I finally got fed up. Another service cramming AI features into a plan I was already paying for. Another price increase dressed up as an upgrade. Another terms of service update that amounted to "we're doing this whether you like it or not."

I'd been telling myself for years that I'd sort out my infrastructure. I started looking at self-hosting guides and YouTube videos to see the current state of things.

Most self-hosting guides start with software selection. "Install Nextcloud. Set up Proxmox. Here's a Docker Compose file you should use." The implicit assumption is that you already know what you need to replace and where your data lives.

I thought about it, and I realised I really didn't. I had a rough idea, but at this point I have decades of infrastructure debt. And I didn't want this to be an average migration project. I love technology, I love open source, and I love to experiment. My frustration wasn't just about privacy or cost. It was about how my relationship with my own infrastructure had eroded over the years, one convenience at a time, until I barely recognised what I depended on. I wanted to turn what I know into a living project that actually serves my interests. Not just convenient drop-in replacements for proprietary services, but something I'd want to maintain.

I said in the first post that this isn't a tutorial. It's not going to be a typical self-hosting project either. It should work, ideally, but I'm not afraid to mess around and find out. I'm willing to take measured risks, fail sometimes, make mistakes and learn from them. (Tara from the Future: and I did.)

So before I started, I set myself some values:

  • Autonomy over dependence.
  • Freedom over features.
  • Research over impulse.
  • Iteration over perfection.
  • Security over convenience.
  • Questions over assumptions.

Going by that last value, instead of switching one service and calling it progress, I decided to actually look at the whole picture first. Map every service, every device, every recurring charge, every account. Understand what I actually depend on before trying to replace any of it. I realise this doesn't sound iterative, but I also didn't want to build in the dark. Those values were to be taken on balance, not legalistically.

I thought it would take a weekend. I figured I'd find maybe forty services. I was wrong on both counts.

I spent a week observing myself. Every time I opened an app, logged into something, or tapped a notification, I wrote it down. What do I reach for first in the morning? What would I notice if it vanished? What stores something I can't get back?

By the end of the week I had a list of about forty services. It felt thorough. Email, cloud storage, password manager, a few streaming services, the blog's shared hosting, recipe tracker, some tools.

At that point I still thought I was a few steps away from picking replacements, installing them, and calling it a day. Then I took a closer look at the password manager.

I exported it to a CSV and opened it in a spreadsheet. 440 entries. Nearly 300 distinct services. Seven different email identities. There were credentials for services that no longer exist, platforms I'd forgotten I'd ever signed up for, services I must have tried once and never touched again. The password manager was the accidental diary of my digital life, and I'd never read it back to myself before. That changed the scope of the project. Migrating and consolidating forty "services" is a few weekends. Three hundred is something else entirely.
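
The audit itself is scriptable. Here's a minimal sketch of the kind of counting I did, assuming a generic export with `name`, `url`, and `username` columns; real password managers name their CSV columns differently, so adjust to your own export.

```python
import csv
from urllib.parse import urlparse

def audit(path):
    """Count entries, distinct services, and email identities in a
    password-manager CSV export (assumed columns: name, url, username)."""
    entries = 0
    domains = set()
    identities = set()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            entries += 1
            host = urlparse(row.get("url", "")).netloc
            if host:
                # Crude grouping: treat "accounts.example.com" and
                # "example.com" as the same service.
                domains.add(".".join(host.split(".")[-2:]))
            user = row.get("username", "")
            if "@" in user:
                identities.add(user.lower())
    return entries, len(domains), len(identities)
```

It's deliberately rough (two-label domain grouping mishandles things like `co.uk`), but rough was enough to turn "about forty services" into "nearly three hundred".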

Three email accounts formed the backbone of everything. Account recovery, service registration, two-factor authentication delivery, all routing through servers I didn't fully control. I needed to understand the recovery chain in case something went wrong, and I quickly realised I had set up things in a circular chain: Account A recovers via Account B, Account B recovers via Account A. A single simultaneous lockout, unlikely but not impossible, would have been catastrophic. No external recovery path existed. The inventory was helping me notice flaws in my setup I'd taken for granted.
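
That circular-recovery check is mechanical once you write the chain down. A hypothetical sketch: map each account to the account that recovers it (`None` meaning an external path, like a printed recovery code), then walk each chain and flag any account that never reaches an exit.

```python
def circular_accounts(recovers_via):
    """Return accounts whose recovery chain loops back on itself.
    recovers_via maps account -> recovering account, or None for an
    external recovery path (e.g. printed backup codes)."""
    trapped = set()
    for start in recovers_via:
        seen = set()
        node = start
        while node is not None and node not in seen:
            seen.add(node)
            node = recovers_via.get(node)
        if node is not None:  # walk ended on a repeat: a cycle, no way out
            trapped.add(start)
    return trapped
```

With my setup at the time, `{"A": "B", "B": "A", "C": None}` would have flagged both A and B: a simultaneous lockout on those two had no escape route.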

OAuth was another layer I dreaded mapping. "Sign in with Google" is presented as a shortcut. In practice, it's a coupling mechanism. My accounts had accumulated OAuth connections across two different Google accounts, meaning I couldn't even audit the full picture from a single login. The services connected via OAuth didn't know about each other, but they all shared a single point of failure. Platform SSO operates as an infrastructure monopoly. The path of least resistance creates dependency, dependency creates lock-in, and lock-in is leveraged into market position. For years I felt trapped, because I never had the energy to unravel this dependency knot.

I also downloaded my bank statements as PDFs and searched through them for recurring charges. Subscriptions I didn't remember signing up for. Price increases I'd never noticed. One service had quietly raised its rate three times over two years, each time a euro or two, never enough to trigger attention but enough to increase the original cost by more than half. The total monthly burn across all digital subscriptions was genuinely startling. Not because any single charge was unreasonable, but because the aggregate had never been visible as a single number. I've done a few subscription cleanups before, but never starting from the raw bank data. It turns out I'd only ever cleaned up the subscriptions I remembered having, or I noticed by chance.
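
If you've already extracted the transactions from the PDFs (I did this step by hand), flagging likely subscriptions is a one-screen script. A sketch, assuming `(date, merchant, amount)` tuples with ISO dates: any merchant charging in three or more distinct months is probably recurring.

```python
from collections import defaultdict

def likely_subscriptions(transactions, min_months=3):
    """Flag merchants that charge in at least min_months distinct months.
    transactions: iterable of (date "YYYY-MM-DD", merchant, amount)."""
    months_by_merchant = defaultdict(set)
    amounts = defaultdict(list)
    for date, merchant, amount in transactions:
        months_by_merchant[merchant].add(date[:7])  # bucket by "YYYY-MM"
        amounts[merchant].append(amount)
    # Return the charge history sorted, which also exposes price creep.
    return {
        m: sorted(amounts[m])
        for m, months in months_by_merchant.items()
        if len(months) >= min_months
    }
```

Sorting the charge history per merchant is what surfaced the quiet euro-at-a-time increases that never registered on a single statement.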

Digital subscriptions are designed to be easy to start and friction-laden to stop. A free tier becomes a paid tier becomes an increased paid tier, each transition smooth enough that resistance feels disproportionate to the increment. The aggregate, never presented as a single number, grows invisibly. I wonder whether that's my fault or the intended outcome of a business model that depends on inattention. Amazon's internal name for its cancellation process was the "Iliad Flow". The Iliad, in case you didn't know, is the Greek poet Homer's finest work, about the ten-year siege of the city of Troy.

🤖
Tara from the Future: I write these posts in advance, so sometimes I'll intervene with new developments in these TftF sections.

In September 2025, the EU Data Act's cloud switching provisions took effect: structured export formats, 30-day transfer windows, and switching fees capped at actual cost (to be eliminated entirely eventually). The right to migrate to your own infrastructure is now explicitly protected in EU law. The dependency graph this post describes is the thing the regulation is designed to address.

Then I went through years of order confirmations, which started to reveal the hardware layer. Devices I couldn't tell you where I stored. Phone cases tracing a device history I hadn't consciously tracked. Even returned items told a story: a device that didn't work out, now absent from memory but still present in the dependency graph of accounts I'd created during the return window. This helped complete the picture and finally round out the inventory. Now came the tough realisation.

You can't migrate email until you've set up infrastructure. You can't set up infrastructure until you've chosen an architecture. You can't migrate accounts until email is stable, because every account migration involves updating the registered email address. You can't decommission the old password manager until every credential has been moved and verified in the new one (mainly those pesky but necessary 2FA codes).

I sat with this information for a while. I'd started the inventory thinking I had a list of things to replace. What I actually had was the ingredients for a dependency graph, and it had an order that I had to figure out. Skipping ahead, picking the fun services first, would have meant rebuilding on a foundation I hadn't secured.
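
The ordering problem has a name: topological sort. Python's standard library even ships a solver for it. A sketch with a hypothetical slice of my graph, where each task maps to the set of tasks it depends on:

```python
from graphlib import TopologicalSorter

# Each key depends on the tasks in its set (an illustrative subset,
# not my full graph).
deps = {
    "choose architecture": set(),
    "set up infrastructure": {"choose architecture"},
    "migrate email": {"set up infrastructure"},
    "migrate accounts": {"migrate email"},
    "decommission old password manager": {"migrate accounts"},
}

# static_order() yields tasks so that every dependency comes first;
# it raises CycleError if the graph has a cycle.
order = list(TopologicalSorter(deps).static_order())
```

The real graph is wider than a chain, of course, but the principle is the same: the sorter tells you what is safe to start now, and a `CycleError` tells you your plan contradicts itself.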

Every "simple" migration has upstream dependencies. This mirrors the so-called "software supply chain" arguments in the digital sovereignty discourse, though they probably call it that because it sounds more complicated and self-important than what it really is: dependencies. You don't become autonomous by switching one service. You become autonomous by understanding your dependency graph and working through it in the right order, making informed choices about what to keep and what to replace, and making sure you have alternatives and contingencies.

You also start borrowing whatever issues your dependencies have, because at some point you're going to have to deal with them. People problems. Licence problems. Security issues. That becomes your business too, to a certain extent, so you'd better understand them.

With the inventory done, I could start to see the dependency graph. What I couldn't imagine yet was the architecture: where each service runs, how they connect, what the backup strategy is, how I recover if I mess things up royally. These are questions many self-hosting projects I've seen either answer by default or not answer at all. The inventory was the first logical step for me, so I could start answering them with agency.


This is part of the Autonomous Stack series, documenting my migration from proprietary services to self-hosted infrastructure.
Previously: Why I'm Doing The Autonomous Stack Series.

Next: Architecture Decisions (stay tuned).