Building the Pipeline: From Registry to Production in One Week
In my last post, I walked through the strategic decisions behind building infrastructure first: choosing Proxmox over bare metal, VMs over LXCs, and K3s over Docker Swarm. With those foundations in place, it’s time to get practical. How do I actually turn three old ThinkCentres into a working deployment pipeline?
This post is about the components I need to build, the one-week timeline I’m committing to, and the discipline required to stay focused on building a product, not just perfecting infrastructure. It’s about making deployment a habit from day one—and proving that you don’t need the cloud to ship real, usable software.

The Essential Components
Three pieces form the backbone of my deployment pipeline: a Docker registry, a GitHub Actions runner, and Tailscale for secure access.
The Docker registry stores my container images locally. Running my own gives me practical experience with the full deployment lifecycle, faster pulls, and no external dependencies. The GitHub Actions runner executes CI/CD pipelines locally, giving me unlimited build time and complete control over the build environment. Both run as separate VMs in Proxmox—clean separation, easy to manage, straightforward to rebuild.
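Concretely, the registry itself is just one container; here's a minimal sketch of what the registry VM will run, with /srv/registry as an assumed path for image data:

```sh
# Run the open-source Docker registry on the registry VM,
# persisting image data on the VM's local disk
docker run -d \
  --name registry \
  --restart=always \
  -p 5000:5000 \
  -v /srv/registry:/var/lib/registry \
  registry:2
```

One caveat I already know about: pushing and pulling over plain HTTP means declaring the registry as insecure, in /etc/docker/daemon.json on the runner VM and in /etc/rancher/k3s/registries.yaml on each K3s node.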
Tailscale solves the exposure problem. Without a static IP, I can still give recruiters secure access through a mesh network. No port forwarding, no security headaches, no dynamic DNS. The free plan supports 3 users and 100 devices—perfect for my needs.
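Setup is pleasantly boring; the sketch below is the standard install flow, repeated on each VM that should join the tailnet:

```sh
# Install Tailscale via the official install script
curl -fsSL https://tailscale.com/install.sh | sh

# Join the tailnet; the first run prints a login URL to authorize the machine
sudo tailscale up

# Confirm the machine's tailnet IP
tailscale ip -4
```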
These choices align with the permacomputing values I wrote about in Computing Beyond Obsolescence—keeping control, reducing external dependencies, and building for resilience.
The One-Week Goal
Here’s my commitment: spend one week setting up the infrastructure. By the end, I want:
- A Proxmox cluster running on three ThinkCentres
- VMs for the Docker registry, GitHub Actions runner, and K3s nodes
- A working K3s cluster with basic deployments
- A CI/CD pipeline that builds, pushes, and deploys Docker images (sketched after this list)
- Tailscale configured for secure external access
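The pipeline item is the one that ties everything together, so here's a minimal sketch of the workflow I have in mind. The registry address, image name, and deployment name are placeholders, and it assumes kubectl on the runner VM is already configured against the K3s cluster:

```yaml
# .github/workflows/deploy.yml: a sketch, not the final pipeline
name: build-and-deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: self-hosted          # the local GitHub Actions runner VM
    steps:
      - uses: actions/checkout@v4

      - name: Build and push the image to the local registry
        run: |
          docker build -t registry-vm:5000/app:${{ github.sha }} .
          docker push registry-vm:5000/app:${{ github.sha }}

      - name: Roll the K3s deployment onto the new image
        run: |
          kubectl set image deployment/app app=registry-vm:5000/app:${{ github.sha }}
          kubectl rollout status deployment/app
```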
Why this order? Because I want to be deploying from day one. Every feature, every change, goes through the pipeline and lands in a live environment. This is a discipline I learned working on top-tier teams shipping high-demand apps, the kind where deployment isn't a last-minute scramble but a daily rhythm. In those environments, shipping to production happens many times a day, woven into the fabric of engineering culture.
Most demo projects stumble because they build in isolation, then rush to deploy at the end. I want deployment to be a habit, not a hurdle—a mark of real product engineering, not just code slinging.
Is one week achievable? Absolutely. The basics are straightforward—Proxmox is well-documented, K3s has excellent getting-started guides, and Docker registries are simple to run. The tricky bits—networking, secrets management, troubleshooting—are where AI-powered guidance will save me time.
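For a sense of scale, the K3s quick start genuinely is this short; the server IP below is illustrative:

```sh
# On the first ThinkCentre: install the K3s server
curl -sfL https://get.k3s.io | sh -

# Read the join token the server generates
sudo cat /var/lib/rancher/k3s/server/node-token

# On the other two ThinkCentres: join as agents
# (the server IP is illustrative; <node-token> comes from the step above)
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.10:6443 K3S_TOKEN=<node-token> sh -
```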
But here’s the discipline: good enough is good enough. I’m not chasing high availability, shared storage, or advanced monitoring. Those are valuable, but not necessary for an MVP. Right now, the goal is a working, repeatable deployment pipeline that lets me focus on backend engineering, not infrastructure perfection.
Staying Focused: Infrastructure as a Means, Not an End
There’s a real risk of getting absorbed in infrastructure work and forgetting the actual goal. I’ve seen engineers spend months optimizing their Kubernetes setup, tweaking their monitoring stack, perfecting their CI/CD pipeline… and never actually build the application they set out to create.
The real win is having CI/CD and deployment in place from the beginning. Every feature I build gets shipped, tested, and operated in a real environment. This keeps me honest, keeps the project moving, and ensures I never lose sight of the goal: a working system, not just a pile of code. A product is only real if it’s deployed and can be used—otherwise, it’s just potential.
Infrastructure is a means to an end. The end is PermaTechHub—a working backend system that demonstrates my engineering skills and aligns with my values. The infrastructure just needs to be solid enough to support that work.
So here’s the commitment: one week for setup, then move on. Document what I’ve learned, capture the patterns and decisions, and get back to writing code. If I hit a roadblock, I’ll ask for help. If something doesn’t work perfectly, I’ll note it and move on. Progress over perfection—and deployment over demo-only code.
What This Infrastructure Teaches Me
Even before I write a single line of application code, this infrastructure work teaches valuable lessons about production engineering:
- Constraints drive clarity. Knowing I’m deploying to VMs on K3s shapes how I think about statelessness, configuration, and deployment (see the manifest sketch after this list).
- Automation is essential. A deployment pipeline isn’t optional—it’s the foundation of reliable delivery.
- Security matters from day one. Using Tailscale, managing secrets properly, and following least privilege aren’t afterthoughts—they’re baked into the design.
- Simplicity scales better than complexity. Starting with a modular monolith and keeping orchestration lightweight means I can iterate quickly.
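To ground the first lesson (and the third), here's roughly the deployment shape those constraints push me toward; a sketch with placeholder names and image, not a final manifest:

```yaml
# A sketch of a stateless deployment; names and the image are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: registry-vm:5000/app:latest
          ports:
            - containerPort: 8080
          env:
            # Configuration is injected at deploy time, never baked into the image
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: app-secrets      # a hypothetical Secret managed in-cluster
                  key: database-url
```

Stateless pods, replicas instead of pet servers, and secrets that live in the cluster rather than the repo: the constraints do the architectural thinking for me.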
These aren’t just DevOps lessons—they’re backend engineering fundamentals. They’re exactly what hiring managers want to see: not just “can you write code,” but “can you ship code, operate it, and think through the full lifecycle?”
Why On-Premise Still Matters
There’s a perception that on-premise infrastructure is outdated. I disagree.
The principles of good infrastructure—modularity, repeatability, observability, resilience—are the same whether you’re deploying to a home lab or AWS. If you can build and operate a system on-premise, you can translate those skills to the cloud. The reverse isn’t always true.
And there’s a quiet movement back toward simplicity. Modular monoliths are making a comeback as companies realize microservices come with costs—operational complexity, debugging nightmares, runaway cloud bills. For many projects, a well-architected monolith on a few servers is smarter.
I’m not anti-cloud. I’m pro-simplicity, pro-sustainability, and pro-learning. On-premise infrastructure gives me the best platform to demonstrate those values.
What’s Next
Over the next week, I’ll be setting up Proxmox, configuring VMs, installing K3s, and building out the deployment pipeline. I’ll document the process, capture the decisions, and share what I learn—both successes and stumbles.
Once the infrastructure is solid, I’ll move on to building the backend for PermaTechHub—with confidence, knowing I’ve got a deployment platform that’s reliable, repeatable, and aligned with my goals.
Because that’s what good engineering is about: not just writing code, but building systems that work—systems you can operate, evolve, and trust.
But before I dive into features, there’s one last step: creating a basic repository with all the scaffolding to test the pipeline end-to-end. Wiring up the repo, pushing a trivial change, validating that everything flows from commit to deployment. It’s a small step that turns infrastructure from theory into practice—and gives me the confidence to start building for real.
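The scaffold doesn't need to be clever. Something like this hypothetical two-line Dockerfile is enough to exercise every stage:

```dockerfile
# The smallest image that can flow through the whole pipeline:
# build, push to the local registry, deploy to K3s, load in a browser
FROM nginx:alpine
RUN echo "deployed via the homelab pipeline" > /usr/share/nginx/html/index.html
```

If that page loads over Tailscale after a push to main, the pipeline is real.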