Why Infrastructure Comes Before Code: Building for Real Products, Not Just Demos
In a previous post, I walked through translating user stories into functional requirements. With those foundations in place, you’d think the next step would be to start coding. But you can’t build a house on quicksand. Before I write a single line of business logic for PermaTechHub, I need to know where and how that code will run.
This is the second time I’m choosing foundation over features. A few weeks ago, I made architectural decisions about language, patterns, and internal structure. That post was about how the code would be organized. This one is about where it will actually run. Both are acts of preparation—but they solve different problems. Architecture defines your abstractions. Infrastructure defines your reality.
This is the first of two posts about the infrastructure decisions I’m making for PermaTechHub. Today, I’m focusing on the philosophy and strategic choices—why I’m building infrastructure first, why I’m choosing on-premise over cloud, and the high-level decisions that will shape everything that follows. In the next post, I’ll dive into the practical details of setting up the deployment pipeline.

Why Infrastructure First?
It’s tempting to start coding immediately. I’ve got my functional requirements documented and a clear sense of what the MVP needs to do. Why not just start building features?
The answer is simple: I want real CI/CD from day one, deploying a working system as I go—not just hacking away in isolation. Most demo projects skip this step, leaving deployment as an afterthought, and end up with code that never runs outside a dev laptop. By setting up the infra first, I ensure every feature gets shipped, tested, and operated in a real environment.
Here’s a deeper truth: coding is just a means to an end. A product isn’t a product if it’s not deployed and usable. This approach means I can deploy from day one, making every line of code count toward something real.
Besides, where your code runs shapes how you write it. When I chose to use Kafka for event-driven communication, I knew I’d need infrastructure to support that. If I don’t know whether I’m deploying to a VM, a Kubernetes cluster, or a managed cloud service, I can’t make informed decisions about statelessness, configuration, or resilience. By making infrastructure decisions now, I set clear constraints that guide my development work. Clarity up front prevents chaos later.
The Cloud Question: Why Not AWS?
Let’s address the obvious: why am I not deploying to AWS, Azure, or Google Cloud?
Here’s a confession: I’ve been an AWS and Kubernetes user, yet I still have little idea of how they really work under the hood. The abstractions are powerful, but they hide so much. That’s why I’m building on-premise—to better understand what happens beneath the surface and become a better engineer.
When people ask why I’m avoiding the cloud, the answer comes down to three things: cost, control, and learning.
Cost is obvious. Cloud bills spiral quickly when you’re experimenting. With hardware I already own, my only costs are electricity and time.
Control matters too. I want to understand the full stack—from bare metal to containers to orchestration. How do load balancers actually work? What does it take to run a Docker registry? These aren’t academic questions—they’re practical knowledge that makes you a better engineer.
The cloud’s learning curve is also steeper than you’d think. IAM policies, VPCs, security groups, billing surprises—it’s a lot to wrangle when you’re trying to focus on backend engineering. A local setup lets me control complexity and iterate quickly.
My transition goals are clear: I’m aiming for a backend engineer position with companies that value stability over constant change. I’m not chasing a DevOps role. What I need to demonstrate is that I can build, deploy, and operate a backend system—that I understand the principles, not just cloud-specific implementations.
And here’s the thing: the fundamentals are the same whether you’re deploying to AWS or a home lab. You still need to think about service boundaries, state management, configuration, logging, and resilience. The tools differ, but the engineering discipline is identical.
The Hardware and Virtualization Strategy
I’ve got three old ThinkCentre M900s—each with 4 cores and 16GB of RAM. Not bleeding-edge, but more than enough for a demo project. There’s satisfaction in giving these machines a second life—leaning into the permacomputing values I wrote about in Computing Beyond Obsolescence.
When I first documented my architectural decisions, I planned to install Debian directly on each machine and call it a day. But as I started thinking through the actual deployment workflow, I realized that bare metal has a fatal flaw: inflexibility. Mess up a configuration? Full reinstall. Want to experiment with different setups? Do it all manually, again and again. Need to test a change without breaking your working system? Good luck.
This is one of those moments where early planning reveals better choices. I wasn’t locked into bare-metal yet—I was still in the design phase. So I pivoted.
That’s where Proxmox comes in. This open-source virtualization platform turns my physical servers into a cluster that hosts virtual machines. I can spin up VMs for each service, restore from snapshots when things break, and clone VMs to experiment without fear. It’s a safety net that lets me move fast—exactly what you need during a learning phase. When I inevitably misconfigure something (and I will), I can roll back instead of starting over. When I want to test a new approach, I can clone a VM and experiment in parallel.
Proxmox also enables Infrastructure as Code. I can use Ansible or Terraform to automate VM creation and deployment—building the kind of repeatable, auditable infrastructure that production systems require. This isn’t just convenience; it’s practicing the discipline of treating infrastructure as software.
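To make that concrete, here’s a minimal sketch of what automating VM creation could look like with Ansible’s community.general.proxmox_kvm module, assuming a Debian template already exists on the node. The host, template, and VM names are hypothetical placeholders, and in practice the password would live in Ansible Vault rather than the playbook.

```yaml
# Sketch: clone and start a VM from an existing Proxmox template with Ansible.
# "pve1", "debian-12-template", and "app-vm-01" are hypothetical placeholder names.
- name: Provision an application VM on Proxmox
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Clone a VM from the Debian template
      community.general.proxmox_kvm:
        api_host: pve1.lan
        api_user: root@pam
        api_password: "{{ proxmox_password }}"   # keep this in Ansible Vault, not here
        node: pve1
        clone: debian-12-template
        name: app-vm-01
        state: present

    - name: Start the new VM
      community.general.proxmox_kvm:
        api_host: pve1.lan
        api_user: root@pam
        api_password: "{{ proxmox_password }}"
        node: pve1
        name: app-vm-01
        state: started
```

Terraform’s Proxmox providers cover similar ground; either way, the point is that a VM I can recreate from a playbook is a VM I’m not afraid to break.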
I’m choosing VMs over LXCs. While LXCs are lighter, Docker inside LXCs is tricky. Since my deployment strategy revolves around Docker images, I need a setup that supports Docker cleanly. VMs provide full OS isolation, support Docker natively, and mirror production systems. With 48GB of total RAM, I’m not resource-constrained.
This shift from bare-metal to Proxmox is a small decision, but it illustrates something important: architecture evolves through conversation with reality. My initial plan was simpler, but it wouldn’t have survived contact with actual experimentation. By catching this early—before I’d installed anything—I saved myself weeks of frustration. This is why infrastructure planning matters. It’s not about getting everything perfect; it’s about spotting the traps before you fall into them.
Orchestration: Starting Simple with Docker Compose
How do I orchestrate containers? When I first laid out my architectural vision, I was excited about K3s—a lightweight Kubernetes distribution that would give me hands-on experience with the orchestration tool that actually matters in production environments.
But here’s another lesson in knowing when to defer complexity: K3s is proving too time-consuming for what I need right now. Yes, Kubernetes is the industry standard. Yes, it’s what most backend teams use. But my primary goal is to build and deploy a working backend system—not to become a Kubernetes expert. The orchestration layer is a means to an end, not the end itself.
So I’m starting with Docker Compose. It’s simple, well-understood, and gets me deploying immediately. No service meshes, no pod configurations, no YAML debugging sessions that eat entire afternoons. Just containers, networking, and volumes—the essentials I need to run my modular monolith in production.
This doesn’t mean I’m abandoning K3s forever. It’s still on the roadmap as a nice-to-have evolution, something I want to explore once the core application is stable and deployed. But right now, shipping beats learning. Docker Compose gives me enough orchestration to deploy with confidence—health checks, automatic restarts, service discovery—without the operational overhead that would distract me from building the actual product.
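As a rough sketch, here’s what that could look like in a docker-compose.yml: one service for the monolith and one for a backing database (PostgreSQL here purely as an example), with a health check gating startup, restart policies for resilience, and Compose’s built-in DNS handling service discovery. The image names, ports, and credentials are illustrative placeholders, not the final configuration.

```yaml
# Sketch of a docker-compose.yml for the monolith plus a backing database.
# Image names, ports, and credentials are placeholders for illustration.
services:
  app:
    image: registry.example.lan/permatechhub:latest
    restart: unless-stopped          # automatic restarts
    ports:
      - "8080:8080"
    environment:
      # "db" resolves via Compose's internal DNS: service discovery for free
      DATABASE_URL: postgres://app:app@db:5432/permatechhub
    depends_on:
      db:
        condition: service_healthy   # wait for the health check below

  db:
    image: postgres:16
    restart: unless-stopped
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: permatechhub
    volumes:
      - db-data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app -d permatechhub"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  db-data:
```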
This is a pragmatic choice that reflects what senior engineers know: pick your battles. Not every decision needs to be future-proof from day one. Sometimes the best architecture is the one that lets you move forward today, knowing you can evolve it tomorrow. This connects directly to my earlier decision to use Prometheus and Grafana for observability—Docker Compose still supports running these monitoring tools alongside my application, just without the Kubernetes abstractions.
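Extending the sketch above, the monitoring stack is just two more entries in the same services: block. Again, the images, ports, and password are placeholders, and the referenced prometheus.yml would simply point a scrape job at the app’s metrics endpoint.

```yaml
# Observability services that slot into the same `services:` block as the sketch above.
# Versions, ports, and the admin password are illustrative placeholders.
services:
  prometheus:
    image: prom/prometheus:latest
    restart: unless-stopped
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro   # scrape config kept next to the compose file
    ports:
      - "9090:9090"

  grafana:
    image: grafana/grafana:latest
    restart: unless-stopped
    environment:
      GF_SECURITY_ADMIN_PASSWORD: change-me   # placeholder; set via an env file in practice
    ports:
      - "3000:3000"
    volumes:
      - grafana-data:/var/lib/grafana

volumes:
  grafana-data:
```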
What These Choices Mean
With these foundational decisions—Proxmox for virtualization, VMs for isolation, Docker Compose for orchestration—I’ve set the stage for everything that follows. I know the constraints I’m working within, the tools I’ll use, and the patterns I’ll practice.
These infrastructure choices mirror the architectural decisions I made weeks ago. Back then, I chose a modular monolith over microservices, with internal APIs for clean boundaries. Now, I’m choosing on-premise over cloud and Docker Compose over Kubernetes. Both are acts of deliberate constraint—trading convenience for understanding, complexity for simplicity, and future possibilities for present progress.
The pattern is consistent: start with what works, evolve when needed. My modular monolith can become microservices later if required. My Docker Compose setup can migrate to K3s when the time is right. But none of that matters if I never ship.
In the next post, I’ll walk through the practical implementation: setting up the Docker registry, configuring the GitHub Actions runner, solving the exposure problem with Tailscale, and building a deployment pipeline that works from day one. I’ll also talk about the discipline required to stay focused and avoid getting lost in infrastructure yak-shaving—a trap I nearly fell into with K3s.
Because that’s what this is all about: building a real product, not just writing code. And to do that, I need infrastructure I can trust—simple enough to operate, flexible enough to evolve, and solid enough to support the work ahead.