Architecture First, Features Later: Why I Built My Foundation Before Writing Code
🚨 Long Post Alert.
This is longer than my usual fare. Grab a coffee, settle in, and let’s dive deep.
After my experiment with Spring Boot and AI, I felt ready to take the next step with PermaTechHub, my marketplace for second-hand technology. But before jumping into features or functional requirements, I made a deliberate choice: define the foundational architecture first. This might seem backward—shouldn’t you know what you’re building before deciding how to build it? But here’s the thing: the “how” shapes the “what” more than we often admit. And if you’re building something meant to last, clarity about your architectural principles saves you from painful rewrites later.
This post is about that process—the conversations, the doubts, the alternatives I weighed, and the moments where I had to choose between competing goods. I documented everything in Architecture Decision Records (ADRs), but this isn’t just a dry recitation of technical choices. It’s the story of how I thought through each decision, what kept me up at night, and why I ultimately chose the path I did.

Why Architecture Before Requirements?
It’s tempting to dive straight into user stories, wireframes, and feature lists. But I knew that for a project meant to last years—not months—I needed a technical foundation I could trust. The MVP features I’d outlined gave me a sense of direction, but they didn’t answer the questions that would shape everything else: What language and framework? Monolith or microservices? How will modules communicate? How will I deploy and monitor this thing?
By making these decisions early and documenting them, I could set clear constraints that would guide every subsequent choice, avoid complexity traps by understanding trade-offs up front, and create a roadmap for evolution—from MVP to something more mature. In short, I wanted to build a foundation, not a house of cards held together by duct tape and wishful thinking.
The Core Architectural Decisions
Over the course of this planning session, I created nine foundational ADRs. Each one represents a deliberate choice, with alternatives considered and rationale documented. Here’s the journey.
Java: Boring, Proven, and Built to Last
The first decision was the most fundamental: what language? I’d already written about why I’m choosing Java, but now I had to commit. Python, Go, and Elixir were all tempting. Python for its flexibility, Go for its simplicity, Elixir for its concurrency model and my previous experience with it. But each had trade-offs that didn’t sit right with me. Python’s dynamic nature made me nervous about long-term maintainability. Go’s ecosystem felt too immature for the kind of domain-driven design I wanted to explore. Elixir was brilliant, but niche—and I wanted something with broader reach.
Java, backed by Spring Boot, gave me stability, strong typing, and a culture of craftsmanship I wanted to learn from. It’s not flashy, but it’s the language that powers the world’s critical systems. That felt right for a project meant to endure.
Modular Monolith: The Best of Both Worlds?
Next came the big question: monolith or microservices? Microservices are everywhere, but so is the complexity they bring—distributed transactions, eventual consistency, service discovery. For a solo project, that felt like signing up for distraction, not learning.
Instead, I chose a modular monolith: a single deployable application, but with clear internal boundaries. It’s the middle path—modularity without distribution. I get separation of concerns, encapsulation, and the ability to extract modules into services later if needed. But for now, I keep things simple: one deployment, one codebase, fewer moving parts.
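To give a feel for what “clear internal boundaries” could mean in practice, a rule like that can even be expressed as a test that fails the build when one module reaches into another module’s internals. This is only a sketch, using ArchUnit as an illustration rather than a committed choice, and the package names are hypothetical placeholders for the eventual module layout.

```java
// Sketch: enforcing module boundaries in a build-time test with ArchUnit.
// Package names are hypothetical; the idea is that other modules may only
// touch a module's ..api.. package, never its ..internal.. classes.
import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.importer.ClassFileImporter;
import org.junit.jupiter.api.Test;

import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

class ModuleBoundaryTest {

    private final JavaClasses classes =
            new ClassFileImporter().importPackages("com.permatechhub");

    @Test
    void listingsMustNotReachIntoUsersInternals() {
        noClasses().that().resideInAPackage("..listings..")
                .should().dependOnClassesThat().resideInAPackage("..users.internal..")
                .check(classes);
    }
}
```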
The risk? Without discipline, a monolith can turn into a mess. But that’s where the next decision came in.
Internal APIs: Simulating Microservices Inside a Monolith
To make the modular monolith work, I needed a rule: modules can’t call each other’s methods directly. Instead, they communicate through well-defined internal APIs—simulating the boundaries you’d have in a distributed system. Now, I could have used simple interfaces for decoupling and testability, and that would work. But I wanted something stronger: real API contracts, as if the modules were separate services. It’s more boilerplate, but it enforces boundaries in a way that’s harder to cheat on.
The trade-off is clear: more work up front, but cleaner evolution later. And honestly? It felt like the kind of discipline I wanted to practice. If I can’t maintain boundaries in a monolith, how would I ever manage them across services?
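To make that rule less abstract, here is a hedged sketch of what such an internal contract might look like. All names are hypothetical, not actual PermaTechHub code: the Users module publishes a small API interface from its public package, and other modules depend only on that interface, never on the classes behind it.

```java
// Hypothetical sketch of an internal API contract between modules.
// The Users module exposes this interface from its ..api.. package;
// other modules (Listings, Messaging, ...) depend on the interface only,
// never on the implementing classes in ..internal.. packages.
package com.permatechhub.users.api;

import java.util.Optional;
import java.util.UUID;

public interface UsersApi {

    /** Read-only view that other modules are allowed to see. */
    record UserSummary(UUID id, String displayName, boolean active) {}

    Optional<UserSummary> findUser(UUID userId);
}
```

The Listings module can check that a seller exists and is active without knowing anything about how the Users module stores or validates its data; the implementation (or eventually the deployment) can change while the contract stays put.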
PostgreSQL and Table Prefixes: Data Boundaries Without Schemas
For the database, PostgreSQL was an easy choice—mature, open source, and widely adopted. But how to maintain modular boundaries at the data layer? I considered using separate schemas for each module, but that felt like more complexity than I needed right now.
Instead, I went with a simple convention: three-letter table prefixes (e.g., usr_, lst_, msg_) to signal ownership. Modules manage their own tables, and referential integrity happens at the application level, not via database foreign keys. This keeps modules loosely coupled at the data layer and makes future extraction feasible, without adding operational overhead today.
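For illustration only (entity and column names are placeholders, not the real schema), the convention looks roughly like this with JPA: each module prefixes its own tables, and a reference into another module is a plain ID column rather than a foreign key constraint.

```java
// Hypothetical sketch of the table-prefix convention with JPA.
// The Listings module owns lst_-prefixed tables; the seller is referenced
// by ID only -- no foreign key into the Users module's usr_ tables.
package com.permatechhub.listings.internal;

import jakarta.persistence.Column;
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import jakarta.persistence.Table;
import java.util.UUID;

@Entity
@Table(name = "lst_listing")
public class ListingEntity {

    @Id
    private UUID id;

    // Logical reference to usr_user.id, checked in application code,
    // not enforced by the database.
    @Column(name = "seller_id", nullable = false)
    private UUID sellerId;

    @Column(nullable = false)
    private String title;

    protected ListingEntity() {} // required by JPA
}
```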
It’s a small decision, but it sets the tone: clarity through convention, not through heavy tooling.
JWT and In-House Auth: Learning by Doing
For authentication, I wrestled with whether to build my own or use an external provider like Auth0. Building it myself means more work—user registration, login, password management, token issuance. But it also means hands-on experience with real-world auth patterns, and the freedom to evolve it as I learn.
I chose JWT (despite its quirks) because it’s the industry standard and works seamlessly with Spring Security. Yes, PASETO is arguably more secure, but its ecosystem is less mature. I wanted to focus on learning the principles, not fighting tooling.
The trade-off? JWTs can’t be revoked without extra infrastructure (like a token blacklist). But for an MVP, short-lived tokens and careful validation are enough. I can add complexity later if I need it.
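To ground that, here is a minimal sketch of issuing and validating a short-lived token with the jjwt library. This is one possible approach, not the final design: the fifteen-minute lifetime, the in-memory key, and the bare-bones claim set are all placeholders.

```java
// Minimal sketch of short-lived JWT issuance and validation with jjwt (0.11.x API).
// Lifetime, key handling, and claims are placeholders, not the final design.
import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;
import io.jsonwebtoken.security.Keys;

import javax.crypto.SecretKey;
import java.time.Duration;
import java.time.Instant;
import java.util.Date;

public class TokenService {

    private final SecretKey key = Keys.secretKeyFor(SignatureAlgorithm.HS256);
    private final Duration lifetime = Duration.ofMinutes(15); // short-lived by design

    public String issue(String userId) {
        Instant now = Instant.now();
        return Jwts.builder()
                .setSubject(userId)
                .setIssuedAt(Date.from(now))
                .setExpiration(Date.from(now.plus(lifetime)))
                .signWith(key)
                .compact();
    }

    public String validateAndGetUserId(String token) {
        // Throws a JwtException if the signature is invalid or the token has expired.
        Claims claims = Jwts.parserBuilder()
                .setSigningKey(key)
                .build()
                .parseClaimsJws(token)
                .getBody();
        return claims.getSubject();
    }
}
```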
REST and gRPC: Two APIs, One System
For the public API, REST was the obvious choice—universally understood, easy to document, and well-supported. But I also wanted to explore gRPC for internal communication. Specifically, I decided that my Moderation module would use gRPC to issue commands to the Users and Listings modules (like “ban this user” or “remove this listing”).
Why bother? Because it’s a natural fit for internal, strongly-typed APIs, and it lets me learn both paradigms without over-complicating the public interface. The trade-off is learning two API styles instead of one, but that’s exactly the kind of breadth I’m after.
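I won’t sketch the gRPC side here, since it starts from .proto contracts and generated stubs. The public REST half is plain Spring MVC, roughly like this; the path, the DTO shape, and the hard-coded response are placeholders rather than the real Listings module.

```java
// Hypothetical sketch of one public REST endpoint in the Listings module.
// In the real module this would delegate to the Listings service layer.
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.UUID;

@RestController
@RequestMapping("/api/v1/listings")
public class ListingController {

    public record ListingResponse(UUID id, String title, boolean active) {}

    @GetMapping("/{id}")
    public ResponseEntity<ListingResponse> getListing(@PathVariable UUID id) {
        // Placeholder response; the real endpoint would look the listing up.
        return ResponseEntity.ok(new ListingResponse(id, "Refurbished ThinkPad X230", true));
    }
}
```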
Kafka: Event-Driven From the Start
Some processes—like notifications and payments—are naturally asynchronous. These aren’t in my initial MVP, but they’re high-priority features I plan to add as soon as possible. If time allows, I might even include them before calling the MVP complete. And that’s exactly why I decided to include an ADR on async communication now: I wanted to set the pattern early, so when I do add these features, the architecture is ready.
I could have used synchronous REST calls and called it a day, but that felt fragile. What happens when a notification service is down? Does the whole transaction fail? Instead, I chose an event-driven architecture using Kafka. Modules emit domain events (like UserRegistered or PaymentReceived), and other modules react independently. It’s more complex than direct calls, but it decouples processes and prepares the system for future growth.
I chose Kafka over RabbitMQ because Kafka is built for event streaming and large-scale systems. Yes, it’s overkill for an MVP—but I’ll run it in a simple, single-node setup, and the learning value is worth the extra complexity.
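As a sketch of the pattern (topic names, the event shape, and serialization config are placeholders, and Spring Kafka is just one way to wire it): the Users module publishes a UserRegistered event, and a future Notifications module consumes it on its own schedule.

```java
// Hypothetical sketch of the event-driven pattern with Spring Kafka.
// Topic names, event fields, and serializer configuration are placeholders.
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

import java.util.UUID;

@Component
public class UserEvents {

    /** Domain event emitted by the Users module. */
    public record UserRegistered(UUID userId, String email) {}

    private final KafkaTemplate<String, UserRegistered> kafka;

    public UserEvents(KafkaTemplate<String, UserRegistered> kafka) {
        this.kafka = kafka;
    }

    public void publish(UserRegistered event) {
        // Key by user id so events for the same user stay ordered within a partition.
        kafka.send("users.user-registered", event.userId().toString(), event);
    }
}

@Component
class WelcomeNotificationListener {

    // The Notifications module reacts independently; if it is down, the event
    // waits in the topic instead of failing the registration request.
    @KafkaListener(topics = "users.user-registered", groupId = "notifications")
    void onUserRegistered(UserEvents.UserRegistered event) {
        // send the welcome email / push notification here
    }
}
```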
Home Lab and K3s: Real DevOps, Real Hardware
For deployment, I made a series of decisions that reflect my commitment to hands-on learning. I’ll containerize everything with Docker, use Docker Compose for local development, and then migrate to K3s (a lightweight Kubernetes distribution) running on a home lab cluster—three old ThinkCentre M900s with Debian installed.
Why not just use the cloud? Because I want to learn real infrastructure management—networking, orchestration, hardware—not just how to click buttons in a web console. Plus, it aligns with my permacomputing values: I’m reusing old hardware, keeping things local, and building something sustainable.
Now, DevOps isn’t the core of this project—I’m not trying to become a full-time infrastructure engineer. But here’s the thing: with AI-assisted development, I expect to free up enough time to tackle challenges I wouldn’t have attempted before. AI can help me move faster on the code side, which means I can afford to spend time learning the operational side without getting stuck. It’s a bet that modern tooling—both AI and lightweight platforms like K3s—makes this feasible for a solo developer.
The trade-off? More manual setup, more things to maintain. But I’ll also have a self-hosted Docker Registry, a self-hosted GitHub Actions runner, and full control over my deployment pipeline. It’s more work, but it’s the kind of work that teaches you things the cloud abstracts away.
Prometheus and Grafana: Observability From Day One
Finally, I committed to observability and monitoring from the start. Structured logging (JSON format), Spring Boot Actuator for metrics, Prometheus to scrape them, and Grafana for dashboards and alerts. Even for an MVP, I wanted real visibility into what’s happening—uptime, request counts, error rates.
The alternative was to skip all of this and move faster. But observability isn’t something you bolt on later—it’s part of the foundation. And running Prometheus and Grafana on my home lab cluster means I’m practicing the full stack, not just the happy path.
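As a small illustration of the kind of visibility I mean (the metric and tag names are placeholders): a custom Micrometer counter registered through Spring Boot’s MeterRegistry, which the Actuator Prometheus endpoint then exposes for scraping once it’s enabled.

```java
// Hypothetical sketch of a custom metric exposed via Spring Boot Actuator.
// With the Prometheus registry on the classpath and the endpoint exposed,
// this counter appears at /actuator/prometheus for scraping and alerting.
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.stereotype.Component;

@Component
public class ListingMetrics {

    private final Counter listingsCreated;

    public ListingMetrics(MeterRegistry registry) {
        this.listingsCreated = Counter.builder("listings.created")
                .description("Number of listings created since startup")
                .tag("module", "listings")
                .register(registry);
    }

    public void recordListingCreated() {
        listingsCreated.increment();
    }
}
```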
What This All Adds Up To
Looking back at these nine decisions, a pattern emerges: in almost every case, I chose the harder, cleaner path. Internal APIs instead of direct method calls. Event-driven architecture instead of synchronous calls. A home lab instead of the cloud. Observability from day one instead of bolting it on later.
These choices add up to more work up front. But they also add up to something else: a foundation I can trust, and a project that teaches me things I wouldn’t learn by taking shortcuts. And what is a sabbatical for, if not to take the time to learn properly? When I start writing code, I won’t be second-guessing my architecture. I’ll be executing against a plan I believe in.
What Comes Next
This planning process took several hours of thinking, researching, and writing. But it gave me something invaluable: clarity and direction. With all nine ADRs documented (they’re in the project’s docs/ folder, if you’re curious), I now have a roadmap—not just for the MVP, but for how the project can evolve over time.
I know there will be more architectural decisions to make as the project grows—architecture is never truly “done.” But it’s crucial to set down a solid set of foundational ADRs early, so every new choice builds on something stable instead of shifting sand.
Next up: functional requirements, data modeling, and API design. The architecture will guide every choice, and when I do start coding, I’ll have a clear path from MVP to production. This is how you build something meant to last—not by rushing to features, but by taking the time to get the foundation right. And honestly? It feels good to know where I’m going, even if the journey is just beginning.