Before You Ask: What You'll Find If You Read ClickNBack
There’s a question I told myself to be ready for when I went back to the job market: “Your project sounds interesting—can I actually read the code?” The answer is yes. github.com/jerosanchez/clicknback is fully public, and everything I’ll mention here is verifiable. But a codebase with 20+ Architecture Decision Records, 350+ unit tests, and a live CI/CD pipeline is a wide surface area to land on cold.
So here’s the tour I’d give you if we were talking in person.

Start with the ADRs, Not the Code
docs/design/adr/ holds 23 Architecture Decision Records (at the moment of writing). Each one follows the same structure: context, options considered, decision made, consequences—including what was explicitly rejected and why. These were written before the implementation, which is why the code matches them.
Three are worth reading in sequence. ADR-001 explains why a modular monolith beats microservices for a financially complex system in early development—and names the precise trigger condition that would justify extraction later. ADR-013 covers why purchases are ingested optimistically and confirmed by a background job rather than inline—the decision that immediately raises honest questions about wallet state during the pending window. ADR-016 documents the Fan-Out Dispatcher + Per-Item Runner pattern: each pending item gets its own async task with an isolated retry lifecycle, so one slow confirmation never blocks the batch.
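To make ADR-016 concrete, here is a minimal asyncio sketch of the Fan-Out Dispatcher + Per-Item Runner idea. The function names, retry count, and backoff are my illustrative assumptions, not the project's actual code; the point is the shape: one task per pending item, each with its own retry loop, so a slow or failing item delays only itself.

```python
import asyncio

MAX_RETRIES = 3

async def confirm_item(item: str) -> str:
    # stand-in for the real (network-bound) confirmation call
    await asyncio.sleep(0)
    return f"confirmed:{item}"

async def run_item(item: str) -> str:
    # per-item runner: an isolated retry lifecycle for this item only
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            return await confirm_item(item)
        except Exception:
            if attempt == MAX_RETRIES:
                return f"failed:{item}"
            await asyncio.sleep(0.1 * attempt)  # backoff delays only this item

async def dispatch(items: list[str]) -> list[str]:
    # fan-out dispatcher: one async task per pending item; gather awaits
    # them concurrently, so one slow confirmation never blocks the batch
    return await asyncio.gather(*(run_item(i) for i in items))

results = asyncio.run(dispatch(["a", "b", "c"]))
```

The isolation property falls out of `asyncio.gather`: each runner owns its failures and backoff, and the dispatcher only collects results.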
ADRs are not post-hoc rationalizations. Written before the code, they’re the closest thing a solo project has to a design review.
The Layered Architecture Is Enforced, Not Suggested
Most portfolios claim “clean architecture.” In ClickNBack, the layers are enforced by explicit rules I’ve committed to and documented, not just a folder structure anyone could break on the next commit.
The stack is api → services → policies → repositories → DB, with no skips permitted in either direction. Route handlers in api.py do one thing: translate HTTP to domain and back. They call a service, catch domain exceptions, and map them to standardized JSON error responses—{ "error": { "code", "message", "details" } }. No business logic lives there. Services orchestrate: they call policies to enforce rules, call repositories to read or write data, and own the transaction boundary via a UnitOfWorkABC they commit once, explicitly. Policies are pure functions with no I/O—they receive data, enforce exactly one rule each, and raise a domain exception on violation. Repositories are behind abstract interfaces (RepositoryABC) so nothing aside from the composition root knows which database it is.
The consequence of this structure is that every layer is independently testable without standing up a real database or HTTP server. Services are tested by injecting mock repositories created with create_autospec(TheABC). Policies are tested with plain inputs and expected exceptions—no mocks, no infrastructure. API tests use FastAPI’s TestClient with dependency_overrides to replace the entire service layer with a controlled mock.
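As an illustration of why policies need no mocks, here is a hypothetical policy in the style described: a pure function, no I/O, exactly one rule, a domain exception on violation. The rule and names are mine, not the project's.

```python
from decimal import Decimal

class WithdrawalBelowMinimum(Exception):
    pass

MIN_WITHDRAWAL = Decimal("10.00")

def enforce_minimum_withdrawal(amount: Decimal) -> None:
    # exactly one rule, no I/O: reject withdrawals under the minimum
    if amount < MIN_WITHDRAWAL:
        raise WithdrawalBelowMinimum(f"{amount} is below {MIN_WITHDRAWAL}")

# testing it takes only plain inputs and an expected exception
try:
    enforce_minimum_withdrawal(Decimal("5.00"))
    raised = False
except WithdrawalBelowMinimum:
    raised = True
```

No database, no HTTP server, no mock objects: the entire test surface is the function's signature.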
This also means that when a new engineer reads app/purchases/services.py, they see only business logic—no SQL, no HTTP status codes, no JSON shapes. That clarity compounds over time. Codebases that conflate layers are comprehensible on day one and painful by month six.
Then Read the Tests
The test suite is where most portfolio codebases fall apart. The pattern in ClickNBack: unit tests mock every dependency using create_autospec(TheABC)—against the abstract interface, not the concrete implementation. This matters because it catches the failure mode that a lot of supposedly well-tested systems miss: code that calls the right method name but passes the wrong arguments, or ignores a return value it should be transforming.
Services receive injected repositories and a UnitOfWorkABC they commit explicitly. Every write test asserts uow.commit.assert_called_once() on success, and uow.commit.assert_not_called() when an exception fires before the commit boundary. API tests enumerate every domain exception an endpoint can raise—including the catch-all Exception → 500 fallback—in a single parametrized test so nothing gets quietly dropped.
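A condensed sketch of that pattern, with hypothetical ABCs standing in for the project's RepositoryABC and UnitOfWorkABC. What `create_autospec(TheABC)` buys you is that the mock rejects unknown method names and mismatched arguments, which is exactly the failure mode described above.

```python
from abc import ABC, abstractmethod
from decimal import Decimal
from unittest.mock import create_autospec

class WalletRepositoryABC(ABC):
    @abstractmethod
    def credit(self, wallet_id: int, amount: Decimal) -> None: ...

class UnitOfWorkABC(ABC):
    @abstractmethod
    def commit(self) -> None: ...

def confirm_purchase(repo: WalletRepositoryABC, uow: UnitOfWorkABC,
                     wallet_id: int, amount: Decimal) -> None:
    # service body: orchestrate, then commit exactly once, explicitly
    repo.credit(wallet_id, amount)
    uow.commit()

# mocks specced against the abstract interface, not a concrete class
repo = create_autospec(WalletRepositoryABC)
uow = create_autospec(UnitOfWorkABC)
confirm_purchase(repo, uow, 1, Decimal("4.99"))

repo.credit.assert_called_once_with(1, Decimal("4.99"))
uow.commit.assert_called_once()
```

The mirror-image test, not shown, drives an exception before the boundary and asserts `uow.commit.assert_not_called()`.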
A test that passes when your dependency is broken is worse than no test at all.
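The "enumerate every exception" discipline can be sketched framework-free: one table drives each exception-to-status expectation, including the catch-all, so a newly added exception cannot be quietly dropped. The exception names here are hypothetical; in the real suite this table would feed a parametrized pytest test.

```python
class PurchaseNotFound(Exception): ...
class DuplicatePurchase(Exception): ...

STATUS_BY_EXC = {PurchaseNotFound: 404, DuplicatePurchase: 409}

def status_for(exc: Exception) -> int:
    # unknown exceptions fall through to the Exception -> 500 catch-all
    return STATUS_BY_EXC.get(type(exc), 500)

cases = [
    (PurchaseNotFound(), 404),
    (DuplicatePurchase(), 409),
    (RuntimeError("unexpected"), 500),  # the catch-all, tested explicitly
]
results = [status_for(exc) == expected for exc, expected in cases]
```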
The Hard Part Was Financial Correctness
Decimal everywhere, never float. SELECT FOR UPDATE row-level locking on wallet mutations. A three-state balance model—pending, available, paid—that stays consistent across a confirmation, a reversal, and a withdrawal landing in the same window. Idempotency enforced by a UNIQUE constraint on external_id at the database level, not just application logic.
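A minimal in-memory sketch of the three-state transitions, under assumed field and function names (the locking and the UNIQUE constraint live at the database layer and are not shown). The invariant worth seeing is that every transition conserves the total, and Decimal keeps cents exact where float arithmetic would drift.

```python
from dataclasses import dataclass
from decimal import Decimal

@dataclass
class Wallet:
    pending: Decimal
    available: Decimal
    paid: Decimal

    def total(self) -> Decimal:
        return self.pending + self.available + self.paid

def confirm(w: Wallet, amount: Decimal) -> None:
    # confirmation moves value pending -> available
    w.pending -= amount
    w.available += amount

def withdraw(w: Wallet, amount: Decimal) -> None:
    # withdrawal moves value available -> paid
    if amount > w.available:
        raise ValueError("insufficient available balance")
    w.available -= amount
    w.paid += amount

w = Wallet(Decimal("5.00"), Decimal("0.00"), Decimal("0.00"))
before = w.total()
confirm(w, Decimal("5.00"))
withdraw(w, Decimal("3.00"))
assert w.total() == before  # the conservation invariant, at every step
```

Under concurrency this invariant is exactly what the row-level lock protects: two transitions racing on the same wallet must serialize, or the arithmetic above stops being true.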
These constraints don’t appear in tutorials because tutorials don’t have to be right. They appear when you sit with the domain long enough to understand what “correct” means when money is involved—and when you’ve spent time alongside backend teams who had to get this right with real consequences, like I did at a cashback startup years ago.
AI Is in the Workflow, Not Just in the Editor
This is the section that people ask about now, and I’d rather answer it directly than leave it ambiguous.
ClickNBack is built with AI tooling—actively and deliberately. But the way it’s used is specific, and the specificity is what matters. There are six prompt files living in .github/prompts/: build-feature, write-tests, review-code, create-module, add-migration, setup-for-prod. Each one is a structured workflow—not a question I type into a chat window, but a repeatable process that ensures every feature goes through the same sequence of steps: validate the spec, implement the layers in order, write the migration, write the tests, run the quality gates.
The review-code prompt is a 40-item checklist: layer violations, error handling correctness, security issues, logging discipline, database migration hygiene, test completeness. I run it against every diff before considering a change done. That’s not AI writing my code—that’s AI holding me accountable to the standards I’ve already defined.
Beyond the prompts, there's AGENTS.md at the repository root and 10+ guideline documents under docs/guidelines/. These encode the project's conventions and product context: how modules are structured, when to split a file into a package, how tests are named, what makes an ADR worth writing. Together with the docs/design and docs/specs folders, they are the context an AI assistant needs to be genuinely useful rather than generically helpful, and they double as onboarding documentation that a human engineer could follow independently.
The distinction I care about: AI amplifies defined standards. It doesn’t substitute for them. Without the guidelines, prompt files, and architectural constraints, the same AI tools would produce code that works today and surprises you in production. With them, they accelerate the work of building something consistent.
The Pipeline Ships on Every Commit
Every commit that passes all the quality gates—flake8 + isort + black for style, pytest with an 85% coverage hard gate, and bandit at medium/high severity for security—triggers an automatic deployment to a real VPS. Not a staging environment. The same server that serves clicknback.com/docs.
The coverage hard gate is the part worth emphasizing. If a commit drops below 85%, the pipeline fails and nothing ships. That’s not a convention or a reminder—it’s enforced. The security scan is the same: Bandit flags medium and high severity findings automatically, and the build fails until they’re resolved. In most teams I’ve worked on, these gates exist only in theory; someone always finds a reason to bypass them when there’s a deadline. There’s no deadline here, which is why the gates are still in place.
There’s also a make lint pre-commit hook that runs locally before anything reaches the repository. The CI pipeline is a second line of defense, not the first.
What I’d Do Differently
The older modules—auth, users, merchants—are still on the synchronous SQLAlchemy stack. The newer ones (purchases, wallets) use the full async path. The migration is underway but deliberately paced on the backlog, which means two patterns live side by side in the same codebase right now. That's worth naming honestly rather than letting someone find it cold and wonder if it was accidental.
That’s the tour. The system is live at clicknback.com/docs—no setup needed, hit the endpoints directly. The code is at github.com/jerosanchez/clicknback. Code review is genuinely welcome.