
The boring architecture that ships

There's a certain kind of conference talk that goes: we had a monolith, we broke it into microservices, we built a service mesh, we have a platform team, here's our architecture diagram. The diagram has thirty boxes and sixty arrows and everyone nods approvingly.

The talk nobody gives: we have a Postgres database, a handful of services, a message queue, and we shipped twelve features last quarter.

Complexity Has a Cost Nobody Accounts For

Every abstraction layer you add is something that can fail, something that needs to be understood by new engineers, something that has to be operated, monitored, and debugged. A service mesh solves real problems at scale. It also adds a non-trivial operational surface area that you're now responsible for.

The teams I've seen move fast consistently have small, comprehensible systems. Not because they were lazy about architecture, but because they were deliberate about it. They added complexity when they hit a specific wall that required it, not in anticipation of walls they might hit.

Most Problems Are Data Problems, Not Architecture Problems

When systems get slow or unreliable, the instinct is often to reach for an architectural solution: add a cache, split the service, introduce a queue. Sometimes that's right. More often, the problem is a missing index, an N+1 query, or a database schema that doesn't match the access patterns.
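The N+1 shape is easy to miss in application code. A minimal sketch, using a hypothetical authors/posts schema and SQLite in place of Postgres (the shape of the problem is the same):

```python
import sqlite3

# Hypothetical schema for illustration; SQLite stands in for Postgres here.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'Ann'), (2, 'Ben');
    INSERT INTO posts VALUES (1, 1, 'First'), (2, 1, 'Second'), (3, 2, 'Third');
""")

# N+1: one query for the list, then one more query per row it returns.
n_plus_one = []
for author_id, name in conn.execute("SELECT id, name FROM authors"):
    for (title,) in conn.execute(
            "SELECT title FROM posts WHERE author_id = ?", (author_id,)):
        n_plus_one.append((name, title))

# The boring fix: a single JOIN, one round trip the database can plan.
joined = conn.execute("""
    SELECT a.name, p.title
    FROM authors a JOIN posts p ON p.author_id = a.id
    ORDER BY p.id
""").fetchall()

assert joined == n_plus_one  # same data, one query instead of N+1
```

With two authors the difference is invisible; with ten thousand, the loop issues ten thousand and one queries where the JOIN issues one.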

I've seen teams spin up Redis clusters to work around queries that were just missing an index. The cache treated the symptom, added operational overhead, and left the underlying problem in place to cause different issues later. The boring fix, fixing the query, would have taken a morning.
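That morning's fix can be sketched in a few lines. A hypothetical orders table, with SQLite's EXPLAIN QUERY PLAN standing in for Postgres's EXPLAIN:

```python
import sqlite3

# Hypothetical table for illustration; SQLite's EXPLAIN QUERY PLAN stands in
# for Postgres's EXPLAIN, but the before/after story is the same.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 100, float(i)) for i in range(1000)])

query = "SELECT total FROM orders WHERE customer_id = ?"

# Before: no index on customer_id, so every lookup scans the whole table.
before = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchone()[3]
print(before)   # a SCAN over orders

# The boring fix: one line of DDL, no new infrastructure to operate.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchone()[3]
print(after)    # a SEARCH using idx_orders_customer
```

The plan flips from a scan to an index search with no cache to invalidate, no cluster to page anyone about.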

Boring Doesn't Mean Staying Still

The argument for boring architecture isn't an argument against change or improvement. It's an argument for the smallest change that solves the actual problem.

Postgres can handle a lot before you need something else. A well-structured monolith can scale further than people expect before service decomposition pays off. A synchronous HTTP API is easier to reason about and debug than an event-driven system until the moment it isn't. These aren't permanent constraints: they're defaults that buy you simplicity until you have a specific reason to trade it away.

The Culture Signal

Architecturally conservative teams tend to have something in common: they're good at saying no to complexity, which means they're good at saying no in general. They push back on gold-plating, on premature optimisation, on building for requirements that don't exist yet. That skill compounds. Teams that develop it ship more of the right things.

The teams that reach for complex solutions early often do so for legitimate reasons: they're thinking ahead, and they want to do things properly. But there's a version of it that's really about finding the interesting technical problem inside the boring product requirement. Boring architecture requires resisting that.

The Bottom Line

The most reliable systems I've worked with were not the most sophisticated ones. They were systems whose behaviour was predictable, whose failure modes were understood, and whose codebases could be navigated by someone new without a two-hour onboarding walkthrough of the data flow.

If you're trying to decide between a simple solution and a clever one, and you can't articulate a specific current problem that the clever solution solves, ship the simple one. You can always add complexity later. You can rarely remove it.