Why performance isn't always the north star
I spend a lot of time thinking about performance. I use Cap'n Proto over Protobuf because parsing overhead matters. We run Aeron where most teams would reach for Kafka because microseconds matter on our hot path. I care about allocations, cache locality, and lock contention in ways that probably seem obsessive to people outside this domain.
So when I say performance isn't always the north star, I want that to land with the weight of someone who genuinely believes in chasing it, not as a dismissal from someone who's never had to.
The Two Questions Worth Asking First
Before optimising anything, I ask: where is the actual bottleneck, and does it matter to the user?
The first question is basic profiling discipline. Most systems have one or two real bottlenecks; the rest is noise. Optimising noise costs engineering time and makes code harder to read, for zero user-visible impact.
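As a sketch of that discipline, here's a toy request handler profiled with Python's cProfile. The stage names (`parse_request`, `query_database`, `render_response`) and their workloads are hypothetical stand-ins, not anyone's real pipeline; the point is that ranking functions by their own time makes the single real bottleneck obvious before you touch anything.

```python
# Hypothetical request stages; the busy loops stand in for real work.
import cProfile
import pstats

def parse_request():
    total = 0
    for i in range(2_000):        # cheap stage
        total += i
    return total

def query_database():
    total = 0
    for i in range(300_000):      # the one real bottleneck
        total += i * i
    return total

def render_response():
    total = 0
    for i in range(5_000):        # cheap stage
        total += i
    return total

def handle_request():
    parse_request()
    query_database()
    render_response()

profiler = cProfile.Profile()
profiler.enable()
handle_request()
profiler.disable()

# Rank profiled frames by their own (non-cumulative) time, tt,
# which is index 2 of each pstats entry.
stats = pstats.Stats(profiler)
slowest = max(stats.stats.items(), key=lambda kv: kv[1][2])
print(slowest[0][2])  # → query_database
```

Everything else in that profile is the "noise" mentioned above: optimising it costs readability for no measurable gain.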
The second question is harder. A system that responds in 8ms instead of 12ms is meaningfully faster. A system that responds in 80ms instead of 120ms is not; humans can't perceive that difference in most UI contexts. If you've spent a sprint chasing a 40ms improvement that sits behind a 200ms network round-trip, you've made your code harder to maintain for a metric nobody will ever see.
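Plugging those numbers into a back-of-envelope budget (the figures are illustrative, matching the example above) shows how little of the end-to-end latency that sprint actually moved:

```python
# Illustrative latency budget: a 40ms server-side win behind a 200ms RTT.
network_rtt_ms = 200
server_before_ms = 120
server_after_ms = 80

total_before = network_rtt_ms + server_before_ms  # 320ms end to end
total_after = network_rtt_ms + server_after_ms    # 280ms end to end

relative_gain = (total_before - total_after) / total_before
print(f"{relative_gain:.1%} end-to-end improvement")  # 12.5%
```

A 33% server-side improvement shrinks to a 12.5% change in what the user actually experiences, most of which is swallowed by the network they were already waiting on.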
When Correctness Beats Performance
This one should be obvious but it isn't always. I've seen production systems where a race condition was known but unfixed because fixing it would 'add overhead.' The overhead was a mutex. The race condition cost three hours of incident response every few weeks.
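A minimal sketch of that trade, using a hypothetical shared counter in Python: the entire "overhead" in question is one lock acquisition per update, and it's what makes the result deterministic instead of lucky.

```python
# Hypothetical workload: several threads bumping one shared counter.
import threading

THREADS = 8
INCREMENTS = 10_000

counter = 0
lock = threading.Lock()

def unsafe_increment():
    global counter
    for _ in range(INCREMENTS):
        counter += 1          # read-modify-write with no lock: a race

def safe_increment():
    global counter
    for _ in range(INCREMENTS):
        with lock:            # the whole "overhead" that fixes the bug
            counter += 1

def run(worker):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker) for _ in range(THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

# The locked version always lands on the expected total.
print(run(safe_increment))  # 80000
```

Note that on modern CPython the unlocked version may happen to produce the right answer too, thanks to interpreter scheduling details; that's exactly the trap. Correctness that depends on luck is the race condition from the incident reports above.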
Performance optimisations that compromise correctness aren't optimisations; they're deferred bugs. The correct, slower version is almost always the right starting point. You can optimise a correct system. You can't reliably reason about a fast, broken one.
When Maintainability Beats Performance
The code you write today will be read, debugged, and modified by someone (maybe you in six months) who doesn't have the context you have right now. Highly optimised code is often harder to follow. Sometimes that's unavoidable; sometimes it's a choice.
I've written some genuinely ugly low-level code for hot paths where the ugliness was worth it. But I've also seen teams apply the same mindset to code that's called twice per user session. The result is a codebase that's opaque everywhere and fast nowhere that matters.
The Bottom Line
Performance is a feature. It's one of the most important features in systems that need it. But it has a cost: in development time, code complexity, and the cognitive load it places on everyone who works in that codebase after you.
Measure before you optimise. Optimise where it's visible to users or where it's genuinely bottlenecking the system. And when someone wants to sacrifice correctness or clarity for a performance gain that doesn't show up in anything observable, push back hard.