WebSockets

When we first needed real-time updates in our app, we did what most teams do — we polled. Every two seconds, fire a request, check if anything changed, move on. It worked. Mostly. But it felt wrong, and at scale it was going to be a disaster.

That's what pushed us toward WebSockets.

What You're Actually Getting

HTTP is a one-way street per request. You ask, the server answers, the connection closes. WebSockets flip that model: you open one connection and both sides can talk whenever they want. The server doesn't wait to be asked.

Technically it starts as an HTTP request — the client sends an Upgrade header, the server agrees with a 101 Switching Protocols response, and from then on the same TCP connection carries WebSocket frames in both directions. The per-message overhead drops to a few framing bytes, versus full HTTP headers on every poll.
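The upgrade exchange looks roughly like this (trimmed to the essential headers; the path and host are illustrative, and the key/accept values are the example pair from RFC 6455):

```
GET /chat HTTP/1.1
Host: example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

After that 101, it's no longer HTTP at all — both sides just write frames whenever they have something to say.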

Key properties:

• Full-duplex — server and client both push, no waiting

• Persistent connection — no handshake cost per message

• Low latency — no polling delay, updates arrive immediately

• Stateful — the connection carries context across the session

Polling Is a Hack

I don't mean that as a criticism — polling solves real problems and it's simple to reason about. But it's fundamentally a workaround for not having a push channel. You're paying for N requests to get 1 update. At low traffic that's fine. At any meaningful scale you're burning resources on empty responses.

The other thing polling can't do well is latency. If you poll every 2 seconds, your average update lag is 1 second. Tighten the interval and you compound the cost. WebSockets just... tell you when something happens.
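The trade-off is easy to quantify: with a fixed interval T, your average lag is T/2, and request volume scales inversely with T. A quick back-of-the-envelope sketch (the numbers are illustrative, not from our system):

```python
def polling_cost(interval_s: float, updates_per_hour: float):
    """Return (avg_lag_s, requests_per_hour, wasted_requests_per_hour)
    for naive fixed-interval polling."""
    requests_per_hour = 3600 / interval_s
    avg_lag = interval_s / 2  # an update lands uniformly within the interval
    wasted = max(requests_per_hour - updates_per_hour, 0)  # empty responses
    return avg_lag, requests_per_hour, wasted

# Poll every 2 s for data that changes ~10 times an hour:
lag, reqs, wasted = polling_cost(2.0, 10)
# 1800 requests/hour, ~1790 of them come back empty, and updates
# still arrive 1 s late on average.
```

Halving the interval halves the lag but doubles the request count — there's no setting where polling wins on both axes.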

Where It Actually Makes Sense

Real-time chat is the obvious one. But we used it for collaborative document editing — multiple users seeing each other's cursors and keystrokes without everyone constantly hammering the API. Live dashboards, notifications, multiplayer games, trading UIs. Anything where the data model changes faster than a reasonable poll interval.

Where it doesn't make sense: reading a blog post, submitting a form, fetching search results. Request-response is the right model for those. WebSockets have overhead (connection management, reconnection logic, server state) that you don't want unless you need the push channel.

The Part Nobody Warns You About

Reconnection. WebSocket connections drop — network blips, server restarts, mobile switching between WiFi and cell. You need logic to detect disconnection and reconnect with backoff. Then you need to reconcile state: what did you miss while you were offline?
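The backoff part is the piece most home-grown reconnect loops get wrong. A minimal sketch of the schedule we're describing — exponential growth, a cap, and jitter so a fleet of clients doesn't reconnect in lockstep after a server restart (all parameter values here are illustrative defaults, not recommendations):

```python
import random

def backoff_delays(base=0.5, cap=30.0, factor=2.0, attempts=6, jitter=True):
    """Yield reconnect delays: exponential growth, capped at `cap`,
    with optional full jitter to spread out reconnect storms."""
    delay = base
    for _ in range(attempts):
        yield random.uniform(0, delay) if jitter else delay
        delay = min(delay * factor, cap)

# Without jitter the schedule is 0.5, 1, 2, 4, 8, 16 seconds.
print(list(backoff_delays(jitter=False)))
```

The state-reconciliation half has no one-liner: a common approach is to tag every message with a sequence number so the client can ask for everything after the last one it saw, but that's a protocol decision you have to design in from the start.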

In production we also ran into load balancer issues. WebSockets require sticky sessions or a shared pub/sub layer (we used Redis) because the connection is stateful and tied to a specific server instance. Horizontal scaling isn't automatic like it is with HTTP.
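One way to picture the shared layer: every server instance subscribes to a channel and relays incoming messages to its own local connections, so it no longer matters which instance a given client landed on. A toy in-memory broker standing in for Redis (everything here is illustrative — real instances would hold live sockets, not lists):

```python
from collections import defaultdict

class Broker:
    """Toy stand-in for Redis pub/sub: fans each published message
    out to every subscriber, wherever it originated."""
    def __init__(self):
        self.subscribers = defaultdict(list)  # channel -> callbacks

    def subscribe(self, channel, callback):
        self.subscribers[channel].append(callback)

    def publish(self, channel, message):
        for cb in self.subscribers[channel]:
            cb(message)

class ServerInstance:
    """One app server: holds its local WebSocket connections
    (modelled as plain lists) and relays broker messages to them."""
    def __init__(self, broker, channel="chat"):
        self.clients = []  # would be live WebSocket connections
        broker.subscribe(channel, self.relay)

    def relay(self, message):
        for client in self.clients:
            client.append(message)

broker = Broker()
a, b = ServerInstance(broker), ServerInstance(broker)
alice, bob = [], []          # "sockets" attached to different instances
a.clients.append(alice)
b.clients.append(bob)
broker.publish("chat", "hi") # reaches both, despite different servers
```

Swap the in-memory broker for Redis `PUBLISH`/`SUBSCRIBE` and the shape is the same — which is exactly why the connection's statefulness stops being a scaling blocker.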

Libraries like Socket.IO handle a lot of this, but they also add abstraction you'll eventually want to see through when something breaks at 2am.

Worth It?

For the right use case, absolutely. Once we switched from polling to WebSockets for our real-time features, the server load dropped noticeably and the UX got genuinely better — updates felt instant instead of lagged.

But go in knowing it's more infrastructure than a REST endpoint. You're managing connections, not requests. If your team isn't ready for that operational complexity, Server-Sent Events (SSE) might get you 80% of the way there with a fraction of the overhead — it's one-directional (server to client only), but for a lot of use cases that's all you need.