Every product wants “real-time” now—live dashboards, multiplayer collaboration, notifications, in-app chat. Architecturally, that boils down to one question: how does your server push updates to the browser when HTTP is fundamentally request/response and client-initiated?
Modern web stacks give you three main primitives for this: short/long polling, Server-Sent Events (SSE), and WebSockets. They overlap in outcomes, but not in cost, scalability characteristics, or complexity. In this post, we’ll unpack each, look at concrete JavaScript examples, and define selection heuristics you can use in system design interviews and in your production stack.
Conceptual Model: What Are We Solving?
Browsers can’t be called directly by the server; the browser always initiates the connection. “Real-time” is effectively an illusion built from one of these patterns:
- The client keeps asking for updates (polling).
- The client opens a long-lived HTTP response that the server streams into (long polling or SSE).
- The client and server upgrade to a full-duplex socket (WebSocket).
Everything else—webhooks, WebRTC, WebTransport—either builds on or sits adjacent to these patterns. We’ll stay focused on the three you’re most likely to implement in a standard web app.
Short Polling and Long Polling
Short polling
Short polling: the client hits an HTTP endpoint on a timer (e.g., every 5 seconds), the server responds immediately with the latest state or an empty response, and the connection closes.
Pros:
- Dead simple: works with any backend, any infra, any proxy/CDN.
- Fits perfectly into REST/HTTP mental models and tooling.
Cons:
- Many requests with mostly “no-op” responses.
- Wasted CPU, bandwidth, and battery; poor at scale or low-latency UX.
Minimal JS example:
```js
// Short polling every 5 seconds
async function poll() {
  try {
    const res = await fetch('/api/notifications');
    if (!res.ok) return;
    const data = await res.json();
    renderNotifications(data);
  } catch (err) {
    console.error('Polling error', err);
  } finally {
    setTimeout(poll, 5000); // fixed interval
  }
}

poll();
```
This is acceptable for low-frequency, non-critical updates (e.g., “check every minute if a long-running job is done”).
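For anything beyond a toy, a fixed interval wastes requests while the job is idle. A common refinement is exponential backoff with a cap, so the client polls quickly at first and settles down over time. Here is a minimal sketch; the `nextDelay` and `pollUntilDone` helper names are illustrative, not a standard API:

```js
// Sketch: short polling with exponential backoff and a cap.
function nextDelay(attempt, { baseMs = 5000, maxMs = 60000 } = {}) {
  // 5s, 10s, 20s, 40s, then capped at 60s
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// Poll `url` until the `isDone` predicate passes; `fetchFn` is injectable
// so this can be tested without a server.
async function pollUntilDone(url, isDone, fetchFn = fetch) {
  for (let attempt = 0; ; attempt++) {
    const res = await fetchFn(url);
    const data = await res.json();
    if (isDone(data)) return data; // job finished: stop polling
    await new Promise(resolve => setTimeout(resolve, nextDelay(attempt)));
  }
}
```

Production versions usually also add jitter to the delay so many clients don't re-poll in lockstep.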
Long polling
Long polling reduces empty responses: the client sends a request and the server holds it open until new data is available (or a timeout).
- Client sends GET /updates.
- Server waits until an event arrives or a max wait (e.g., 30s).
- Server responds with data, connection closes.
- Client immediately issues another GET /updates.
Pros:
- Better latency than short polling, fewer “empty” responses.
- Still HTTP; fits well behind load balancers, works with cookies, auth headers, etc.
Cons:
- Each event still costs a full HTTP request/response cycle (headers every time, and potentially a new TLS handshake if the connection isn't reused).
- Reconnection gap: if an event arrives while the client is re-establishing its next request, delivery waits for that request to land, so latency spikes.
- Harder to scale: large numbers of concurrently held-open requests tie up server resources (sockets, worker threads, or event-loop handles).
Client example:
```js
function longPoll() {
  fetch('/api/updates')
    .then(res => {
      // The server answers 204 when the wait timed out with no data
      if (res.status === 204) return null;
      return res.json();
    })
    .then(data => {
      if (data) handleUpdate(data);
      // Immediately re-open the long poll
      longPoll();
    })
    .catch(err => {
      console.error('Long poll error', err);
      // Back off before retrying
      setTimeout(longPoll, 1000);
    });
}

longPoll();
```
Server sketch:

```js
// Express-like pseudo-code
app.get('/api/updates', async (req, res) => {
  const userId = req.user.id;
  const update = await waitForNextUpdate(userId, { timeoutMs: 30000 });
  if (!update) {
    return res.status(204).end(); // no content: client simply re-polls
  }
  return res.json(update);
});
```
This works well for “bursty but not ultra-chatty” event streams such as notifications or job status.
Server-Sent Events (SSE)
SSE is a web standard (originally specified at the W3C, now maintained in the WHATWG HTML Living Standard) that lets servers push a stream of text events over a single long-lived HTTP response using the text/event-stream content type. Unlike long polling, the connection is an explicit stream, and the browser provides a dedicated EventSource API.
How SSE works
- Client creates new EventSource('/sse').
- Browser opens a persistent HTTP connection and keeps it open.
- Server sends events in a simple line-based format, e.g.:

```text
event: message
data: {"text":"hello"}
```
- If the connection drops, the browser auto-reconnects and can resume using Last-Event-ID.
Client example:
```js
const es = new EventSource('/sse');

es.addEventListener('open', () => {
  console.log('SSE connection opened');
});

es.addEventListener('message', (event) => {
  const payload = JSON.parse(event.data);
  handleUpdate(payload);
});

es.addEventListener('error', (err) => {
  console.error('SSE error', err);
});
```
Server example (Node.js/Express-style):
```js
app.get('/sse', (req, res) => {
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');
  res.flushHeaders(); // send headers now so the client's "open" event fires

  const send = (event, data) => {
    res.write(`event: ${event}\n`);
    res.write(`data: ${JSON.stringify(data)}\n\n`);
  };

  send('message', { text: 'Initial hello' });

  const subscription = subscribeToUpdates((update) => {
    send('message', update);
  });

  req.on('close', () => {
    subscription.unsubscribe();
    res.end();
  });
});
```
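The server example streams events but omits `id:` fields, so the browser's Last-Event-ID resume has nothing to work with. A small serializer sketch that includes an id; the `formatSseEvent` helper is illustrative, not a framework API:

```js
// Serialize one SSE event. Including "id:" lets the browser resume after a
// reconnect by sending the Last-Event-ID request header.
function formatSseEvent({ id, event = 'message', data }) {
  const lines = [];
  if (id !== undefined) lines.push(`id: ${id}`);
  lines.push(`event: ${event}`);
  // Defensive: each line of the payload gets its own "data:" field
  // (JSON.stringify itself never emits raw newlines).
  for (const line of JSON.stringify(data).split('\n')) {
    lines.push(`data: ${line}`);
  }
  return lines.join('\n') + '\n\n'; // blank line terminates the event
}
```

On reconnect the server can read the header (in Express, `req.headers['last-event-id']`) and replay any events the client missed.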
SSE characteristics
Pros:
- Very simple client API and wire format; text-only stream.
- Auto-reconnect and restart semantics handled by the browser.
- One-way server→client push, which matches many UI needs (feeds, notifications, logs, progress).
- Uses regular HTTP/1.1 or HTTP/2, so it plays well with many proxies/CDNs.
Cons:
- Client→server messages still need regular HTTP (e.g., fetch).
- Text only; no binary frames.
- Over HTTP/1.1, each open SSE stream consumes one of the browser's per-origin connection slots (typically six); HTTP/2 multiplexing removes this limit.
- Some intermediaries buffer or terminate long-lived responses unless configured correctly (e.g., disabling proxy buffering).
SSE is a strong default for “read-heavy, event-stream” use cases: live dashboards, event logs, stock tickers, or analytics overlays.
WebSockets
WebSockets provide a full-duplex, persistent connection between browser and server, typically over ws:// or wss://. After an HTTP handshake, the connection upgrades to a WebSocket, and both parties can send messages independently at any time.
Client example (MDN-style):
```js
// Create the connection
const socket = new WebSocket('wss://example.com/ws');

// Connection opened
socket.addEventListener('open', () => {
  socket.send(JSON.stringify({ type: 'hello', userId: '123' }));
});

// Listen for messages
socket.addEventListener('message', (event) => {
  const msg = JSON.parse(event.data);
  handleMessage(msg);
});

// Optional: handle close/error
socket.addEventListener('close', () => console.log('Socket closed'));
socket.addEventListener('error', (err) => console.error('Socket error', err));
```
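One pitfall the example glosses over: calling `send()` before the `open` event fires throws an `InvalidStateError`, and unlike `EventSource`, `WebSocket` does not reconnect automatically. A common pattern is a thin wrapper that buffers outgoing messages until the socket is open. This `BufferedSocket` class is a sketch, not a library API; it only assumes the socket exposes `readyState`, `send`, and `addEventListener`:

```js
const OPEN = 1; // WebSocket.OPEN

class BufferedSocket {
  constructor(socket) {
    this.socket = socket;
    this.queue = [];
    socket.addEventListener('open', () => this.flush());
  }
  send(msg) {
    if (this.socket.readyState === OPEN) {
      this.socket.send(msg);
    } else {
      this.queue.push(msg); // not open yet: buffer instead of throwing
    }
  }
  flush() {
    while (this.queue.length) this.socket.send(this.queue.shift());
  }
}
```

A production wrapper would also recreate the socket on `close` with backoff and re-flush the queue, which is essentially what managed WebSocket SDKs do for you.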
WebSocket characteristics
Pros:
- True bi-directional messaging; low latency for both directions.
- Good fit for chat, multiplayer, collaborative editors, and high-frequency streams.
- Supports text and binary frames.
Cons:
- Requires stateful, connection-aware infrastructure (connection tracking, sharding, sticky sessions or a dedicated gateway).
- Not all HTTP infra handles upgrades gracefully; sometimes you need separate services, ports, or managed WebSocket infrastructure.
- Backpressure and fan-out become your problem: broadcasting to many clients can get complex at scale.
In practice, many teams now push WebSockets behind a managed service (Pusher, Ably, AWS API Gateway WebSockets, etc.) for high-scale scenarios.
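To make the fan-out cost concrete, here is a naive broadcast sketch over a set of WebSocket-like clients. It is framework-agnostic; with the popular `ws` package on Node, the server exposes a similar `wss.clients` set. Skipping non-open sockets avoids errors on connections that are closing:

```js
const OPEN = 1; // WebSocket.OPEN

// Send `payload` to every open client; returns how many received it.
function broadcast(clients, payload) {
  const msg = JSON.stringify(payload); // serialize once, not per client
  let delivered = 0;
  for (const client of clients) {
    if (client.readyState === OPEN) {
      client.send(msg);
      delivered++;
    }
  }
  return delivered;
}
```

Note this is O(clients) synchronous work per event with no backpressure handling; at real scale you would shard connections and watch each socket's buffered amount before writing more.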
When to Use What (With a Table)
Here's a pragmatic decision matrix to keep in your head for interviews and architecture reviews:

| Pattern | Direction | Latency | Transport | Complexity | Typical uses |
|---|---|---|---|---|---|
| Short polling | client pulls | seconds+ | plain HTTP | very low | job status, infrequent checks |
| Long polling | server→client, per request | sub-second to seconds | plain HTTP | low | notifications, bursty events |
| SSE | server→client stream | low | HTTP, text/event-stream | low–medium | feeds, dashboards, logs |
| WebSocket | full-duplex | lowest | upgraded socket | high | chat, multiplayer, collaboration |
Rules of thumb:
- If you just need “eventual” updates and simplicity, use short polling.
- If you want push-like behavior but must stay pure HTTP and can tolerate some overhead, use long polling.
- If you need one-way streams (feed, logs, market data) and low latency, prefer SSE.
- If you need two-way, low-latency interaction (chat, multiplayer, collaborative editing), use WebSockets.
Mini Case Study: Live Notifications in a SaaS App
Imagine a B2B SaaS dashboard showing:
- Background job progress.
- Real-time notifications (mentions, approvals).
- Occasional live counters (active users on this project).
Possible designs:
- MVP, very simple stack
  - Short polling /notifications every 10–15 seconds.
  - Background job status polled every 5–10 seconds.
  - This is probably enough to ship v1 quickly.
- Slightly more real-time without infra changes
  - Switch to long polling /notifications/stream.
  - Server holds the request until a notification is available or 30s passes.
  - Job progress is still short-polling (progress data is relatively infrequent).
- High-quality UX with minimal complexity
  - Use SSE on /events.
  - Server pushes notification, job:update, and counter:update events over one stream.
  - Client fans them out in the UI via EventSource.
- Full interaction (chat, presence, cursor sharing)
  - Introduce WebSockets; client opens one socket connection, authenticates, and subscribes to rooms/topics.
  - Use the same channel for chat messages, reactions, presence, and "typing…" indicators.
You can even mix patterns: WebSockets for “hard real-time” collaborative surfaces, SSE or polling for less critical telemetry.
Conclusion
Real-time is no longer exotic; it’s a baseline expectation. The real engineering work is choosing the right real-time primitive given your latency target, infra constraints, and team skill set.
- Short polling is the blunt hammer: simple, universally supported, but inefficient.
- Long polling is a better hammer when you need quicker updates but must stay in traditional HTTP land.
- SSE is an elegant server→client streaming option with excellent ergonomics for read-heavy streams.
- WebSockets unlock truly interactive, bi-directional experiences, at the cost of more complex infrastructure.
For your next design, start from SSE for one-way updates and WebSockets for two-way interaction, and fall back to long/short polling when your infra or environment forces you to stay purely in the request/response world. Once you’ve picked a pattern, invest in robust reconnect strategies, authentication, and backpressure handling—that’s where production real-time systems succeed or fail.