April 2026 · Web Engineering · 11 min read

Building a real-time dashboard with Server-Sent Events - and why WebSockets were overkill

My team needed a live dashboard showing CI/CD pipeline status across 40 repositories. I reached for WebSockets. Then I stopped, thought about it for five minutes, and chose something simpler.

The Problem

When I built the dev orchestrator for a German enterprise - a system coordinating CI/CD pipelines, code reviews, and deployments across 40+ repositories - I needed a dashboard. Not a pretty one. A functional one. Something that showed every developer on the team, in real time, what was building, what was failing, and what was waiting for review.

The existing approach was Slack notifications. Hundreds of them per day. Build started. Build failed. Build succeeded. PR opened. PR merged. Deployment started. Deployment complete. Nobody read them anymore. The signal-to-noise ratio had collapsed entirely.

I needed a single page where you could glance and know the state of everything. And it had to update live - no refreshing, no polling.

Why I Almost Used WebSockets

My first instinct was WebSockets. Full-duplex communication. Real-time. Battle-tested. Every tutorial about real-time web apps points you there.

But then I asked myself a question that changed the architecture: does the client ever need to send data back through this connection?

The answer was no. The dashboard was read-only. Data flowed in one direction: from the server to the browser. Developers watched the dashboard. They did not interact with it in real time. When they needed to trigger a build or approve a PR, they used different interfaces.

WebSockets are brilliant when you need bidirectional communication - chat apps, collaborative editors, multiplayer games. For a one-way data stream, they are overkill. They come with complexity: connection state management, heartbeats, reconnection logic, proxy configuration headaches, and load balancer issues that will make you question your career choices.

Server-Sent Events: The Underrated Protocol

Server-Sent Events (SSE) is HTTP. Plain, boring HTTP. The browser opens a request, and the server keeps the connection open, pushing events as they happen. That is it.
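On the wire, a stream is just lines of text: each event is a handful of field: value lines followed by a blank line. The payloads below are invented for illustration, but the id, event, and data fields are the protocol itself:

```
id: 1042
event: build
data: {"repo":"billing-service","status":"failed"}

id: 1043
event: deployment
data: {"service":"api-gateway","env":"staging","progress":40}
```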

Here is what makes SSE perfect for dashboards:

  • Automatic reconnection. If the connection drops, the browser reconnects automatically. With WebSockets, you write that logic yourself.
  • Event IDs. The server can tag each event with an ID. When the browser reconnects, it sends the last ID it received. The server can replay missed events. Built into the protocol.
  • Works through proxies. SSE is just HTTP. Every proxy, load balancer, and CDN understands it. No WebSocket upgrade dance, no special configuration.
  • Simple server implementation. You write a loop that sends text. No framing protocol, no opcodes, no masking.

The tradeoff? SSE is one-way only. The client cannot send messages back over the SSE connection. For my use case, that was not a tradeoff - it was a feature.

The Architecture

The backend was a Laravel application that already managed the orchestrator. Adding SSE was surprisingly straightforward:

Event collector. Webhooks from Bitbucket, Jenkins, and Jira fed into a unified event queue. Every build status change, every PR update, every deployment event got normalized into a common format: timestamp, repository, event type, status, metadata.
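Something like this, though the exact field names are my reconstruction rather than the original schema:

```javascript
// Illustrative normalized event - field names are hypothetical.
const event = {
  timestamp: "2026-03-12T09:14:33Z",
  repository: "billing-service",
  type: "build",                     // build | pr | deployment
  status: "failed",                  // started | succeeded | failed ...
  metadata: { branch: "main", url: "https://jenkins.example/job/1042" },
};
```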

SSE endpoint. A single endpoint that streamed events. When a client connected, it received the current state snapshot first - all active builds, open PRs, running deployments. Then it received incremental updates as they happened.
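Here is a minimal sketch of that endpoint in Node, assuming Express, a hypothetical getSnapshot() helper, and an events emitter fed by the collector:

```javascript
import express from "express";
import { EventEmitter } from "node:events";

const app = express();
const events = new EventEmitter(); // fed by the event collector

// Hypothetical: current state of all builds, PRs, and deployments.
async function getSnapshot() {
  return { builds: [], prs: [], deployments: [] };
}

app.get("/events", async (req, res) => {
  res.writeHead(200, {
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
    Connection: "keep-alive",
    "X-Accel-Buffering": "no", // see the Nginx gotcha below
  });

  // State snapshot first...
  res.write(`event: snapshot\ndata: ${JSON.stringify(await getSnapshot())}\n\n`);

  // ...then incremental updates until the client disconnects.
  const forward = (evt) =>
    res.write(`id: ${evt.id}\nevent: ${evt.type}\ndata: ${JSON.stringify(evt)}\n\n`);
  events.on("update", forward);
  req.on("close", () => events.off("update", forward));
});

app.listen(3000);
```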

Event buffering. Events were buffered in Redis with a 5-minute TTL. When a client reconnected (using the Last-Event-ID header), the server replayed any events they missed. No data loss during brief disconnections.
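One way to implement that buffer, assuming ioredis and a sorted set scored by event ID (the key and helper names here are mine):

```javascript
import Redis from "ioredis";

const redis = new Redis();
const KEY = "dashboard:events";

// Collector side: buffer every event under its numeric ID.
async function bufferEvent(evt) {
  await redis.zadd(KEY, evt.id, JSON.stringify(evt));
  await redis.expire(KEY, 300); // 5-minute TTL, refreshed on each write
}

// Endpoint side: replay everything newer than the client's Last-Event-ID.
async function replayMissed(res, lastEventId) {
  const missed = await redis.zrangebyscore(KEY, `(${lastEventId}`, "+inf");
  for (const raw of missed) {
    const evt = JSON.parse(raw);
    res.write(`id: ${evt.id}\nevent: ${evt.type}\ndata: ${raw}\n\n`);
  }
}

// In the /events handler, before subscribing to live updates:
//   const lastId = req.headers["last-event-id"];
//   if (lastId) await replayMissed(res, Number(lastId));
```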

The frontend was Vue.js with a simple EventSource connection. The entire client-side real-time logic was about 30 lines of JavaScript. Compare that to the WebSocket boilerplate I would have written - connection management, heartbeat intervals, reconnection with exponential backoff, message parsing. SSE gave me all of that for free.
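For reference, here is roughly what those 30 lines boil down to (the Vue lifecycle wiring is omitted, and renderDashboard is a stand-in for the app's own update logic):

```javascript
const source = new EventSource("/events");

// Full state on every (re)connect - replaces whatever the UI shows.
source.addEventListener("snapshot", (e) => {
  renderDashboard(JSON.parse(e.data)); // hypothetical app function
});

// No retry or backoff code anywhere: the browser reconnects on its
// own and sends Last-Event-ID so the server can replay what we missed.
source.onerror = () => console.warn("stream interrupted; browser will retry");
```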

The Gotchas Nobody Warns You About

Connection limits. Browsers cap the number of simultaneous HTTP/1.1 connections to a single origin - typically at six. Open multiple SSE streams to the same origin and you will exhaust the limit and block every other request to that host. (HTTP/2 multiplexes many streams over one connection, which largely lifts the cap, but it is not something to design around.) Solution: one SSE connection that multiplexes all event types, and the client filters by event type.
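In practice that means the server tags each message with a named event type, and the client registers one listener per type on a single connection - a sketch:

```javascript
// One EventSource for everything; the client filters by event type.
const source = new EventSource("/events");

for (const type of ["build", "pr", "deployment"]) {
  source.addEventListener(type, (e) => {
    applyUpdate(type, JSON.parse(e.data)); // hypothetical app function
  });
}
```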

Nginx buffering. By default, Nginx buffers responses. This means your SSE events pile up and arrive in batches instead of streaming. You need the X-Accel-Buffering: no response header, or proxy_buffering off in the Nginx config, to switch it off. I lost two hours to this before finding the right configuration.
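The relevant directives look roughly like this (the upstream name is illustrative):

```nginx
location /events {
    proxy_pass http://sse-service:3000;   # illustrative upstream
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_buffering off;                  # stream events immediately
    proxy_cache off;
    proxy_read_timeout 1h;                # do not kill long-lived streams
}
```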

PHP and long-running connections. PHP was not designed for long-running connections. A traditional PHP-FPM setup ties up a worker for every connected client. With 30 developers watching the dashboard, that is 30 workers doing mostly nothing. I solved this with a dedicated lightweight process - a small Node.js service that handled SSE connections while the main Laravel app pushed events through Redis pub/sub. The PHP app stayed focused on business logic. The Node service stayed focused on streaming.
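The bridge is small. A sketch of the Node side, assuming ioredis and an illustrative channel name, feeding the events emitter from the endpoint sketched earlier; the Laravel side only has to publish:

```javascript
import Redis from "ioredis";

// Laravel side, for reference (PHP):
//   Redis::publish('pipeline-events', json_encode($event));

const sub = new Redis(); // a dedicated connection for subscribing

sub.subscribe("pipeline-events");
sub.on("message", (_channel, message) => {
  // Hand the event to the SSE endpoint, which fans it out to clients.
  events.emit("update", JSON.parse(message));
});
```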

Memory leaks in long connections. Any long-lived connection is a memory leak waiting to happen. I learned this the hard way when the SSE server crashed after three days with an out-of-memory error. The fix: periodic connection recycling. Every 30 minutes, the server sends a special event telling the client to reconnect. The client disconnects and reconnects, getting a fresh state snapshot. Clean, predictable, no memory growth.
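Per connection, that recycling is a timer in the handler plus one listener on the client - a sketch, with connect() standing in for whatever function opens the EventSource:

```javascript
// Server: inside the /events handler, schedule a recycle for this client.
const recycle = setTimeout(() => {
  res.write("event: reconnect\ndata: {}\n\n");
  res.end(); // the next connection starts from a fresh snapshot
}, 30 * 60 * 1000);
req.on("close", () => clearTimeout(recycle));

// Client: obey the signal instead of waiting for the browser's retry.
source.addEventListener("reconnect", () => {
  source.close();
  connect(); // hypothetical function that opens a new EventSource
});
```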

What the Dashboard Actually Showed

The layout was intentionally boring. Three columns:

Left: Builds. Every active CI pipeline, color-coded. Green for passing, red for failing, yellow for in-progress. Sorted by most recent activity. Click any build to see the log output in real time - also streamed via SSE from the Jenkins webhook data.

Center: Pull Requests. All open PRs across all repositories. Each showed the author, reviewers, approval status, and CI status. A PR with all checks green and two approvals glowed - it was ready to merge. PRs blocking other work were highlighted.

Right: Deployments. Staging and production deployment status for every service. Which version was where. Who deployed it. When. If a deployment was in progress, you saw the rollout percentage updating in real time.

The entire page fit on a TV mounted on the wall next to the team area. No scrolling needed. You looked up, you knew the state of everything.

The Unexpected Benefits

Fewer interruptions. Before the dashboard, developers would walk over to someone's desk: "Hey, is the staging deployment done?" "Did my build pass?" "Can you review my PR?" After the dashboard, they looked at the wall. Interruption frequency dropped noticeably within the first week.

Faster incident response. When a production deployment went red, everyone saw it simultaneously. There was no delay waiting for someone to notice a Slack message. The dashboard turned incident detection from a notification problem into a visibility problem - and visibility problems are easier to solve.

Cultural shift. The dashboard made work visible. Not in a surveillance way - nobody tracked who was committing what. But when a build had been red for two hours, the team naturally gravitated toward fixing it. Visible problems get solved faster than hidden ones.

Onboarding tool. New developers could look at the dashboard and understand the team's workflow within minutes. "That is the build pipeline. That is the review process. That is how deployments work." It was a living diagram of the development process.

SSE vs WebSockets: When to Use What

After building this, I have a simple decision framework:

Use SSE when:

  • Data flows primarily server-to-client
  • You need automatic reconnection and event replay
  • Your infrastructure is standard HTTP (proxies, load balancers, CDNs)
  • You want minimal client-side complexity
  • Events are text-based (SSE does not support binary)

Use WebSockets when:

  • You need bidirectional real-time communication
  • Binary data transfer is required
  • Client actions need sub-100ms server responses
  • You are building chat, collaboration, or gaming features

Most dashboards, notification systems, and live feeds are SSE territory. Most developers reach for WebSockets anyway because that is what the tutorials taught them. I know because I almost did the same thing.

The Code Was Simple. The Thinking Was Not.

The SSE endpoint was maybe 80 lines of code. The Vue component was 30 lines. The Redis pub/sub bridge was 50 lines. Total: about 160 lines for the entire real-time infrastructure.

But I spent two days thinking about the architecture before writing a single line. Which events matter? How do we handle reconnection? What happens when the event source is down? How do we prevent memory leaks? What about connection limits?

The simplicity of the final solution was not accidental. It was the result of eliminating complexity at every decision point. WebSockets? Too complex for one-way data. Polling? Too wasteful and not truly real-time. Long polling? Awkward and has its own edge cases. SSE? Just right.

Sometimes the best engineering decision is choosing the boring technology that fits your actual requirements - not the exciting one that solves problems you do not have.

The best real-time architecture is not the most powerful one. It is the one where the complexity matches the problem.
Igor Gawrys
AI Engineer & IT Consultant · Katowice, Poland