WebSocket Reference: Native API, Socket.IO, Nginx Config, Kubernetes Ingress & Scaling

WebSockets give you a full-duplex, persistent connection over a single TCP socket. The upgrade handshake is plain HTTP; everything after it is the WebSocket framing protocol (RFC 6455). Key decision: use the native ws module when you control both ends and want minimal overhead; use Socket.IO when you need rooms, namespaces, automatic reconnection, and a long-polling fallback for clients whose networks block WebSockets.
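The upgrade handshake can be checked by hand: the server's `Sec-WebSocket-Accept` header is derived from the client's `Sec-WebSocket-Key` by a fixed formula defined in RFC 6455. A minimal sketch:

```javascript
// Sec-WebSocket-Accept = base64(SHA-1(Sec-WebSocket-Key + fixed GUID)), per RFC 6455.
import { createHash } from "node:crypto";

const WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"; // constant defined by the RFC

function acceptKey(secWebSocketKey) {
  return createHash("sha1").update(secWebSocketKey + WS_GUID).digest("base64");
}

// Worked example from RFC 6455 §1.3:
console.log(acceptKey("dGhlIHNhbXBsZSBub25jZQ=="));
// → s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```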

1. Native WebSocket (Browser + Node ws)

WebSocket API in the browser, ws server in Node, and connection lifecycle
// Browser client (native WebSocket API):
const ws = new WebSocket("wss://api.example.com/ws");   // wss = TLS

ws.addEventListener("open", () => {
  ws.send(JSON.stringify({ type: "subscribe", channel: "prices" }));
});

ws.addEventListener("message", (event) => {
  const data = JSON.parse(event.data);
  console.log(data);
});

ws.addEventListener("close", (event) => {
  console.log("closed:", event.code, event.reason);
  // Reconnect with exponential backoff (assumes a `retries` counter and a
  // `reconnect()` helper defined elsewhere):
  setTimeout(() => reconnect(), Math.min(1000 * 2 ** retries++, 30000));
});

ws.addEventListener("error", (err) => {
  console.error("WebSocket error:", err);
  // 'close' event always fires after 'error' — reconnect there
});
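The backoff line above leans on a `retries` counter and `reconnect()` helper; a minimal self-contained sketch (wrapper name and structure are illustrative, not a library API):

```javascript
// Exponential backoff schedule: 1s, 2s, 4s, ... capped at 30s.
function backoffDelay(retries, base = 1000, cap = 30000) {
  return Math.min(base * 2 ** retries, cap);
}

// Hypothetical wrapper: reconnects on close, resets the counter once open.
function connectWithRetry(url, onOpen) {
  let retries = 0;
  function connect() {
    const ws = new WebSocket(url);
    ws.addEventListener("open", () => { retries = 0; onOpen(ws); });
    ws.addEventListener("close", () => {
      setTimeout(connect, backoffDelay(retries++));
    });
  }
  connect();
}
```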

// Node.js server (npm install ws):
import { WebSocketServer, WebSocket } from "ws";   // WebSocket needed for readyState constants

const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (ws, req) => {
  const clientIP = req.socket.remoteAddress;
  console.log("client connected:", clientIP);

  ws.on("message", (data) => {
    let msg;
    try {
      msg = JSON.parse(data.toString());
    } catch {
      return ws.close(1003, "invalid JSON");   // 1003 = unsupported data
    }
    // Broadcast to all connected clients:
    wss.clients.forEach((client) => {
      if (client.readyState === WebSocket.OPEN) {
        client.send(JSON.stringify({ ...msg, from: clientIP }));
      }
    });
  });

  ws.on("close", (code, reason) => {
    console.log("disconnected:", code, reason.toString());
  });

  // Heartbeat ping/pong (detect and drop dead connections):
  ws.isAlive = true;
  ws.on("pong", () => { ws.isAlive = true; });
  const interval = setInterval(() => {
    if (!ws.isAlive) return ws.terminate();   // no pong since last ping
    ws.isAlive = false;
    ws.ping();
  }, 30000);
  ws.on("close", () => clearInterval(interval));
});
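Since native ws has no built-in auth or throttling, servers often also cap simultaneous connections per client. A sketch of per-IP accounting (the limit and helper names are illustrative, not part of ws):

```javascript
// Cap simultaneous connections per IP (threshold is an illustrative choice):
const MAX_PER_IP = 10;
const connectionsPerIp = new Map();

function tryAccept(ip) {
  const count = connectionsPerIp.get(ip) ?? 0;
  if (count >= MAX_PER_IP) return false;     // reject: too many open sockets
  connectionsPerIp.set(ip, count + 1);
  return true;
}

function release(ip) {
  const count = (connectionsPerIp.get(ip) ?? 1) - 1;
  if (count > 0) connectionsPerIp.set(ip, count);
  else connectionsPerIp.delete(ip);
}

// In the connection handler: if (!tryAccept(clientIP)) return ws.close(1013, "try later");
// (1013 = "try again later") and call release(clientIP) in the 'close' handler.
```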

2. Socket.IO (Rooms, Namespaces, Auth)

Socket.IO server/client, rooms, namespaces, and auth middleware
// Server (npm install socket.io):
import { createServer } from "http";
import { Server } from "socket.io";
import express from "express";

const app = express();
const httpServer = createServer(app);
const io = new Server(httpServer, {
  cors: { origin: "https://app.example.com", credentials: true },
});

// Auth middleware (runs before connection):
io.use((socket, next) => {
  const token = socket.handshake.auth.token;
  const user = verifyToken(token);
  if (!user) return next(new Error("Unauthorized"));
  socket.data.user = user;   // attach to socket
  next();
});

io.on("connection", (socket) => {
  const { user } = socket.data;

  // Join a room (user-specific, chat channel, etc.):
  socket.join(`user:${user.id}`);
  socket.join("room:lobby");

  socket.on("send-message", async ({ roomId, text }) => {
    const message = await saveMessage(roomId, user.id, text);
    // Emit to everyone in the room except sender:
    socket.to(`room:${roomId}`).emit("new-message", message);
    // Emit to sender too:
    socket.emit("new-message", message);
  });

  socket.on("disconnect", (reason) => {
    console.log("disconnected:", reason);
    io.to("room:lobby").emit("user-left", { userId: user.id });
  });

  // Send only to one user (across multiple sockets/tabs), e.g.:
  // io.to(`user:${user.id}`).emit("notification", payload);
});

httpServer.listen(3000);

// Client (npm install socket.io-client):
import { io } from "socket.io-client";

const socket = io("https://api.example.com", {
  auth: { token: localStorage.getItem("token") },
  reconnectionAttempts: 5,
  reconnectionDelay: 1000,
});

socket.on("connect", () => console.log("connected:", socket.id));
socket.on("connect_error", (err) => console.error(err.message));
socket.on("new-message", (msg) => appendToChat(msg));
socket.emit("send-message", { roomId: "general", text: "Hello" });

3. Nginx Proxy Configuration

Nginx config for WebSocket upgrade headers and load balancing sticky sessions
# WebSocket upgrade requires these two headers — missing either breaks it:
map $http_upgrade $connection_upgrade {
    default upgrade;
    ""      close;
}

server {
    listen 443 ssl;
    server_name api.example.com;

    location /ws {
        proxy_pass http://localhost:8080;
        proxy_http_version 1.1;                          # required for WebSocket
        proxy_set_header Upgrade $http_upgrade;          # forward the Upgrade header
        proxy_set_header Connection $connection_upgrade; # "upgrade" or "close"
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;

        proxy_read_timeout 3600s;    # keep connection alive (default 60s kills WS!)
        proxy_send_timeout 3600s;
    }

    location / {
        proxy_pass http://localhost:3000;
    }
}

# Load balancing with sticky sessions (ip_hash — same client → same upstream):
upstream websocket_servers {
    ip_hash;                          # sticky: same IP always hits same server
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
}

# Or use least_conn for more even distribution; then rooms must be shared
# across servers (e.g. via the Socket.IO Redis adapter, shown in section 4):
upstream websocket_servers {
    least_conn;
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
}

4. Kubernetes Ingress for WebSockets

NGINX Ingress Controller annotations for WebSocket support
# WebSocket via NGINX Ingress Controller (nginx.ingress.kubernetes.io annotations):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ws-ingress
  namespace: production
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.ingress.kubernetes.io/upstream-hash-by: "$remote_addr"  # sticky
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /ws
            pathType: Prefix
            backend:
              service:
                name: websocket-service
                port:
                  number: 8080

# WebSocket + Socket.IO path (Socket.IO uses /socket.io/ for polling fallback):
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.ingress.kubernetes.io/upstream-hash-by: "$http_x_forwarded_for"

# Socket.IO needs long-polling support too — the /socket.io/ prefix handles both:
paths:
  - path: /socket.io
    pathType: Prefix
    ...
  - path: /
    pathType: Prefix
    ...

# For multi-replica WebSocket servers, use Redis adapter:
# npm install @socket.io/redis-adapter
import { createAdapter } from "@socket.io/redis-adapter";
import { createClient } from "redis";
const pubClient = createClient({ url: "redis://redis:6379" });
const subClient = pubClient.duplicate();
await Promise.all([pubClient.connect(), subClient.connect()]);
io.adapter(createAdapter(pubClient, subClient));
// Now io.to("room:X").emit() works across multiple pods

5. Production Patterns (Backpressure, Message Queuing, Scaling)

Handle slow clients, queue messages for offline users, and horizontal scaling
// Backpressure: don't send to client if buffer is full:
function safeSend(ws, data) {
  if (ws.readyState !== WebSocket.OPEN) return;
  if (ws.bufferedAmount > 65536) {       // 64KB threshold
    console.warn("client too slow, dropping message");
    return;
  }
  ws.send(data);
}

// Message queue for offline users (store, deliver on reconnect):
io.on("connection", async (socket) => {
  const { user } = socket.data;
  socket.join(`user:${user.id}`);

  // Deliver queued messages from Redis:
  const queued = await redis.lrange(`queue:${user.id}`, 0, -1);
  for (const msg of queued) {
    socket.emit("queued-message", JSON.parse(msg));
  }
  await redis.del(`queue:${user.id}`);
});

// When user is offline, queue instead of sending:
async function sendToUser(userId, event, data) {
  const sockets = await io.in(`user:${userId}`).fetchSockets();
  if (sockets.length > 0) {
    io.to(`user:${userId}`).emit(event, data);
  } else {
    await redis.rpush(`queue:${userId}`, JSON.stringify({ event, data }));
  }
}

// Structured message protocol (type + payload — avoid untyped events):
// shared/types.ts:
type WSMessage =
  | { type: "subscribe"; channel: string }
  | { type: "message"; roomId: string; text: string; timestamp: number }
  | { type: "error"; code: string; message: string };
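A matching runtime guard can reject malformed frames before they reach handlers. A sketch in plain JavaScript (the field checks are illustrative; extend one case per message type):

```javascript
// Validate an incoming frame against the WSMessage union; returns null on any mismatch.
function parseWSMessage(raw) {
  let msg;
  try { msg = JSON.parse(raw); } catch { return null; }
  switch (msg?.type) {
    case "subscribe":
      return typeof msg.channel === "string" ? msg : null;
    case "message":
      return typeof msg.roomId === "string" &&
             typeof msg.text === "string" &&
             typeof msg.timestamp === "number" ? msg : null;
    case "error":
      return typeof msg.code === "string" &&
             typeof msg.message === "string" ? msg : null;
    default:
      return null;   // unknown type or non-object payload
  }
}
```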

// Rate limit the HTTP handshake/polling endpoint (this throttles new
// connections, not messages on an already-open socket):
import rateLimit from "express-rate-limit";
app.use("/socket.io", rateLimit({ windowMs: 60000, max: 100 }));

