undici v7.18.2: the decompress fix you should ship this week

Jack Pauley · January 14, 2026

I’ve watched a single “harmless” gzip response pin CPUs and creep memory until a Node pod face-plants.

undici v7.18.2 adds a guardrail in its decompression path by limiting the HTTP Content-Encoding chain to 5 entries, which reduces the blast radius from hostile or broken servers.

What actually changed in v7.18.2

This change targets one ugly failure mode: a server sends a response with a ridiculously long Content-Encoding header, and your client keeps stacking decompressors.

v7.18.2 stops that by rejecting responses when the encoding chain exceeds 5, instead of trying to “be helpful” and decode forever.

  • Content-Encoding chain cap (5): undici refuses responses that specify more than 5 encodings (for example: gzip,gzip,gzip,gzip,gzip,gzip).
  • Decompress path hardening: the decompress logic bails out early, which reduces CPU burn and memory churn from pathological responses.
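
If you want to see the new behavior for yourself, a throwaway repro is enough: a local server that claims six stacked gzip encodings, fetched with undici. This is a sketch assuming a patched client refuses the response; I'm not asserting the exact error it throws, and the payload and logging are mine.

  // Repro sketch, assuming a patched undici (>= 7.18.2): the server claims six
  // stacked gzip encodings, one more than the new cap of 5.
  const { createServer } = require('node:http');
  const { gzipSync } = require('node:zlib');
  const { fetch } = require('undici');

  // Gzip the payload six times so the body actually matches the header.
  let payload = Buffer.from('{"ok":true}');
  for (let i = 0; i < 6; i++) payload = gzipSync(payload);

  const server = createServer((req, res) => {
    res.writeHead(200, {
      'content-type': 'application/json',
      'content-encoding': 'gzip,gzip,gzip,gzip,gzip,gzip', // 6 entries
    });
    res.end(payload);
  });

  server.listen(0, async () => {
    const url = `http://127.0.0.1:${server.address().port}/`;
    try {
      const res = await fetch(url);
      await res.text(); // the refusal may only surface when the body is read
      console.log('decoded anyway (expected on older undici versions)');
    } catch (err) {
      console.log('client refused the response:', err.message);
    } finally {
      server.close();
    }
  });

On 7.18.1 and earlier, the same script will happily unwrap all six layers, which is exactly the work the cap avoids.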

Who should care, and who can probably wait a day

Upgrade fast if your service pulls data from APIs you do not control, scrapes pages, processes webhooks, or talks to partner systems with “creative” infrastructure.

If everything you call sits behind your own gateway and you already strip compression weirdness at the edge, you should still patch, but you can probably canary it instead of panic-shipping it.

I do not trust “known issues: none” on any security-adjacent release. I trust my graphs.

How I’d roll this out without breaking prod

Start by figuring out where undici hides in your fleet, because it shows up as a transitive dependency more often than teams expect.

Then patch, run a quick smoke test that hits a compressed endpoint, and canary before you light up every region.

  • Find it: run npm ls undici in each repo, and also check your built artifacts (container images) if your CI caches node_modules.
  • Upgrade it: pin undici to 7.18.2 (or newer), run npm install undici@7.18.2, commit the lockfile, rebuild, redeploy.
  • Verify it: after deploy, watch for sudden spikes in request failures that mention decoding/decompression, plus the usual CPU and memory charts.
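
For the verify step, the smoke test doesn't need to be fancy. Here's a sketch: point it at an endpoint you own that serves gzip responses (the URL below is a placeholder) and confirm a normal, single-encoding response still decodes after the upgrade.

  // Smoke-test sketch: the URL is a placeholder for a gzip-serving endpoint
  // you control. undici's fetch negotiates and decodes compression itself.
  const { fetch } = require('undici');

  async function smokeTest(url) {
    const res = await fetch(url);
    if (!res.ok) throw new Error(`unexpected status ${res.status}`);
    const body = await res.text(); // one gzip layer should decode exactly as before
    console.log(`decoded ${body.length} bytes from ${url}`);
  }

  smokeTest('https://api.example.com/health').catch((err) => {
    console.error('smoke test failed:', err);
    process.exit(1);
  });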

If you can’t upgrade today

So. Here’s the thing.

Sometimes you cannot touch dependencies on a Tuesday because a release train, a freeze, or a transitive pin blocks you. In most cases, you can still reduce risk by stripping or constraining Content-Encoding at your edge proxy for outbound calls, or by refusing compressed responses from untrusted sources.

  • Edge filtering: block suspicious Content-Encoding values (especially long comma-separated chains) before the response hits your Node process.
  • Resource guardrails: set tighter timeouts and memory limits on the workers that perform HTTP fetch and decompression.
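
If your edge can't inspect headers, you can apply the same idea in the application as a stopgap: look at Content-Encoding before consuming the body, since that's roughly when the decoding work happens. This is a sketch of my own wrapper, not an undici API; guardedFetch and the limit of 5 are illustrative.

  // Application-level guardrail sketch (not an undici API): refuse responses
  // with a long Content-Encoding chain before the body, and therefore the
  // decompressors, are ever consumed.
  const { fetch } = require('undici');

  const MAX_ENCODINGS = 5; // mirror the upstream cap

  async function guardedFetch(url, options) {
    const res = await fetch(url, options);
    const encodings = (res.headers.get('content-encoding') || '')
      .split(',')
      .map((value) => value.trim())
      .filter(Boolean);

    if (encodings.length > MAX_ENCODINGS) {
      await res.body?.cancel(); // drop the stream without decoding anything
      throw new Error(`refusing response with ${encodings.length} content-encodings`);
    }
    return res;
  }

  module.exports = { guardedFetch };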

Notes, links, and the one thing that needs checking

Read the upstream undici v7.18.2 release notes on GitHub and link the PR in your internal ticket so the next person can audit the change.

A CVE number has been floated for this release, but I can't confirm that mapping from the upstream release notes, so leave it out of your ticket unless your security team has a separate advisory. There’s probably a clean way to auto-test this in CI with a crafted response, but…
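
If you do go down that CI route, the rough shape I'd start with is below: a crafted response with a six-entry chain and a deliberately loose assertion that the client rejects it, since the exact error belongs to undici and may change.

  // CI sketch using node:test: a crafted six-entry Content-Encoding chain and a
  // deliberately loose assertion, since the exact error shape belongs to undici.
  const { test } = require('node:test');
  const assert = require('node:assert');
  const { createServer } = require('node:http');
  const { once } = require('node:events');
  const { gzipSync } = require('node:zlib');
  const { fetch } = require('undici');

  test('fetch refuses a Content-Encoding chain longer than 5', async () => {
    let body = Buffer.from('{}');
    for (let i = 0; i < 6; i++) body = gzipSync(body);

    const server = createServer((req, res) => {
      res.writeHead(200, { 'content-encoding': 'gzip,gzip,gzip,gzip,gzip,gzip' });
      res.end(body);
    });
    server.listen(0);
    await once(server, 'listening');

    try {
      const url = `http://127.0.0.1:${server.address().port}/`;
      await assert.rejects(async () => {
        const res = await fetch(url);
        await res.text(); // the refusal may only surface on body read
      });
    } finally {
      server.close();
    }
  });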