
PostgreSQL Performance in 2026: The Short, Useful Version


February 16, 2026 · 6 min read


Performance guide. JSONB, full-text search, and tuning. No magic. Mostly indexing and less guessing.

Start here: the checklist

I’ve watched teams “optimize Postgres” for weeks. Then they add one index. Latency drops. Anyway.

  • Use jsonb by default: Use json only for exact formatting or key order.
  • Index JSONB with GIN: Use jsonb_path_ops if you only run @> queries.
  • Index hot JSON paths: Add a btree expression index on (payload->>'customer_id').
  • Use generated tsvector: Keep search fields in sync without triggers.
  • Use GIN for FTS: Pick GiST only for heavy-write tables.
  • Run EXPLAIN (ANALYZE, BUFFERS): Fix sequential scans first.
  • Front it with PgBouncer: Stop drowning Postgres in connections.
  • Partition time-series tables: Drop old partitions, do not delete rows.
  • Watch autovacuum: Bloat ruins “fast enough” databases.
  • Test restores: Untested backups fail when you need them.

The rest is nuance. Moving on.

JSONB: how to not shoot yourself

This bit me once. We shipped a JSONB column. Then we queried five keys all day. We paid twice.

  • json vs jsonb: json stores text, jsonb stores parsed binary. jsonb reads faster, supports indexing, but costs more on write (10-15% overhead for parsing).
  • Operators you actually use: -> and ->> extract values. @> checks containment. ? checks key existence. Quick examples after this list.
  • Two GIN modes: jsonb_ops supports more operators. jsonb_path_ops makes smaller indexes (typically 30-40% smaller) for @>.
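
A minimal illustration of the json/jsonb difference and those operators. It assumes the same hypothetical events table (jsonb payload column) used in the index examples below; the keys are made up:

-- json keeps the input text verbatim; jsonb normalizes it (last duplicate key wins)
SELECT '{"a": 1, "a": 2}'::json  AS as_json,   -- {"a": 1, "a": 2}
       '{"a": 1, "a": 2}'::jsonb AS as_jsonb;  -- {"a": 2}

-- -> returns jsonb (chainable), ->> returns text
SELECT payload->'address'->>'city' AS city,
       payload->>'customer_id'     AS customer_id
FROM events;

-- @> containment: "does the payload contain this sub-document?" (what GIN accelerates)
SELECT count(*) FROM events WHERE payload @> '{"status": "failed"}';

-- ? key existence (supported by jsonb_ops, not jsonb_path_ops)
SELECT count(*) FROM events WHERE payload ? 'coupon_code';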

Here’s the pattern that actually works for high-throughput JSONB queries:

-- Step 1: GIN for containment queries (broad searches)
CREATE INDEX idx_events_payload ON events USING GIN (payload jsonb_path_ops);

-- Step 2: Expression index for the one field you filter on constantly
CREATE INDEX idx_events_customer ON events ((payload->>'customer_id'));

-- Step 3: Partial index if you only care about recent data
CREATE INDEX idx_events_recent ON events ((payload->>'customer_id'))
  WHERE created_at > NOW() - INTERVAL '90 days';

-- The partial index is usually 5-10x smaller. Scans finish faster.

If you filter on one JSON key all day, extract it into an expression index. Do not build a giant GIN and hope.

One trap: GIN indexes are expensive to update on every INSERT. If your write throughput matters, use fastupdate = on (the default) and accept slightly stale index entries. The pending list gets merged during autovacuum.
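
If the pending list grows faster than autovacuum merges it, you can tune or flush it per index. A minimal sketch against the idx_events_payload index from above; the 8MB cap is an arbitrary example:

-- Raise this index's pending-list cap (value in kB; the server-wide default is gin_pending_list_limit = 4MB)
ALTER INDEX idx_events_payload SET (gin_pending_list_limit = 8192);

-- Merge the pending list into the main index structure right now
SELECT gin_clean_pending_list('idx_events_payload'::regclass);

-- Or trade slower INSERTs for an always-current index
ALTER INDEX idx_events_payload SET (fastupdate = off);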

Full-text search: keep it boring


Most search needs stay simple. Title and body search. Basic ranking. Postgres handles that without Elasticsearch.

-- Generated column: auto-maintained, no trigger needed (PG 12+)
ALTER TABLE articles ADD COLUMN search_vector tsvector
  GENERATED ALWAYS AS (
    setweight(to_tsvector('english', coalesce(title, '')), 'A') ||
    setweight(to_tsvector('english', coalesce(body, '')), 'B')
  ) STORED;

-- GIN index on the search vector
CREATE INDEX idx_articles_search ON articles USING GIN (search_vector);

-- Query with ranking
SELECT title, ts_rank(search_vector, query) AS rank
FROM articles, to_tsquery('english', 'kubernetes & upgrade') AS query
WHERE search_vector @@ query
ORDER BY rank DESC LIMIT 20;

For typo tolerance, add pg_trgm:

CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX idx_articles_title_trgm ON articles USING GIN (title gin_trgm_ops);

-- Now fuzzy matching works:
SELECT title FROM articles WHERE title % 'kuberntes' ORDER BY similarity(title, 'kuberntes') DESC;

This handles 80% of search use cases. If you need facets at massive scale, you already know you need a dedicated search engine.

Query tuning: the only loop that matters

Do not tune blindly. Read the plan.

-- Always use ANALYZE and BUFFERS. Estimated costs lie.
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT) 
  SELECT * FROM orders WHERE customer_id = 42 AND status = 'pending';

-- What you're looking for:
--   Seq Scan on orders  (cost=0.00..35420.00 rows=1 width=248)
--     actual time=312.445..312.445 rows=1 loops=1
--     Buffers: shared read=15420
--                            ^^^^^ This is your problem. 15K blocks read.

-- Fix: CREATE INDEX idx_orders_customer_status ON orders (customer_id, status);
-- After: Index Scan, Buffers: shared hit=4. Done.

The three fixes that cover 90% of slow queries:

  1. Missing index: Sequential scan on a large table? Add a composite index on the WHERE clause columns.
  2. Stale statistics: If row estimates are wildly off (estimated: 1, actual: 50,000), run ANALYZE table_name;
  3. Bloated tables: If the buffer counts are high but the table is “small”, run SELECT pg_size_pretty(pg_table_size('orders')); and check whether it’s 10x what you expect. That’s bloat from dead tuples. Quick checks for fixes 2 and 3 are sketched below.
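
A minimal sketch of both checks, reusing the orders table from the EXPLAIN example above:

-- Fix 2: are the planner's statistics fresh?
SELECT relname, last_analyze, last_autoanalyze, n_mod_since_analyze
FROM pg_stat_user_tables
WHERE relname = 'orders';

ANALYZE orders;   -- usually takes seconds; refreshes the row estimates

-- Fix 3: is the table far bigger than its live rows justify?
SELECT pg_size_pretty(pg_table_size('orders')) AS table_size,
       n_live_tup,
       n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'orders';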

Ops knobs that change outcomes

Here’s the thing nobody mentions. Your database “feels slow” because ops got messy. These postgresql.conf changes typically give the biggest wins:

# Memory (set these first)
shared_buffers = '4GB'           # 25% of RAM for dedicated DB servers
effective_cache_size = '12GB'    # 75% of RAM (tells planner about OS cache)
work_mem = '64MB'                # Per-sort/hash operation. Start low, increase if sorts spill to disk.
maintenance_work_mem = '1GB'     # For VACUUM, CREATE INDEX. Can be aggressive.

# Autovacuum (the silent killer)
autovacuum_max_workers = 4
autovacuum_naptime = '30s'                  # Check more often
autovacuum_vacuum_scale_factor = 0.05       # Vacuum at 5% dead tuples (default 20% is too late)
autovacuum_analyze_scale_factor = 0.02      # Update stats at 2% change

# WAL (write-ahead log)
wal_buffers = '64MB'
checkpoint_completion_target = 0.9
max_wal_size = '4GB'

PgBouncer: Use transaction pooling for typical web traffic. Session pooling only if you need prepared statements. Set default_pool_size to 2-3x your CPU cores, not 100.
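
A minimal pgbouncer.ini sketch of that advice; the database name, host, and pool sizes are placeholders to adapt, not universal recommendations:

[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_port = 6432
pool_mode = transaction        ; session pooling only if you rely on prepared statements
default_pool_size = 20         ; roughly 2-3x the CPU cores on the Postgres host
max_client_conn = 1000         ; connections PgBouncer itself will accept from the app
reserve_pool_size = 5          ; small burst headroom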

Partitioning: Use RANGE on timestamps for metrics and events. The real win isn’t query speed; it’s dropping (or detaching) old partitions instead of running DELETE FROM. Deleting millions of rows causes bloat and autovacuum pressure. Dropping a partition is instant.
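
A minimal declarative-partitioning sketch; the metrics table and monthly ranges are hypothetical:

CREATE TABLE metrics (
  recorded_at timestamptz NOT NULL,
  payload     jsonb
) PARTITION BY RANGE (recorded_at);

-- One child table per month
CREATE TABLE metrics_2026_01 PARTITION OF metrics
  FOR VALUES FROM ('2026-01-01') TO ('2026-02-01');
CREATE TABLE metrics_2026_02 PARTITION OF metrics
  FOR VALUES FROM ('2026-02-01') TO ('2026-03-01');

-- Retention: detach first (instant, data still queryable as a standalone table), then drop
ALTER TABLE metrics DETACH PARTITION metrics_2026_01;
DROP TABLE metrics_2026_01;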

What’s new in PostgreSQL 17/18 that affects performance

PostgreSQL 18 (current) and 17 shipped several performance-relevant changes worth knowing:

  • Incremental backup (PG 17): pg_basebackup --incremental ships only changed blocks. Cuts backup time and storage by 60-80% for large databases.
  • SQL/JSON improvements (PG 17): JSON_TABLE() and the SQL/JSON query functions (JSON_EXISTS, JSON_QUERY, JSON_VALUE) let you query JSONB as relational rows in plain SQL. Subscripted updates like UPDATE orders SET data['status'] = '"shipped"' have been available since PG 14. Meaningful for JSONB-heavy workloads.
  • Faster VACUUM on large tables (PG 17): The new dead-tuple storage drops the old 1GB memory cap and uses far less memory, so big vacuums need fewer index passes and finish noticeably faster. (Parallel index vacuuming has existed since PG 13.)
  • Async I/O (PG 18): Asynchronous I/O and better prefetching for sequential scans, bitmap heap scans, and VACUUM. Benchmarks show 10-20% improvement on analytical queries hitting cold data.
  • MERGE enhancements (PG 17/18): MERGE (SQL-standard upsert) gained RETURNING and WHEN NOT MATCHED BY SOURCE in PG 17, reducing the need for custom PL/pgSQL functions. Minimal sketch below.
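
A minimal MERGE sketch; the orders and staged_orders tables and their columns are hypothetical:

MERGE INTO orders AS o
USING staged_orders AS s
  ON o.order_id = s.order_id
WHEN MATCHED AND s.status = 'cancelled' THEN
  DELETE
WHEN MATCHED THEN
  UPDATE SET status = s.status, updated_at = now()
WHEN NOT MATCHED THEN
  INSERT (order_id, status, updated_at)
  VALUES (s.order_id, s.status, now());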

PostgreSQL on Kubernetes: the short warning label

It works. It also fails in new ways.

  • Storage: Use low-latency block storage (NVMe-backed). Random I/O matters more than throughput. Test with pgbench on your actual storage class.
  • Resources: Set sane requests and limits. Avoid noisy neighbors. A good starting point: requests.memory = 75% of limits.memory to avoid OOM kills during vacuum spikes.
  • Anti-affinity: Split primary and replicas across nodes and availability zones. Use topologySpreadConstraints, not just pod anti-affinity.
  • Backups: Test restore. Schedule it. Measure it. If your restore takes longer than your RTO, you don’t have a backup; you have a liability.

Operator choice matters: CloudNativePG and CrunchyData PGO are the two most widely used production-grade options. Pick one and commit.

Monitoring with pg_stat_statements

You cannot tune what you do not measure. pg_stat_statements is the single most useful extension for finding slow queries in production. Enable it, let it collect for a day, then sort by total_exec_time.

-- Enable pg_stat_statements (add to postgresql.conf, then restart)
-- shared_preload_libraries = 'pg_stat_statements'
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Top 10 queries by total execution time
SELECT
  calls,
  round(total_exec_time::numeric, 2) AS total_ms,
  round(mean_exec_time::numeric, 2) AS avg_ms,
  round((100 * total_exec_time / sum(total_exec_time) OVER ())::numeric, 2) AS pct,
  substr(query, 1, 80) AS query_preview
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;

The pct column shows you which queries consume the most cumulative time. Fix the top 3 and you typically cut total database load by 30-50%. For deeper analysis, the PostgreSQL Wiki performance optimization guide covers query plan analysis, buffer cache tuning, and workload profiling in detail.
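
Once the worst offenders are fixed, reset the counters so the next measurement window reflects the new baseline:

-- Wipes accumulated statistics (superuser by default; EXECUTE can be granted to a monitoring role)
SELECT pg_stat_statements_reset();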

Practical: a full health check script

Here is a quick diagnostic script you can run against any PostgreSQL instance to spot common performance issues. It checks for bloated tables, missing indexes, and long-running queries — the three things that cause most production slowdowns.

-- Check for tables with high dead tuple ratios (bloat)
SELECT
  schemaname || '.' || relname AS table,
  n_dead_tup,
  n_live_tup,
  round(n_dead_tup::numeric / NULLIF(n_live_tup, 0) * 100, 1) AS dead_pct,
  last_autovacuum
FROM pg_stat_user_tables
WHERE n_dead_tup > 10000
ORDER BY dead_pct DESC
LIMIT 10;

-- Find tables missing indexes (sequential scans on large tables)
SELECT
  schemaname || '.' || relname AS table,
  seq_scan,
  idx_scan,
  pg_size_pretty(pg_relation_size(relid)) AS size
FROM pg_stat_user_tables
WHERE seq_scan > 1000 AND coalesce(idx_scan, 0) < 100  -- idx_scan is NULL for tables with no indexes at all
  AND pg_relation_size(relid) > 10485760  -- 10MB+
ORDER BY seq_scan DESC;

-- Active long-running queries (over 30 seconds)
SELECT
  pid,
  now() - query_start AS duration,
  state,
  substr(query, 1, 100) AS query
FROM pg_stat_activity
WHERE state != 'idle'
  AND query_start < now() - interval '30 seconds'
ORDER BY duration DESC;

Run this weekly, or better yet, wire it into your monitoring. The PostgreSQL source code is mirrored in the official PostgreSQL GitHub repository (development discussion happens on the pgsql-hackers mailing list), which is useful for understanding how the query planner and vacuum system evolve between releases.
