
A practical checklist for speeding up Postgres-backed apps: query plans, indexes, caching with Redis, and what to watch for when you run on serverless Postgres like Neon.
Database performance is one of the fastest ways to improve UX: when your app feels “slow”, it’s often just waiting on the DB. This checklist covers:
- Query plans (EXPLAIN), missing indexes, and high-cardinality filters
- Connection usage + pooling (especially on serverless)
- Redis caching for hot reads / derived data
- Safe pagination and avoiding deep OFFSET
- Monitoring: slow queries, p95 latency, cache hit rate
Serverless Postgres is great for elasticity, but you still need to be intentional about connection management and query efficiency. Treat DB work as product work: measure and iterate.
Last month, our user search feature was crawling at 2.5 seconds per query. Users were complaining, and I was losing sleep. The culprit? A single missing index and some lazy query patterns I'd written months ago and forgotten about.
After a weekend of profiling, I got it down to 180ms — a 92% improvement. Then, with caching, hot paths dropped to ~8ms. Here's exactly what I did, with real code and EXPLAIN outputs you can use in your own projects.
Start with the query plan. EXPLAIN ANALYZE actually runs the query and shows where the time goes:

```sql
-- Check query performance with EXPLAIN ANALYZE
EXPLAIN ANALYZE
SELECT u.id, u.name, COUNT(o.id) AS order_count
FROM users u
LEFT JOIN orders o ON o.user_id = u.id
WHERE u.created_at > '2024-01-01'
GROUP BY u.id, u.name
ORDER BY order_count DESC
LIMIT 20;

-- Example output (abridged):
-- Seq Scan on users u  (cost=0.00..1234.56 rows=5000)
-- If you see "Seq Scan" on large tables, you need an index!
```
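EXPLAIN tells you why a query is slow, but you also want to notice slow queries as they happen in the app. A minimal timing wrapper helps; this is my own sketch (`timed`, `runQuery`, and `slowMs` are illustrative names, not from any library), and it works with any async query function:

```typescript
// Wrap an async query function and log when it exceeds a threshold.
// The `log` parameter is injectable so it can be pointed at any logger.
export async function timed<T>(
  label: string,
  runQuery: () => Promise<T>,
  slowMs = 100,
  log: (msg: string) => void = console.warn,
): Promise<T> {
  const start = Date.now();
  try {
    return await runQuery();
  } finally {
    const elapsed = Date.now() - start;
    if (elapsed >= slowMs) {
      log(`[slow query] ${label} took ${elapsed}ms`);
    }
  }
}
```

Wrapping calls like `timed('getUser', () => getUser(id))` gives you application-side latency numbers to compare against what Postgres reports.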
A Seq Scan over a big table is the classic symptom of a missing index. A few index patterns cover most cases:

```sql
-- B-tree index for timestamp filtering (most common)
CREATE INDEX idx_users_created_at ON users(created_at);

-- Composite index for JOIN + WHERE conditions
CREATE INDEX idx_orders_user_created ON orders(user_id, created_at);

-- Partial index for active users only (saves space)
CREATE INDEX idx_active_users ON users(id) WHERE status = 'active';

-- GIN index for full-text search
CREATE INDEX idx_users_name_gin ON users USING GIN (to_tsvector('english', name));

-- Check index usage
SELECT schemaname, relname, indexrelname, idx_scan, idx_tup_read
FROM pg_stat_user_indexes
ORDER BY idx_scan DESC;
```
Next, connections. Pooling matters on traditional servers, and serverless needs a different approach entirely:

```typescript
import { Pool } from 'pg';
import { neon, neonConfig } from '@neondatabase/serverless';

// Traditional connection pooling (long-lived Node.js servers)
export const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 20, // maximum pool size
  idleTimeoutMillis: 30000,
  connectionTimeoutMillis: 2000,
});

// Neon serverless (no persistent connections)
neonConfig.fetchConnectionCache = true;
export const sql = neon(process.env.DATABASE_URL!);

// Query with automatic connection management
export async function getUser(id: string) {
  // On serverless: Neon handles the connection via HTTP
  const result = await sql`SELECT * FROM users WHERE id = ${id}`;
  return result[0];
}

// Monitor connection pool health
pool.on('error', (err) => {
  console.error('Unexpected pool error:', err);
});

pool.on('connect', () => {
  console.log('New database connection established');
});
```
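One caveat with `max: 20`: a traffic burst (or serverless fan-out) can queue far more requests than the pool can serve, and connection timeouts cascade. A small client-side semaphore caps in-flight queries before they ever reach the pool. This is a sketch under my own naming; `createLimiter` is not a pg or Neon API:

```typescript
// Minimal promise semaphore: at most `max` tasks run at once,
// the rest wait their turn in FIFO order.
export function createLimiter(max: number) {
  let active = 0;
  const waiting: Array<() => void> = [];

  const release = () => {
    const next = waiting.shift();
    if (next) {
      next(); // hand the slot directly to the next waiter
    } else {
      active--;
    }
  };

  return async function limit<T>(task: () => Promise<T>): Promise<T> {
    if (active >= max) {
      // Saturated: park this task until a slot is handed over
      await new Promise<void>((resolve) => waiting.push(resolve));
    } else {
      active++;
    }
    try {
      return await task();
    } finally {
      release();
    }
  };
}
```

Setting the limiter slightly below the pool's `max` (say 18 of 20) keeps a couple of connections free for background jobs and health checks.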
For hot reads and derived data, put Redis in front of Postgres with the cache-aside pattern (try the cache, fall back to the DB, invalidate on writes):

```typescript
import { Redis } from '@upstash/redis';

const redis = new Redis({
  url: process.env.UPSTASH_REDIS_REST_URL!,
  token: process.env.UPSTASH_REDIS_REST_TOKEN!,
});

// Cache-aside pattern with TTL
export async function getCachedUser(userId: string) {
  const cacheKey = `user:${userId}`;

  // Try cache first
  const cached = await redis.get<User>(cacheKey);
  if (cached) {
    console.log('Cache HIT:', cacheKey);
    return cached;
  }

  // Cache MISS: fetch from database
  console.log('Cache MISS:', cacheKey);
  const user = await sql`SELECT * FROM users WHERE id = ${userId}`;

  // Store in cache with a 5-minute TTL
  await redis.setex(cacheKey, 300, JSON.stringify(user[0]));

  return user[0];
}

// Invalidate cache on write
export async function updateUser(userId: string, data: Partial<User>) {
  await sql`UPDATE users SET ${sql(data)} WHERE id = ${userId}`;
  await redis.del(`user:${userId}`); // Invalidate cache
}

// Cache expensive aggregations
export async function getDashboardStats() {
  const cacheKey = 'dashboard:stats';
  const cached = await redis.get(cacheKey);
  if (cached) return cached;

  const stats = await sql`
    SELECT
      COUNT(*) AS total_users,
      COUNT(*) FILTER (WHERE created_at > NOW() - INTERVAL '7 days') AS new_users,
      AVG(order_count) AS avg_orders
    FROM users
  `;

  await redis.setex(cacheKey, 60, JSON.stringify(stats)); // 1-minute TTL
  return stats;
}
```
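A caveat with cache-aside: when a hot key expires, every concurrent request misses at once and they all hit Postgres together (a cache stampede). Coalescing in-flight fetches per key keeps that to one query per process. A sketch; `coalesce` is my own helper, not part of @upstash/redis:

```typescript
// Coalesce concurrent cache misses for the same key so only one
// fetch hits the database; the rest share its promise. In-process
// only: each server or serverless instance coalesces independently.
const inFlightFetches = new Map<string, Promise<unknown>>();

export function coalesce<T>(key: string, fetchFn: () => Promise<T>): Promise<T> {
  const existing = inFlightFetches.get(key);
  if (existing) return existing as Promise<T>;

  // Clean up once the fetch settles, whether it resolved or rejected
  const promise = fetchFn().finally(() => inFlightFetches.delete(key));
  inFlightFetches.set(key, promise);
  return promise;
}
```

In `getCachedUser`, the miss branch would become `coalesce(cacheKey, () => fetchAndCacheUser(userId))`, where `fetchAndCacheUser` is the existing DB read plus `setex`.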
Pagination is a quieter killer. OFFSET looks harmless, but Postgres still has to scan and discard every skipped row:

```typescript
// ❌ BAD: OFFSET pagination (slow on large datasets)
export async function getBadPagination(page: number, limit: number) {
  const offset = page * limit;
  // Gets slower as the offset increases!
  return await sql`
    SELECT * FROM posts
    ORDER BY created_at DESC
    LIMIT ${limit} OFFSET ${offset}
  `;
}

// ✅ GOOD: Cursor-based pagination (constant performance)
export async function getCursorPagination(cursor?: string, limit: number = 20) {
  if (cursor) {
    // Fetch posts older than the cursor
    return await sql`
      SELECT id, title, created_at
      FROM posts
      WHERE created_at < ${cursor}
      ORDER BY created_at DESC
      LIMIT ${limit}
    `;
  } else {
    // First page
    return await sql`
      SELECT id, title, created_at
      FROM posts
      ORDER BY created_at DESC
      LIMIT ${limit}
    `;
  }
}

// Client usage:
const firstPage = await getCursorPagination(undefined, 20);
const lastPost = firstPage[firstPage.length - 1];
const secondPage = await getCursorPagination(lastPost.created_at, 20);
```
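One caveat with a raw `created_at` cursor: if two posts share a timestamp, rows can be skipped or duplicated across pages. A compound cursor over `(created_at, id)`, passed to clients as an opaque token, fixes that; the page query then uses a row-wise comparison, `WHERE (created_at, id) < (cursorCreatedAt, cursorId)`, so ties are broken deterministically by id. The encoding below is my own sketch, not from the article:

```typescript
// Opaque keyset cursor over (created_at, id), encoded as base64url
// so it is URL-safe and clients cannot rely on its internals.
export function encodeCursor(createdAt: string, id: string): string {
  return Buffer.from(JSON.stringify([createdAt, id])).toString('base64url');
}

export function decodeCursor(token: string): { createdAt: string; id: string } {
  const [createdAt, id] = JSON.parse(
    Buffer.from(token, 'base64url').toString('utf8'),
  ) as [string, string];
  return { createdAt, id };
}
```

The query side also needs `ORDER BY created_at DESC, id DESC` so the sort order matches the cursor comparison.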
Finally, make slowness visible. A few queries cover slow statements, buffer cache hit rate, sequential-scan hotspots, and connection counts:

```sql
-- Enable pg_stat_statements extension (if not enabled)
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Find the slowest queries by mean execution time
SELECT
  round(total_exec_time::numeric, 2) AS total_time_ms,
  calls,
  round(mean_exec_time::numeric, 2) AS avg_time_ms,
  round(stddev_exec_time::numeric, 2) AS stddev_time_ms,
  query
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;

-- Check buffer cache hit ratio (should be > 0.99)
SELECT
  sum(heap_blks_read) AS heap_read,
  sum(heap_blks_hit) AS heap_hit,
  round(
    sum(heap_blks_hit)::numeric / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0),
    3
  ) AS cache_hit_ratio
FROM pg_statio_user_tables;

-- Find tables that may be missing indexes (heavy sequential scans)
SELECT
  schemaname,
  relname,
  seq_scan,
  seq_tup_read,
  idx_scan,
  seq_tup_read / seq_scan AS avg_seq_tup_read
FROM pg_stat_user_tables
WHERE seq_scan > 0
ORDER BY seq_tup_read DESC
LIMIT 10;

-- Monitor connection count
SELECT
  count(*) AS total_connections,
  count(*) FILTER (WHERE state = 'active') AS active,
  count(*) FILTER (WHERE state = 'idle') AS idle
FROM pg_stat_activity;
```
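The checklist mentions p95 latency for a reason: averages hide outliers, so `mean_exec_time` above is only a starting point. Given raw latency samples from your application logs, a percentile is easy to compute with the nearest-rank method (a sketch):

```typescript
// Nearest-rank percentile: sort the samples and take the value at
// rank ceil(p/100 * n), 1-indexed. percentile(samples, 95) is the
// latency that 95% of requests completed in or under.
export function percentile(samplesMs: number[], p: number): number {
  if (samplesMs.length === 0) throw new Error('no samples');
  const sorted = [...samplesMs].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}
```

Tracking p95 before and after each index or caching change is the honest way to verify improvements like the 2.5s-to-180ms fix above.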