pg_stat_statements is an extension that ships with PostgreSQL and lets you keep an eye on what queries are actually happening in your database and how they're behaving. Basically, it's like a quiet but observant helper that logs every move: which SQL queries were run, how long they took, how often they were executed, and how much load they put on the system.
Why do you need this? First, to find problematic queries. Sometimes your database slows down not because of one villain, but because of a bunch of heavy queries that run way too often. Second, the stats help you see which queries are eating up your resources: CPU, memory, disk. Plus, you can check whether your indexes are working the way you planned: maybe some of them aren't used at all, or maybe you need more.
pg_stat_statements lets you stop guessing and see real numbers—so you can make decisions and optimize based on facts.
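One quick note before we dive in: the extension ships with PostgreSQL, but it isn't active by default. A minimal setup sketch looks like this (the first line goes in postgresql.conf and requires a server restart):

-- In postgresql.conf (requires a server restart):
-- shared_preload_libraries = 'pg_stat_statements'

-- Then, in the database you want to monitor:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;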
How do you find slow queries?
Now it gets interesting! Using the pg_stat_statements view, we can look for queries that take a long time or put a heavy load on the server.
The main idea:
Each row in the pg_stat_statements view shows stats for one query. Queries are grouped by their normalized text (the query field), and for each one you get metrics like these:
- total_time: total execution time for the query, in milliseconds.
- calls: how many times the query was run.
- mean_time: average execution time (total_time / calls).
- rows: total number of rows the query returned or affected, summed across all calls.

Heads-up: starting with PostgreSQL 13, the timing columns are named total_exec_time and mean_exec_time. The examples below use the older names; swap them in if you're on a newer version.
Simple analysis example
Let's try to find the slowest queries by average execution time:
SELECT
    query,
    mean_time,
    calls,
    rows
FROM pg_stat_statements
ORDER BY mean_time DESC
LIMIT 5;
This query will show you the top 5 queries with the longest average run time. Pay attention to the mean_time field: if you see values over 500-1000 milliseconds, that's a sign those queries need some love.
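If you'd rather catch everything above a threshold instead of a fixed top 5, here's a sketch; the 500 ms cutoff is just an example, so tune it to your workload:

SELECT
    query,
    mean_time,
    calls
FROM pg_stat_statements
WHERE mean_time > 500  -- threshold in milliseconds, pick your own
ORDER BY mean_time DESC;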
Example of slow query analysis
Let's break down an example. Here's what you might get from the previous query:
| query | mean_time (ms) | calls | rows |
|---|---|---|---|
| SELECT * FROM orders WHERE status = 'new'; | 1234.56 | 10 | 10000 |
| SELECT * FROM products | 755.12 | 5000 | 100 |
| SELECT * FROM customers WHERE id = $1 | 543.21 | 1000 | 1 |
What do we see?
The query on the orders table: it runs pretty rarely (just 10 calls), but it has retrieved 10,000 rows in total, about 1,000 per call. The table is probably huge, and the query most likely isn't using an index.
The query on the products table: it's called thousands of times, maybe in a loop in your app. Each call is moderately slow, but the sheer frequency is the real problem: 755.12 ms times 5,000 calls is over an hour of cumulative execution time.
The query on the customers table: 543 ms per call may look tame next to the others, but that's still a lot for a single-row lookup by id, and at 1,000 calls it adds up to roughly nine minutes.
Optimizing slow queries
Now that we've found the problematic queries, let's check out their execution plan with EXPLAIN ANALYZE. For example, for the query to the orders table:
EXPLAIN ANALYZE
SELECT * FROM orders WHERE status = 'new';
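The output will look something like this (the numbers here are made up for illustration; yours will differ):

Seq Scan on orders  (cost=0.00..2345.00 rows=10000 width=64) (actual time=0.030..120.500 rows=10000 loops=1)
  Filter: (status = 'new'::text)
  Rows Removed by Filter: 90000
Planning Time: 0.080 ms
Execution Time: 123.456 ms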
What can we see?
Seq Scan: if the query is doing a sequential scan, you should add an index:
CREATE INDEX idx_orders_status ON orders (status);
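Once the index exists, rerunning the same EXPLAIN ANALYZE should show the planner picking it up. Illustrative output (again, your costs and timings will vary):

Bitmap Heap Scan on orders  (cost=115.00..980.00 rows=10000 width=64) (actual time=1.200..15.300 rows=10000 loops=1)
  Recheck Cond: (status = 'new'::text)
  ->  Bitmap Index Scan on idx_orders_status  (cost=0.00..112.50 rows=10000 width=0) (actual time=0.900..0.900 rows=10000 loops=1)
        Index Cond: (status = 'new'::text)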
Filtering issues: if the query is pulling too many rows, rethink the query itself. Maybe you need more conditions or to limit the results:
SELECT * FROM orders WHERE status = 'new' LIMIT 100;
Showing execution time stats
Sometimes problematic queries aren't so obvious: a query can look cheap per call but run constantly, or hide inside functions and subqueries. In those cases, it's handy to sort by the total_time column:
SELECT
    query,
    total_time,
    calls,
    mean_time
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;
This query will show you the most "expensive" queries by total execution time.
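To put those totals in perspective, you can also compute each query's share of the whole load. A sketch using a window function (same column-name caveat on PostgreSQL 13+):

SELECT
    query,
    total_time,
    round((100 * total_time / sum(total_time) OVER ())::numeric, 2) AS pct_of_total
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;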
Optimizing indexing
Slow queries are often caused by missing indexes. pg_stat_statements won't tell you outright that an index is missing, but it points you at the suspects: if you see a lot of slow queries filtering on the same field (like status), add the right index:
CREATE INDEX idx_orders_status ON orders (status);
After that, check the query performance again with EXPLAIN ANALYZE.
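To make before/after comparisons clean, you can reset the accumulated stats, let the workload run for a bit, and then check the block I/O counters the view also tracks (these column names are the same across versions):

-- Clear accumulated stats so the post-index numbers start fresh
SELECT pg_stat_statements_reset();

-- Later: see which queries hit the disk the most
SELECT
    query,
    calls,
    shared_blks_read,  -- blocks read from disk
    shared_blks_hit    -- blocks served from shared buffers (cache)
FROM pg_stat_statements
ORDER BY shared_blks_read DESC
LIMIT 10;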
By using pg_stat_statements, you can effectively keep tabs on query performance, find bottlenecks, and make your database run smoother. Remember, the sooner you start analyzing your queries, the easier it'll be to optimize your whole system.