This post explains how to tune the work_mem setting in PostgreSQL to improve the performance of slow queries that sort, join, or aggregate large sets of table rows.

First of all, let's look into one of the reasons why SQL queries in PostgreSQL may be slow even for indexed tables.

Queries against large sets of table rows may need a lot of memory to sort, join, aggregate, or subselect data. PostgreSQL uses the value of the work_mem setting as the limit on how much memory each query may use for each sort operation or hash table. The default value of work_mem is 4MB.
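You can check the value currently in effect for your session like this (the output shown assumes the default configuration):

SHOW work_mem;
-- 4MB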

If a sort or hash table needs more memory than work_mem permits, PostgreSQL will use temporary files on disk to perform the operation. Since disk I/O is much slower than memory I/O, such heavy queries may become increasingly slow as the number of rows in the dataset grows, even if the query uses indexes efficiently.

How can I tell if a query is slow because it actually uses temp files?

To find out, run EXPLAIN ANALYZE for that query against the production database server and look at its output (the query execution plan). If you see lines like Sort Method: external merge Disk: <XXXX>kB, then it is definitely happening.
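For example, a query that sorts more rows than fit into work_mem might produce a plan like this (the costs, timings, and disk size below are illustrative):

EXPLAIN ANALYZE SELECT * FROM users ORDER BY LOWER(display_name);

-- Sort  (cost=... rows=... width=...) (actual time=... rows=... loops=1)
--   Sort Key: (lower(display_name))
--   Sort Method: external merge  Disk: 10208kB
--   ->  Seq Scan on users  (...)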

A higher work_mem value may make such queries faster if it allows all the temporary data for the query to fit in memory.

How can I set work_mem in PostgreSQL?

Use the SET command to change the work_mem value. Please note that you can set it:

  • at the database server level, in postgresql.conf or via ALTER SYSTEM (it will take effect for all queries),
  • at the current session level,
  • for individual transactions (to fine tune memory usage for specific queries without affecting the users and connections that perform other queries).
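Here is a sketch of all three levels (the 64MB and 256MB values are arbitrary examples):

-- server level: applies to all sessions after the configuration is reloaded
ALTER SYSTEM SET work_mem = '64MB';
SELECT pg_reload_conf();

-- session level: applies until the session ends or work_mem is RESET
SET work_mem = '256MB';

-- transaction level: reverts automatically when the transaction ends
BEGIN;
SET LOCAL work_mem = '256MB';
COMMIT;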

This example shows how to allow a specific SQL query to use up to 256 MB of memory for sorting, and then resets work_mem for the current session back to the default value.

SET work_mem = '256MB';
SELECT * FROM users ORDER BY LOWER(display_name);
RESET work_mem;

This example sets work_mem for a single transaction; the value reverts automatically to its previous setting when the transaction ends. Note that SET LOCAL has no effect outside a transaction block, so it must be wrapped in BEGIN/COMMIT:

BEGIN;
SET LOCAL work_mem = '256MB';
SELECT * FROM users ORDER BY LOWER(display_name);
COMMIT;

You can learn more about the SET command in the PostgreSQL documentation.

What is the optimal work_mem value for a query?

It has to be higher than the <XXXX>kB value that you see in the EXPLAIN ANALYZE output.

As a rule of thumb, round that value up to the nearest megabyte and use it to set work_mem. Then rerun EXPLAIN ANALYZE to check whether the ...external merge Disk... message has disappeared. If it has, you've found an appropriate value. If not, increase the value by one megabyte and check the EXPLAIN ANALYZE output again. Keep increasing the value until ...external merge Disk... disappears.

The right value may sometimes be almost double the <XXXX>kB reported, because the in-memory representation of the data is less compact than the on-disk one.
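Once work_mem is large enough, the plan should show an in-memory sort instead of the external merge, e.g. (the numbers are illustrative):

EXPLAIN ANALYZE SELECT * FROM users ORDER BY LOWER(display_name);

-- Sort  (...)
--   Sort Key: (lower(display_name))
--   Sort Method: quicksort  Memory: 12040kB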

Please remember that PostgreSQL may use up to work_mem of memory for each sort operation or hash table in a query. In other words, if a query has four such operations, PostgreSQL may use four times work_mem. And that applies to every query running concurrently: for example, 100 concurrent sessions each running such a query at work_mem = '256MB' could, in the worst case, need 100 × 4 × 256MB = 100GB of memory. So increasing that setting's value too much may lead to "out of memory" errors on your database server. Use it carefully.