Friday, June 15, 2012

postgresql.conf: max_locks_per_transaction

PostgreSQL's postgresql.conf file has around 150 parameters which you don't need to care about 98% of the time (as opposed to the 50-or-so settings you do need to care about more often), except that every once in a while you run into a situation which requires you to learn about some obscure parameter.  That is, after all, why it's a changeable setting and not just hard-coded.  max_locks_per_transaction is one such setting.

The purpose of max_locks_per_transaction is to determine the size of the shared lock table in memory.  By default, it's set to 64, which means that Postgres is prepared to track up to (64 X number of allowed connections) locks -- the table is sized at server start from max_connections (plus max_prepared_transactions), not from how many sessions happen to be open.  For example, if you have it set at the default and max_connections is 100, you can have up to 6400 total locks held between all sessions; despite the name, no single transaction is limited to 64, as long as the total fits.   The reason to have a limit is to avoid dedicating shared memory to more locks than you'll ever need.
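You can check how many locks are currently held, per backend, by counting entries in the pg_locks system view.  A quick diagnostic query:

```sql
-- How many locks is each backend currently holding?
-- (pg_locks is the standard system view of the shared lock table)
SELECT pid, count(*) AS locks_held
FROM pg_locks
GROUP BY pid
ORDER BY locks_held DESC;
```

If the sum across all backends is approaching max_locks_per_transaction times max_connections, you're headed for trouble.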

Most of the time, for most users, the default is plenty.  But every once in a while, it's not:
2012-06-11 14:20:05.703 PDT,"processor","breakpad",17155,"[local]",4fd660cd.4303,2,"SELECT",2012-06-11 14:19:09 PDT,86/199551,0,ERROR,53200,"out of shared memory",,"You might need to increase max_locks_per_transaction.",,,,"select j.id, pj.uuid, 1, j.starteddatetime from jobs j right join priority_jobs_2849 pj on j.uuid = pj.uuid",,,""
The above helpful message is from the activity log.  Unfortunately, the error which the client gets is just "out of shared memory", which is not that helpful ("what do you mean 'out of shared memory'?  I have 4GB!"). 

The reason why the database above ran out of locks was that a few sessions were holding up to 1800 locks, most of them RowExclusiveLock. Given that a lock in Postgres is usually a lock on an object (like a table or part of a table) and not on a row, holding 1800 locks in one transaction is somewhat unusual.  Why so many locks?
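To see which lock modes dominate, and on which relations, you can break the pg_locks output down further.  Something like this (the join to pg_class is just to get readable relation names):

```sql
-- Which lock modes are being held, and on what?
SELECT l.mode, c.relname, count(*) AS n
FROM pg_locks l
JOIN pg_class c ON c.oid = l.relation
WHERE l.locktype = 'relation'
GROUP BY l.mode, c.relname
ORDER BY n DESC;
```

In the case above, this kind of query is what showed hundreds of RowExclusiveLocks spread across partitions and their indexes.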

Well, the database in question has three tables each of which has over a hundred partitions.  One frequent application activity was running an UPDATE against each of these partitioned tables with no partition condition in it, causing the UPDATE to check all partitions of each table.  This resulted in RowExclusiveLocks on each partition and each index on each partition ... 1800 locks in total.  Note that it needs this lock even though only one partition had rows which were actually updated; despite the name, it's a lock on the table or index, not on a specific row necessarily.  This lock prevents certain things, like dropping the table or index while the locking transaction is still running.
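To illustrate (with hypothetical table and column names, not the actual schema from the database above), here's the inheritance-style partitioning setup that was typical at the time, and the two shapes of UPDATE:

```sql
-- Hypothetical parent table with date-partitioned children
CREATE TABLE jobs (id int, uuid text, logdate date);
CREATE TABLE jobs_2012_06
    (CHECK (logdate >= '2012-06-01' AND logdate < '2012-07-01'))
    INHERITS (jobs);
-- ... and so on, for 100+ child tables

-- No condition on the partition key: the planner cannot exclude any
-- partition, so every child table AND every index on every child
-- takes a RowExclusiveLock, even if only one partition has matching rows.
UPDATE jobs SET uuid = upper(uuid) WHERE id = 17;

-- With the partition key in the WHERE clause, constraint exclusion
-- limits the plan (and thus the locks) to the matching partition(s).
UPDATE jobs SET uuid = upper(uuid)
WHERE id = 17
  AND logdate >= '2012-06-01' AND logdate < '2012-07-01';
```

So aside from raising the limit, adding the partition key to the UPDATE's WHERE clause is the other fix for this situation.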

So that's one case where you probably want to increase max_locks_per_transaction from its out-of-the-box value: if your database has several tables with many partitions each, or if you use table inheritance a lot for other purposes.  Especially since increasing it requires a restart, it's better to raise it proactively than to wait for the error.
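The change itself is a one-line edit (the value here is just an example, not a recommendation; size it to your partition counts):

```
# postgresql.conf -- takes effect only after a server restart
max_locks_per_transaction = 256    # default is 64
```

Remember that this multiplies against max_connections, so it increases the shared memory the server requests at startup.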

1 comment:

  1. Thank you Josh,
    this has been helping a lot.

    I was changing max_locks_per_transaction
    but it wasn't taking effect. After increasing kernel.shmmax in /etc/sysctl.d/30-postgresql-shm.conf, I was able to increase max_locks_per_transaction.
