Thursday, April 07, 2011

One-Way Tuning

I've been noticing that many of the parameters in postgresql.conf only ever need to be adjusted in one direction from the default.

shared_buffers, temp_buffers, work_mem, maintenance_work_mem, checkpoint_segments, checkpoint_timeout, checkpoint_completion_target, join_collapse_limit, and from_collapse_limit only ever need to be adjusted upward from the default.
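
For illustration, the kind of upward adjustments I mean might look like this on a dedicated server with several GB of RAM (example values only, not recommendations):

    shared_buffers = 2GB                  # default 32MB
    work_mem = 16MB                       # default 1MB
    maintenance_work_mem = 256MB          # default 16MB
    checkpoint_segments = 32              # default 3
    checkpoint_completion_target = 0.9    # default 0.5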

bgwriter_delay, seq_page_cost, and random_page_cost only ever need to be adjusted downward from the default.
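
And the handful that only ever go down (again, example values only; the right numbers depend on the storage and on how much of the working set is cached):

    bgwriter_delay = 50ms        # default 200ms
    seq_page_cost = 0.5          # default 1.0
    random_page_cost = 2.0       # default 4.0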

Does this mean we should change some of the defaults?

9 comments:

  1. How about a calculator for a semi-good configuration? IIRC Greg Smith had a decent one.

  2. And what is wrong with bgwriter?

  3. The defaults are deliberately very conservative. Some years ago I managed to get in a change for a few slightly less conservative settings by having initdb see if they could go higher. But picking good defaults isn't easy.

  4. As someone who's new to PostgreSQL and recently had to install it (and then change some of the parameters you mention to get decent performance): yes, please. Please.

    For example, the default value for shared_buffers (32MB) is just so overly conservative, it's not funny. Also if the documentation says "[...] a reasonable starting value for shared_buffers is 25% of the memory in your system.", then why isn't *that* the default (instead of 32MB)?

  5. How about shipping example postgresql.conf files? That way, it's not an either-or proposition. They're plain text and not something like the Windows registry, so it shouldn't require special programming or anything. It's not unheard of for open-source servers to supply example config files. Just ship several covering common use cases:

    postgresql.conf (the current default)
    postgresql.conf.workstation_oltp
    postgresql.conf.bigiron_oltp
    postgresql.conf.bigiron_olap

    etc.

    Then users can just use the appropriate one as a jumping-off point for configuring their own, or even let this be selectable from the installer.
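
    For example, adopting one of those hypothetical sample files could be as simple as (paths purely illustrative):

        cp /usr/share/postgresql/postgresql.conf.bigiron_oltp $PGDATA/postgresql.conf
        pg_ctl restart -D $PGDATA    # settings like shared_buffers need a restart, not just a reload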

  6. What about a conf generator like pgtune?
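
    For reference, a pgtune run might look something like this (the flags are from memory, so treat them as illustrative and check pgtune --help):

        pgtune -i $PGDATA/postgresql.conf -o $PGDATA/postgresql.conf.pgtune -T OLTP -c 200
        # review the generated file, then swap it in and restart the server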

  7. I agree that parameters should be adjusted from their conservative defaults. Ideally the installer should set them according to the hardware resources available on the system and whether the workload is OLTP or warehouse (which should drive the work_mem setting).

  8. We can't raise a lot of the memory limits until Linux raises its ridiculously low SysV memory limits. Take it up with the Linux kernel guys.

    A lot of the other system parameters depend on available RAM. And determining available RAM in any automated fashion is an impossible problem.

    An interactive tuning script is really the best we can do.

  9. I figured out how to determine RAM size just fine on the platforms that care about it most. pgtune does that part fine. It does most of what's needed, actually. I just discovered it's now in recent Debian/Ubuntu! With all the extra users, now I've got enough of a foothold to start driving forward development again.

    I also have a script that spits out the necessary entries that go into sysctl.conf, so that the server can start with the higher settings. I'll pull that into pgtune as well. One more good rev of that program and it will solve 90% of the problems here (it's maybe hitting 60% of them in its current form).
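
    For context, those sysctl.conf entries are the SysV shared memory limits; an illustrative sketch (the numbers are examples for a machine with a few GB of RAM, not recommendations):

        # /etc/sysctl.conf
        kernel.shmmax = 4294967296    # largest single shared memory segment, in bytes (4GB here)
        kernel.shmall = 1048576       # total shared memory allowed, in 4kB pages (4GB here)

        # and total RAM itself is easy enough to read on Linux:
        grep MemTotal /proc/meminfo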
