I can't resist the opportunity to comment on the FSF's guidelines - apparently just published - for how to choose a license for your work. A story on this was posted on Slashdot this morning. The FSF's guidelines are a little more nuanced than one might expect; for example, they recommend contributing code to existing projects under the licenses those projects already use, rather than starting a giant war with their maintainers. And if you're including trivial test programs in your work, or your work is shorter than the text of the GPL itself, you might just want to put the work in the public domain. Very sensible!
But that's about as far as they're willing to go. For example, the section on contributing to existing projects suggests that, if you're making major enhancements to a non-GPL program, you might want to fork it and release your changes under the GPL. In other words, in the FSF's opinion, sometimes you should start a giant war with the maintainers. In the open source world, forks are a part of life, and people are free to choose the licenses they want, but this seems awfully cavalier to me. Forks are really damaging. Maintaining a large and unified developer community has value; having those people spend their time developing, rather than arguing about licensing, is a good thing.
Friday, May 27, 2011
Thursday, May 26, 2011
Performance Optimization
There have been several lively discussions this week on possible performance optimizations for PostgreSQL 9.2. PostgreSQL 9.1 is still in beta, and there's still plenty of bug-fixing going on there, but the number of people who need to be involved in that process - and who can be - is not as large as it was a month or two ago. So it's a good time to start thinking about development for PostgreSQL 9.2. Not a lot of code is being written yet, which is probably just right for where we are in the development cycle, but we're kicking the tires on various ideas and trying to figure out which projects are worth spending time on. Here are a few that I'm excited about right now.
Monday, May 23, 2011
PostgreSQL 9.2 Development: CommitFest Managers Needed
There's an old joke that project managers don't know how to do anything; they only know how to do it better. Of course, the project manager is the butt of the joke: how can you know how to do something better, if you don't know how to do it in the first place?
Wednesday, May 11, 2011
PostgreSQL 9.1 In Review
As the PostgreSQL 9.1 development cycle winds down (we're now in beta), I find myself reflecting on what we got done this cycle, and what we didn't. Just over a year ago, I published a blog post entitled Big Ideas, basically asking for feedback on what we should attempt to tackle in PostgreSQL 9.1. The feedback was overwhelming, and led to a follow-on blog post summarizing the most-requested features. Here's the short version: of the 10 or 11 most-requested features, we implemented (drum roll, please) one. Credit for granular collation support goes to Peter Eisentraut, with much additional hackery by Tom Lane.
You can find a second list of possible 9.1 features in the minutes of the PGCon 2010 Developer Meeting. Of the 30 features listed there, we got, by my count, 12.5, or more than 40%. While even that may seem like a somewhat small percentage, a dozen or so major enhancements in a single release cycle is nothing to sneeze at. And certainly, if you were a feature longing to be implemented, you'd much rather be on the second list than the first.
Thursday, May 05, 2011
shared_buffers on 32-bit systems
Every once in a while, I read something that's so obvious after the fact that I wonder why I didn't think of it myself. A recent thread on pgsql-performance had that effect. The PostgreSQL documentation gives some rough guidelines for tuning shared_buffers, recommending (on UNIX-like systems) 25% of system memory, up to a maximum of 8GB. There are similar guidelines on the PostgreSQL wiki, in the article Tuning Your PostgreSQL Server.
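For concreteness, that rule of thumb boils down to one line of arithmetic. Here's a minimal sketch of it in Python - just an illustration of the 25%-of-RAM-capped-at-8GB heuristic, not anything PostgreSQL computes for you, and the function name is mine:

    GB = 1024 ** 3

    def suggested_shared_buffers(system_ram_bytes):
        """Starting point for shared_buffers, per the guideline above:
        25% of system memory, capped at 8GB."""
        return min(system_ram_bytes // 4, 8 * GB)

    # A 64GB machine hits the 8GB cap; a 16GB machine starts at 4GB.
    print(suggested_shared_buffers(64 * GB) // GB)   # 8
    print(suggested_shared_buffers(16 * GB) // GB)   # 4

Of course, this is only a starting point; you still set the value by hand in postgresql.conf and adjust it for your workload.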
What became obvious to me in reading the thread - and probably should have been obvious to me all along - is that this advice only makes sense if you're running a 64-bit build of PostgreSQL, because, on a 32-bit build, each process is limited to 4GB of address space, of which (at least on Linux) 1GB is reserved for the kernel. That means that no matter how much physical memory the machine has, each PostgreSQL backend will be able to address at most 3GB of data. That includes (most significantly) shared_buffers, but also the process executable text, stack, and heap, including backend-local memory allocations for sorting and hashing, various internal caches that each backend process uses, local buffers for accessing temporary relations, and so on. And 3GB is really a theoretical upper limit, if everything is packed optimally into the available address space: I'm not sure how close you can get to that limit in practice before things start breaking.
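To put some rough numbers on that, here's a back-of-the-envelope sketch of the 32-bit address-space budget. The only hard figures are the 4GB of address space and the roughly 1GB the kernel typically reserves on Linux; the other reservations are made-up estimates, purely for illustration:

    GB = 1024 ** 3
    MB = 1024 ** 2

    address_space = 4 * GB      # total virtual address space for a 32-bit process
    kernel_reserved = 1 * GB    # typical 3G/1G user/kernel split on 32-bit Linux

    # Everything else the backend needs must also fit in user space.
    # These figures are illustrative guesses, not measurements.
    other_needs = (
        100 * MB    # executable text, shared libraries, stack
        + 256 * MB  # work_mem / maintenance_work_mem allocations
        + 64 * MB   # relcache, catcache, and other backend-local caches
        + 128 * MB  # temp buffers and miscellaneous heap allocations
    )

    ceiling = address_space - kernel_reserved - other_needs
    print(round(ceiling / GB, 2))   # about 2.46

That's more or less where the 2GB-to-2.5GB figure below comes from, and in practice address-space fragmentation will eat into even that.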
So, if you're running a 32-bit PostgreSQL on a UNIX-like operating system, you probably need to limit shared_buffers to at most 2GB or 2.5GB. If that is less than about 25% of your system memory, you should seriously consider upgrading to a 64-bit build.
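If you're not sure which kind of build you have, one quick way to check on a UNIX-like system is to look at the postgres executable itself. A small sketch, assuming pg_config is on your PATH and the file utility is installed; there are plenty of other ways to get the same answer:

    import os
    import subprocess

    # pg_config ships with PostgreSQL and reports where the server binaries live.
    bindir = subprocess.check_output(["pg_config", "--bindir"], text=True).strip()

    # On Linux, 'file' reports "ELF 32-bit ..." or "ELF 64-bit ..." for the binary.
    postgres_binary = os.path.join(bindir, "postgres")
    print(subprocess.check_output(["file", postgres_binary], text=True).strip())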