Tuesday, 8 October 2013

The Cost of Reliability

On the 9th of September, PostgreSQL 9.3 was released with many exciting new features. This post is the first of many (?) that will explore these new features and try to evaluate them. I will start with data checksums, a feature that makes it possible for PostgreSQL to checksum data pages and report corruption.
I have always liked how PostgreSQL doesn't cut corners when it comes to reliability, even when that comes with a performance cost (and it usually does). But how does this feature work, and how big is that cost? That's why I'm writing this post :)

Checksums are set immediately prior to flush out of shared buffers
and checked when pages are read in again.

I used two database clusters, one with the feature enabled and one without, both with the same configuration file, and ran three basic operations: one insert and two updates. The results:
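The table definition isn't shown in the post; judging from the statements below, it would be something like the following (the column types are assumptions):

```
monkey_no_sum=# create table test_of_sum (id int, name text, address text);
CREATE TABLE
```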

No checksums:

monkey_no_sum=# insert into test_of_sum (id) select generate_series (1,10000000);
INSERT 0 10000000
Time: 26226.654 ms
monkey_no_sum=# update test_of_sum set name = 'The Monkey';
UPDATE 10000000
Time: 51903.185 ms
monkey_no_sum=# update test_of_sum set name = 'The Monkey', address = 'somewhere in London';
UPDATE 10000000
Time: 70125.720 ms


With checksums:

monkey_sum=# insert into test_of_sum (id) select generate_series (1,10000000);
INSERT 0 10000000
Time: 23724.373 ms
monkey_sum=# update test_of_sum set name = 'The Monkey';
UPDATE 10000000
Time: 74446.178 ms
monkey_sum=# update test_of_sum set name = 'The Monkey', address = 'somewhere in London';
UPDATE 10000000
Time: 78987.092 ms




Apart from the performance impact (there is obviously extra overhead when you checksum every data page; in my runs the first update was roughly 40% slower), there are other things that make this great feature hard to use. First of all, it is a cluster-wide setting, which means that if you want it only on your second database, that database cannot live in the same data directory. Also, you cannot disable it once the cluster has been created.
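To illustrate: the choice is made once, at cluster creation time, with initdb. A minimal sketch (the data directory path is hypothetical):

```shell
# Create a new cluster with data checksums enabled
# (-k / --data-checksums, new in PostgreSQL 9.3).
# There is no way to turn this off again later.
initdb -k -D /path/to/data_sum

# Verify the setting: pg_controldata reports the data page
# checksum version (0 means checksums are disabled).
pg_controldata /path/to/data_sum | grep checksum
```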

Don't get me wrong, I LOVE having this! I think it's absolutely brilliant that I have the option to use it when I can. But that's the problem: I can't :(
If this feature were object-wide (tables / indexes), it would have a place in production systems, leaving it in the hands of the DBA where to enable it and where not to. Even if it were database-wide, I would enable it on a secondary database that I run on the same server and whose performance I don't care about. Even just being able to turn it on and off would be very useful.

There are good reasons why this feature doesn't work the way I would like: the whole thing happens in shared buffers, and shared buffers are a cluster-wide entity. I'm pretty positive, though, that this is just the beginning, and that we will soon see an on/off switch.
Until then, enjoy having this on small databases, or wherever reliability REALLY matters.



-- Thanks for reading






4 comments:

  1. Nice write up!

    Can you throw in a third test? A 9.3 database without checksums running on a ZFS file system? ZFS would give you the same safety, correct? If so, it would be nice to see its performance impact compared to your other two tests.

    ReplyDelete
    Replies
    1. That's actually a great suggestion, you will see the results posted shortly.

      Delete
  2. Why was the initial load faster with checksumming enabled? Did you do more than one run to verify these numbers? Did you checkpoint between each operation? Was there anything else doing IO on the system?

    ReplyDelete
  3. The tests were run 5 times on each database, with (of course) checkpoints between all statements and restarts of the databases. The disk was also isolated from other IO operations. The output you see here is formatted, just to keep it short and simple. In all 5 runs the insert took about the same time (more or less) with checksums disabled. The updates showed the big difference. I plan on reading the source to get a better idea of how this works, but from the short description the developers provided and the results I got, I assume that checksumming a page that changes produces the biggest overhead.

    ReplyDelete