Showing posts from June, 2017

PostgreSQL on ZFS with BPF tracing on top.

At OmniTI we love Solaris; my personal favourite features are ZFS and DTrace. Unfortunately, not many people run Postgres on Solaris, so I decided to get similar features on Linux. Instead of DTrace I'll use BPF, in-kernel bytecode that can be used for tracing, introduced in recent (4.x) kernels. This will be the first post of a three-part series: in this post we'll start with the setup, in part #2 we'll cover ZFS and how to use it for backups / snapshots, and in part #3 we'll dig into BPF a bit more. Step 1 is to set up a new Ubuntu. I set up a VM using ubuntu-16.04.2-server-amd64.iso. As root:
Add the repo for bcc:
> echo "deb [trusted=yes] https://repo.iovisor.org/apt/xenial xenial-nightly main" | sudo tee /etc/apt/sources.list.d/iovisor.list
> sudo apt-get update
Install all necessary and some optional packages:
> apt-get install -y sudo wget apt-transport-https joe less build-essential libreadline-dev \
zlib1g-dev flex bison libxml2-dev libxslt-dev l...
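As a rough sketch of where the setup goes from there, assuming Ubuntu 16.04 (xenial), the iovisor nightly repo added above, and the bcc-tools / libbcc-examples package names (my assumptions, not the exact list from the post):
> sudo apt-get update
> sudo apt-get install -y bcc-tools libbcc-examples linux-headers-$(uname -r)
# quick sanity check: snoop on newly exec'd processes system-wide for a few seconds
> sudo /usr/share/bcc/tools/execsnoop
If execsnoop prints a line per process being started, the kernel headers and bcc are wired up correctly and the rest of the tools under /usr/share/bcc/tools should work too.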

An unusual upgrade

I have mentioned in previous posts that in my 4 years with OmniTI we've tackled a lot of migrations, most of which follow the "typical" procedure; the methodology we use is more or less explained here. Last week we had a use case for a kind of "unusual" upgrade: a 9.2 compiled with "--disable-integer-datetimes", meaning that all datetimes were represented internally as floating point, which was the default up to 8.3. This changed in (I think) 8.4, when datetimes started being represented as int64, which offers more precision. The requirement was to migrate the database to a new one that would use integer datetimes, with the minimum possible downtime. Obviously a direct upgrade wouldn't work, and pg_dump / restore was not an option, so we decided to approach and test this scenario differently. The general idea is the following: upgrade to a 9.6 that was compiled with "--disable-integer-datetimes" and then, using something like p...
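For what it's worth, a quick way to confirm which datetime representation a running server was built with, assuming you have psql access (a minimal example, not part of the original migration steps):
> psql -c "SHOW integer_datetimes;"
# prints "off" on a build configured with --disable-integer-datetimes, "on" otherwise
And the intermediate 9.6 build would be configured with the same flag, along the lines of (the prefix path here is just an example):
> ./configure --disable-integer-datetimes --prefix=/opt/pgsql-9.6-float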

Tip for faster wal replay on a slave

I've been in situations where I need a slave db to replay a lot of wal files fast, and by a lot I mean tens of thousands. This could happen because a reporting database is being refreshed, or simply because a slave was down for an extended period of time. It's known that lowering shared_buffers speeds up wal replay, for obvious reasons, but by how much? I did a benchmark on an old server and the results are interesting: with 32GB of shared buffers and 6390MB of wals (1840 wal files) it took 1408 seconds to complete the replay; with 64MB of shared buffers and 6510MB of wals (1920 wal files) it took 1132 seconds. My test was done by stopping the slave, inserting 50 million rows into a test table, waiting for the wal transfer to complete, then stopping the master, starting the slave, and watching the OmniPITR logs. The performance gain in wal replay was about 20% on postgres 10beta1, which doesn't sound bad, especially in times of need. Thanks for...
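A minimal sketch of the tweak itself, assuming a standard $PGDATA, pg_ctl in the PATH, and shared_buffers already set (uncommented) in postgresql.conf; paths and values are illustrative, not the exact benchmark setup:
> sed -i "s/^shared_buffers.*/shared_buffers = 64MB/" $PGDATA/postgresql.conf
> pg_ctl -D $PGDATA restart
# ... let the slave chew through the wal backlog ...
> sed -i "s/^shared_buffers.*/shared_buffers = 32GB/" $PGDATA/postgresql.conf
> pg_ctl -D $PGDATA restart
The idea is simply to run the replay with a small buffer cache and put the normal value back (with one more restart) once the slave has caught up.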