I've been in situations where I needed a slave db to replay a lot of WAL files fast, and by a lot I mean tens of thousands. This can happen because a reporting database is being refreshed, or simply because a slave was down for an extended period of time. It's known that lowering shared_buffers speeds up WAL replay, but by how much?
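As a sketch of what that change looks like in practice (the values here are illustrative, not a recommendation), you would temporarily lower shared_buffers in the slave's postgresql.conf and restart it before the replay:

```
# postgresql.conf on the slave -- temporary, just for the replay
shared_buffers = 64MB     # lowered from e.g. 32GB; restore afterwards
```

Remember that shared_buffers requires a restart to take effect, and that the low value should be reverted once the slave has caught up.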
I ran a benchmark on an old server and the results are interesting:
With 32GB of shared_buffers and 6390MB of WALs (1840 wal files),
the replay took 1408 seconds to complete.
With 64MB of shared_buffers and 6510MB of WALs (1920 wal files),
the replay took 1132 seconds to complete.
My test was done by stopping the slave, inserting 50 million rows into a test table, waiting for the WAL transfer to complete, then stopping the master, starting the slave, and watching the OmniPITR logs.
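The procedure above could be sketched roughly like this (a hypothetical sketch: the data directories, port, database and table names are assumptions, not the exact setup used in the benchmark):

```shell
# 1. Stop the slave so WAL accumulates instead of being replayed
pg_ctl -D /data/slave stop -m fast

# 2. Generate a large amount of WAL on the master (assumed table test_table)
psql -p 5432 -d testdb -c "
  INSERT INTO test_table (id)
  SELECT g FROM generate_series(1, 50000000) AS g;"

# 3. Wait until all WAL segments have shipped to the slave,
#    then stop the master so no new WAL is produced
pg_ctl -D /data/master stop -m fast

# 4. Start the slave and time the replay
#    (in the benchmark, progress was watched via the OmniPITR logs)
time pg_ctl -D /data/slave start -w
```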
The performance gain in WAL replay was about 20% on Postgres 10beta1, which doesn't sound bad, especially in times of need.
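As a quick sanity check on that figure, here is the arithmetic on the numbers above: the ~20% is the wall-clock reduction, and the per-MB replay throughput improves a bit more, since the second run actually replayed slightly more WAL.

```python
# Figures from the benchmark: (MB of WAL replayed, seconds taken)
big_buffers   = (6390, 1408)   # shared_buffers = 32GB
small_buffers = (6510, 1132)   # shared_buffers = 64MB

# Replay throughput in MB/s for each run
rate_big   = big_buffers[0] / big_buffers[1]      # ~4.54 MB/s
rate_small = small_buffers[0] / small_buffers[1]  # ~5.75 MB/s

# Wall-clock reduction -- this is the ~20% quoted above
gain = (big_buffers[1] - small_buffers[1]) / big_buffers[1]

print(f"{rate_big:.2f} MB/s vs {rate_small:.2f} MB/s, gain {gain:.1%}")
# -> 4.54 MB/s vs 5.75 MB/s, gain 19.6%
```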
Thanks for reading