Parallel pg_dump backups
As I mentioned in a previous post, I'm planning to write about the features that 9.3 brought to the game. Today I will explore parallel pg_dump: how it works, how we can benefit from it, and how it compares to classic pg_dump.

First of all, pg_dump supports parallel dumps only when the directory format (-Fd) is used, because the directory format is the only one that allows multiple processes to write data at the same time. A directory-format dump writes one file per relation plus a toc.dat file; it is similar to -Fc in that the output is compressed and supports parallel and selective restore. The switch that enables parallel dumping is -j <njobs>; when it is used, pg_dump opens njobs connections plus one more for the master process.

The test: for this test I created 10 tables with 20 million rows each. As a baseline I will use -Fc, and then I will use -Fd -j, increasing the number of jobs by 1 each time. The disk used was a simple 7200 rpm SATA3 disk connected over USB3. Sketches of the setup and the dump commands follow below.

>...
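Since the actual setup commands are not shown here, below is a minimal sketch of how test tables like these could be generated with generate_series. The table names (test_1 through test_10), the column layout, and the database name testdb are my assumptions for illustration, not the original schema.

```bash
# Hypothetical test setup: 10 tables of 20 million rows each.
# Table names, columns, and database name are placeholders.
for i in $(seq 1 10); do
    psql -d testdb -c "CREATE TABLE test_${i} AS
        SELECT g AS id, md5(g::text) AS payload
        FROM generate_series(1, 20000000) g;"
done
```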
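And a minimal sketch of the baseline and parallel invocations described above, using only the flags the post mentions (-Fc, -Fd, -j, plus -f for the output target). The output paths, database name, and the range of job counts are placeholders.

```bash
# Baseline: classic custom-format dump, single process.
time pg_dump -Fc -f /mnt/usb/testdb.dump testdb

# Parallel: directory format, stepping the job count up by one each run.
# pg_dump creates the target directory, which must not already exist.
for jobs in 1 2 3 4 5; do
    time pg_dump -Fd -j ${jobs} -f /mnt/usb/testdb_dir_${jobs} testdb
done
```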