Backups, Restores and PITR nowadays
It's been almost a year since 9.4 was released. It included many nice features, but the one that changed my day-to-day administration was replication slots.
At OmniTI we use something called OmniPITR a lot. OmniPITR could be called a WAL management suite: it can be used to ship WALs, to replay them, to clean them up when they are no longer needed, and to take backups. The way I usually use it is to first set up streaming replication to a replica, and on top of the streaming replication set up WAL shipping. The reason is simple: I want to have all the WALs around in case I lose the slave, and I never really liked the wal_keep_segments approach.
Backups are usually taken from a slave in order to reduce load on the master.
Briefly, OmniPITR does this by sending pg_start/stop_backup to the master; it puts a pause-removal file on the slave so WALs are kept, and it produces two tar.gz files, one for the base backup and one for the WALs.
I usually have a backup server keeping all the WAL files and the backups, which means a second (direct or indirect) WAL shipping destination.
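Under the hood this is roughly equivalent to the following manual steps (a sketch only; host, paths and the backup label are illustrative):

# on the master: enter backup mode (the second argument requests a fast checkpoint)
psql -h <master ip> -U postgres -c "SELECT pg_start_backup('weekly', true)" template1
# on the slave: archive the data directory, excluding WALs
tar -czf /data/backups/base.tar.gz --exclude=pg_xlog -C /data/pgdata .
# on the master: leave backup mode
psql -h <master ip> -U postgres -c "SELECT pg_stop_backup()" template1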
All this is nice and it works really well, but with replication slots it can be simpler and more efficient.
Recently, I had to set up backups for a one-master, two-slave setup on 9.4.
The requirements for the backup were the following:
- The backup will be taken from a slave.
- The backup will be incremental, allowing PITR.
- The backup will be stored remotely, on another machine.
- The fewest possible WAL shipping targets.
- Replicas should be identical, and adding a new replica should be trivial.
- Backups should be initiated from the server that keeps the backups, not from a slave.
- Backups should be tested by restoration every week.
Pretty straightforward stuff, really.
I set up archiving to the backup server with a simple rsync-over-ssh command:
archive_command = 'rsync -a %p postgres@<Backup server IP>:/data/walarchive/%f'
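To verify that archiving works end to end, you can force a WAL switch on the master and check that the segment lands on the backup server (a quick sanity check using the paths from this post):

# on the master: force the current WAL segment to be archived
psql -c "SELECT pg_switch_xlog()" template1
# on the backup server: the newest segment should appear shortly
ls -lt /data/walarchive | head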
I created a backup user that authenticates via .pgpass:
touch ~/.pgpass ; chmod 0600 ~/.pgpass
and added the following entries:
<master ip>:5432:template1:backup:pgbackup
<slave1 ip>:5432:template1:backup:pgbackup
<slave2 ip>:5432:template1:backup:pgbackup
<slave-x ip>:5432:template1:backup:pgbackup
I allowed my backup user in pg_hba.conf (on all DB servers),
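Note that pg_basebackup connects to the special replication "database", so pg_hba.conf needs a replication entry and the .pgpass database field has to match it (replication, or *); the role itself needs the REPLICATION privilege. Roughly like this (the CREATE ROLE line is an assumption about how the user was created):

# on the master: create the backup role, assuming it doesn't exist yet
psql -c "CREATE ROLE backup LOGIN REPLICATION PASSWORD 'pgbackup'" template1
# pg_hba.conf, on every db server:
host    replication    backup    <backup server ip>/32    md5
host    template1      backup    <backup server ip>/32    md5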
and then I simply used pg_basebackup like this:
rm -rf /data/base/* && pg_basebackup -D /data/base/ -h <slave ip> -U backup -Ft -z && mv /data/base/base.tar.gz /data/backups/basebackup_`date +"%Y-%m-%d"`.tar.gz
I'd like it if pg_basebackup could customize the backup name and if it used replication slots (coming in 9.5), but neither is really a problem when it comes to backups.
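Since one requirement is that backups are initiated from the backup server, the one-liner above can live in a small script there, run from cron (a sketch; the script name is hypothetical):

#!/bin/bash
# weekly_backup.sh -- hypothetical wrapper, run weekly from the backup server
set -e
rm -rf /data/base/*
pg_basebackup -D /data/base/ -h <slave ip> -U backup -Ft -z
mv /data/base/base.tar.gz /data/backups/basebackup_$(date +"%Y-%m-%d").tar.gz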
I added a recovery.conf that looks like this:
restore_command = '<PATH/TO/pg_standby> -t /data/backups/failover.now -l /data/walarchive %f %p %r'
trigger_file = '/data/backups/failover.now'
recovery_end_command = 'rm /data/backups/failover.now'
recovery_target = 'immediate'
The parameter recovery_target = 'immediate' specifies that recovery should end as soon as a consistent state is reached, i.e. as early as possible. When restoring from an online backup, this means the point where taking the backup ended.
NOTE that a recovery.conf file will already exist if the backup was taken from a slave; always remember to edit it and replace its entries with the ones above.
A few notes before you start the restored database: remove all old logs from pg_log, and verify that:
archive_command = '/bin/true'
synchronous_standby_names = ''
so the restored instance neither re-archives WALs back into your archive nor waits for synchronous standbys.
At this point all you have to do is start the database and monitor the log.
The log file of a freshly restored database should look like this:
postgres@backup:/data/backups/pg_log$ tail -f postgresql-2015-11-13_020746.log
2015-11-13 02:07:46 EET [] [2344]: [1-1] user=,db=,e=00000 LOG: database system was interrupted; last known up at 2015-11-13 02:06:20 EET
2015-11-13 02:08:10 EET [] [2344]: [2-1] user=,db=,e=00000 LOG: starting point-in-time recovery to earliest consistent point
2015-11-13 02:08:10 EET [] [2344]: [3-1] user=,db=,e=00000 LOG: restored log file "00000002.history" from archive
2015-11-13 02:08:10 EET [] [2344]: [4-1] user=,db=,e=00000 LOG: restored log file "000000020000000200000054" from archive
2015-11-13 02:08:10 EET [] [2344]: [5-1] user=,db=,e=00000 LOG: redo starts at 2/54000028
2015-11-13 02:08:10 EET [] [2344]: [6-1] user=,db=,e=00000 LOG: consistent recovery state reached at 2/540000F0
2015-11-13 02:08:10 EET [] [2344]: [7-1] user=,db=,e=00000 LOG: recovery stopping after reaching consistency
2015-11-13 02:08:10 EET [] [2344]: [8-1] user=,db=,e=00000 LOG: recovery has paused
2015-11-13 02:08:10 EET [] [2344]: [9-1] user=,db=,e=00000 HINT: Execute pg_xlog_replay_resume() to continue.
2015-11-13 02:08:10 EET [] [2342]: [3-1] user=,db=,e=00000 LOG: database system is ready to accept read only connections
At this point run:
psql -c "select pg_xlog_replay_resume()" template1
You should see this in the log file:
2015-11-13 02:10:08 EET [] [2344]: [13-1] user=,db=,e=00000 LOG: archive recovery complete
2015-11-13 02:10:09 EET [] [2344]: [14-1] user=,db=,e=00000 LOG: MultiXact member wraparound protections are now enabled
2015-11-13 02:10:09 EET [] [2342]: [4-1] user=,db=,e=00000 LOG: database system is ready to accept connections
2015-11-13 02:10:09 EET [] [2394]: [1-1] user=,db=,e=00000 LOG: autovacuum launcher started
Now, assuming that you want to perform PITR, follow exactly the same restore procedure as described above, but this time the recovery.conf should look like this:
restore_command = '<PATH/TO/pg_standby> -t /data/backups/failover.now -l /data/walarchive %f %p %r'
#recovery_target_time = '2015-11-13 00:09:00'
# or
#recovery_target_xid = '1966'
trigger_file = '/data/backups/failover.now'
#recovery_target_inclusive = 'true'
recovery_end_command = 'rm /data/backups/failover.now'
recovery_target_time: This parameter specifies the time stamp up to which recovery will proceed.
recovery_target_xid: This parameter specifies the transaction ID up to which recovery will proceed. Keep in mind that while transaction IDs are assigned sequentially at transaction start, transactions can complete in a different numeric order. The transactions that will be recovered are those that committed before (and optionally including) the specified one. The precise stopping point is also influenced by recovery_target_inclusive.
recovery_target_inclusive: Specifies whether to stop just after the specified recovery target (true), or just before the recovery target (false). Applies when either recovery_target_time or recovery_target_xid is specified. This setting controls whether transactions having exactly the target commit time or ID, respectively, will be included in the recovery. Default is true.
The rest of the procedure is identical to the one previously described.
Automate the restore procedure on the backup server, set up some maintenance crontab entries that delete old backups and WALs, and you have yourself a very simple but efficient backup strategy.
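The maintenance entries could be as simple as this (a sketch; the four-week retention is an assumption, and the WAL segment passed to pg_archivecleanup would come from the oldest kept backup's .backup label file):

# crontab on the backup server (illustrative schedule)
# delete base backups older than four weeks
0 3 * * 0  find /data/backups -name 'basebackup_*.tar.gz' -mtime +28 -delete
# remove archived WALs that predate the oldest base backup still kept
0 4 * * 0  pg_archivecleanup /data/walarchive <oldest needed WAL segment>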
Remember that testing your backups is just as important as taking them!
Thanks for reading
Vasilis
Comments

Comment: Very nice article! But I have a question: suppose a DELETE statement without a WHERE clause is run between 8 and 10 am, and you want to retrieve the information as it was just before the deletion. How could you do this? Is there any way to apply WAL files step by step up to the moment just before the deletion?
Reply: If you don't have that DELETE logged, so you don't know the exact timestamp or transaction ID it happened at, you could restore to 8:00; the database would then pause, showing:
2015-11-13 02:08:10 EET [] [2344]: [7-1] user=,db=,e=00000 LOG: recovery stopping after reaching consistency
2015-11-13 02:08:10 EET [] [2344]: [8-1] user=,db=,e=00000 LOG: recovery has paused
Check how things look (the DB will be read-only) and advance forward by changing recovery.conf. But you can't go backwards, so you might have to restore twice if you pass the point where the DELETE happened (once to find the exact time and once for the actual restore). Generally it's a good idea to have good logging in place so you know when things happened.
Comment: Thanks for the answer. It's a shame that a "rollback" of the last WAL applied can't be done...
Reply: If you know the name of the relation from which you deleted data (and its OID, via the oid2name tool), you could use pg_xlogdump to get the transaction ID (or timestamp) of the deletion. Then you could use recovery_target_xid or recovery_target_time to restore to the point just before the DELETE.
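A sketch of that approach (database, table name and WAL segment are illustrative):

# map the table name to its OID/filenode
oid2name -d mydb -t deleted_table
# scan archived WAL for heap DELETE records and note the xid
pg_xlogdump -r Heap -p /data/walarchive 000000020000000200000054 | grep -i DELETE
# then restore with, in recovery.conf:
# recovery_target_xid = '<xid of the delete>'
# recovery_target_inclusive = 'false'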