Posts

Showing posts from 2017

Psql directly from Atom text editor.

I'm not a big fan of Postgres clients like pgAdmin; not sure why, but I have always favoured psql over everything else. A friend of mine wrote an Atom package that brings psql into Atom, which is awesome. It's tested on Linux and (briefly) on OSX. On OSX I only had to change the path to psql because for some reason it didn't pick it up from the environment. Feel free to check it out and spam him with any issues you might have. The link for the package can be found here. Thanks for reading. Vasilis Ventirozos OmniTI Computer Consulting

Monitoring statements in pg10

Recently we migrated our first client to Postgres 10.1; the database used to run on 9.2. Everything went smoothly, with a total downtime of 12 seconds, but obviously monitoring broke, so I had to fix that, and I thought I'd share the new statements to save people some trouble. The statements that broke had to do with the renaming of xlog to wal (and location to lsn), which mostly affected the replication monitoring statements. Below is what I used to have on PG 9 and what I use now on PG 10.
Replication slot monitoring, Postgres 9: SELECT slot_name, database, active, pg_xlog_location_diff(pg_current_xlog_insert_location(), restart_lsn) AS retained_bytes FROM pg_replication_slots;
Replication slot monitoring, Postgres 10: SELECT slot_name, database, active, pg_wal_lsn_diff(pg_current_wal_insert_lsn(), restart_lsn) AS retained_bytes FROM pg_replication_slots;
Replication lag monitoring, Postgres 9: SELECT pg_xlog_location_diff(pg_current_xlog_insert_locat
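Not part of the original excerpt, but as a quick reference, here is a minimal sketch of a matching lag check for PostgreSQL 10, built from the same renamed functions and the pg_stat_replication column names in 10 (the database name is an example); run it on the primary:

# Replication lag (in bytes) per connected standby on PostgreSQL 10.
# flush_location was renamed to flush_lsn in the pg_stat_replication view.
psql -d postgres -c "
  SELECT client_addr,
         pg_wal_lsn_diff(pg_current_wal_insert_lsn(), flush_lsn) AS flush_lag_bytes
    FROM pg_stat_replication;"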

PostgreSQL on ZFS

What is ZFS (a paste from Wikipedia): ZFS is a combined file system and logical volume manager designed by Sun Microsystems. The features of ZFS include protection against data corruption, support for high storage capacities, efficient data compression, integration of the concepts of filesystem and volume management, snapshots and copy-on-write clones, continuous integrity checking and automatic repair, RAID-Z and native NFSv4 ACLs.
Advantages of ZFS. Data integrity is guaranteed. ZFS has volume management at the filesystem level, and this is what enables copy-on-write. In a conventional filesystem, a changed data block is overwritten in place before the new write is complete, so a crash at that moment leaves the data damaged. ZFS does not touch the old data until the new write is verified, which means that your data is safe during a crash. ZFS also uses checksums to ensure that all data is correct; every write is tested. ZFS doesn't just protect data with copy on write, it also offers additional
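To make this concrete, here is a rough sketch of creating a ZFS dataset for a PostgreSQL data directory; the pool name, device and mountpoint are made up for illustration and are not taken from the post:

# Create a pool on a spare disk (device name is an example).
zpool create pgpool /dev/sdb
# Dataset for the data directory: lz4 compression, 8k recordsize to match
# PostgreSQL's block size, and no atime updates.
zfs create -o compression=lz4 -o recordsize=8k -o atime=off \
    -o mountpoint=/var/lib/postgresql pgpool/pgdata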

PostgreSQL on ZFS with BPF tracing on top.

At OmniTI we love Solaris; my personal favourite features are ZFS and DTrace. Unfortunately not many people run Postgres on Solaris, so I have decided to implement similar features on Linux. Instead of DTrace I'll install BPF, in-kernel bytecode that can be used for tracing, introduced in recent kernels (4.x). This will be part of a three-post series. In this post we'll start with the setup, in part #2 we'll cover ZFS and how to use it for backups / snapshots, and in part #3 we'll dig into BPF a bit more. Step 1 is to set up a new Ubuntu. I set up a VM using ubuntu-16.04.2-server-amd64.iso. As root:
Add the repo for bcc:
> echo "deb [trusted=yes] https://repo.iovisor.org/apt/xenial xenial-nightly main" | sudo tee /etc/apt/sources.list.d/iovisor.list
> sudo apt-get update
Install all necessary and some optional packages:
> apt-get install -y sudo wget apt-transport-https joe less build-essential libreadline-dev \ zlib1g-dev flex bison libxml2-dev libxslt-dev l
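Once bcc is installed from that repo, the ready-made tracing tools can be used right away. A small sketch follows; the package names and tool path are an assumption based on the iovisor packaging of the time, not something shown in the truncated excerpt:

# Install the bcc tools and matching kernel headers (names per the iovisor docs).
sudo apt-get install -y bcc-tools libbcc-examples linux-headers-$(uname -r)
# Example: a latency histogram of block device I/O while postgres is under load,
# printed every 5 seconds, 3 times.
sudo /usr/share/bcc/tools/biolatency 5 3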

An unusual upgrade

I have mentioned in previous posts that in my 4 years with OmniTI we've tackled a lot of migrations, most of them following the "typical" procedure. The methodology we use is more or less explained here. Last week we had a use case for a kind of "unusual" upgrade: a 9.2 compiled with "--disable-integer-datetimes", meaning that all datetimes were represented as floating point internally, which was the default up to 8.3. This changed in (I think) 8.4, where datetimes started being represented as int64, which offers more precision. The requirement was to migrate the database to a new one that would use integer datetimes, with the minimum possible downtime. Obviously a direct upgrade wouldn't work and pg_dump / restore was not an option, so we decided to approach and test this scenario differently. The general idea is the following: Upgrade to a 9.6 that was compiled with "--disable-integer-datetimes" and then using something like p
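For reference, a rough sketch of how such a 9.6 build could be configured (the install prefix is arbitrary); --disable-integer-datetimes was still accepted by the 9.6 configure script and was only removed in PostgreSQL 10:

# Build a 9.6 that keeps float datetimes, matching the old 9.2 on-disk behaviour.
./configure --prefix=/opt/pgsql-9.6-float --disable-integer-datetimes
make && make install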

Tip for faster wal replay on a slave

I've been in situations where I need a slave DB to replay a lot of WAL files fast, and by a lot I mean tens of thousands. This could happen because a reporting database is being refreshed, or simply because a slave was down for an extended period of time. It's known that lowering shared_buffers speeds up WAL replay, for obvious reasons, but by how much? I did a benchmark on an old server and the results are interesting: with 32GB of shared buffers and 6390MB of WAL (1840 WAL files) it took 1408 seconds to complete the replay; with 64MB of shared buffers and 6510MB of WAL (1920 WAL files) it took 1132 seconds. My test was done by stopping the slave, inserting 50 million rows into a test table, waiting for the WAL transfer to complete, then stopping the master, starting the slave and watching the OmniPITR logs. The performance gain in WAL replay was about 20% on Postgres 10beta1, which doesn't sound bad, especially in times of need. Thanks for readin
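A minimal sketch of how to apply this in practice (paths are examples, and the normal shared_buffers value is assumed to live in postgresql.conf): start the standby once with a tiny shared_buffers just for the replay, then restart it with its regular settings.

# On the stopped standby: override shared_buffers only for this start-up.
pg_ctl -D "$PGDATA" -o "-c shared_buffers=64MB" start
# ... wait until the standby has caught up (check the logs, or pg_stat_replication
#     on the master), then go back to the value from postgresql.conf:
pg_ctl -D "$PGDATA" restart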

Reusing an old master (as slave) after a switchover

Today's blogpost is old news, but it's not very well known, and it has to do with how to add an old master back after a slave has been promoted. Fujii Masao explained the situation in his patch back in the day. So in today's post I will be demonstrating a combination of replication slots, for retaining the writes on a promoted slave, and how to re-introduce an old master to the replication. Say that we have a master-slave setup; stop the master gracefully, then create and activate a replica slot on the slave:
monkey=# SELECT * FROM pg_create_physical_replication_slot('this_is_a_replica_slot');
       slot_name        | xlog_position
------------------------+---------------
 this_is_a_replica_slot |
(1 row)
postgres@bf9823730feb:~$ pg_receivexlog -D . -S this_is_a_replica_slot -v -h 10.0.0.3 -U repuser
pg_receivexlog: starting log streaming at 0/4000000 (timeline 1)
^C
pg_receivexlog: received interrupt signal, exiting
pg_receivexlog: not renaming "000000010000000
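To complete the picture, a hedged sketch of the standby configuration on the old master itself (pre-PostgreSQL-12 style recovery.conf; the host and user are the ones shown above, everything else is illustrative and not from the truncated excerpt):

# On the old master, now becoming a standby of the promoted node.
cat > "$PGDATA/recovery.conf" <<'EOF'
standby_mode = 'on'
primary_conninfo = 'host=10.0.0.3 user=repuser'
primary_slot_name = 'this_is_a_replica_slot'
recovery_target_timeline = 'latest'
EOF
pg_ctl -D "$PGDATA" start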

PostgreSQL statistics as data

Postgres provides a lot of information when it comes to statistics. The only problem is that all the statistics are frozen in time the moment you query the system catalog views. Sometimes you need to see what's happening over time or in the past, and this is easily solvable with a script and a crontab entry. At OmniTI we use a project called system_monitoring a lot. This is basically a Perl scheduler that runs as a daemon, exporting useful information from Postgres (and more) to tab-delimited text files for later investigation. If you have decent one-liner skills this is very helpful: you can answer questions about what happened yesterday night, what was running, who locked whom, etc. It's very good, but data analysis can be a pain in the ass sometimes, especially if you need to reimport the data into a db and analyze it with SQL. So I decided to approach this in a different way, by storing the statistics in a schema. That's why I wrote statz. Statz is a very simple python script that will gather al
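The core idea can be sketched in a few lines (the schema, table and database names below are made up for illustration and are not necessarily what statz uses): snapshot a statistics view into a history table with a timestamp, then let cron do the rest.

# One-time setup: a history table shaped like pg_stat_user_tables plus a timestamp.
psql -d mydb -c "CREATE SCHEMA statz_demo;"
psql -d mydb -c "CREATE TABLE statz_demo.user_tables_history AS
                 SELECT now() AS snap_time, * FROM pg_stat_user_tables LIMIT 0;"
# crontab entry: take a snapshot every 5 minutes.
# */5 * * * * psql -d mydb -c "INSERT INTO statz_demo.user_tables_history SELECT now(), * FROM pg_stat_user_tables;"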

PostgreSQL upgrades - Methodology

Yesterday I came across a conversation in the Postgres Slack channel; someone was interested in upgrading his database, and I realised that there is a pattern I see all the time: those who are interested in upgrading major Postgres versions only ask questions about pg_upgrade. Don't get me wrong, pg_upgrade is awesome, it's well documented, and it explains things much better than I can. During my time at OmniTI we've done many... MANY upgrades, some of them pretty challenging, so I have come to realise that the upgrade tool is not as important as the plan and the methodology, especially during an upgrade or a migration. For this reason this post won't be about how to use pg_upgrade; instead it will be all about methodology, minimising downtime and reducing risk to almost zero. For this post I will be showing the steps I would follow in order to upgrade an 8.4.22 to 9.6.1. I will be using docker containers created from a dockerfile that I wrote for lab use (more info about this d