Showing posts from 2018

Zero downtime upgrade PostgreSQL 10 to 11.

PostgreSQL 11 has been officially released and it's packed with exciting changes: native declarative partitioning is more robust, parallelism now covers more operations, stored procedures support transactions, and expressions can be compiled just in time, to name just some of the new features. With every major version, Postgres gets better, giving developers more tools and DBAs a better quality of life, making it harder to resist an upgrade. That said, along with all these exciting improvements, new major versions also bring internal system changes that make database clusters incompatible across major versions. This means that, until logical replication arrived in Postgres 10, upgrades were impossible without downtime or third-party replication tools. Story time: before we get into how to upgrade using logical replication, let's see how upgrading and replication evolved over the years.  pg_
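The logical-replication upgrade path mentioned above can be sketched roughly as follows. This is a minimal illustration, not the post's full procedure: the publication/subscription names, hostname, and user are assumptions, and it presumes the 10 primary has wal_level = logical and the schema has already been loaded on the 11 server (logical replication does not copy DDL).

```sql
-- On the PostgreSQL 10 primary (illustrative names throughout):
CREATE PUBLICATION upgrade_pub FOR ALL TABLES;

-- On the new PostgreSQL 11 server, after restoring the schema:
CREATE SUBSCRIPTION upgrade_sub
    CONNECTION 'host=pg10-primary dbname=mydb user=repuser'
    PUBLICATION upgrade_pub;

-- Once the subscriber has caught up, switch the application over
-- to the 11 server and clean up:
-- DROP SUBSCRIPTION upgrade_sub;
```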

Logical Replication in AWS RDS

Recently Amazon announced that Postgres 10 on RDS supports logical replication syntax and can act as either a publisher or a subscriber. The announcement can be found here . To test it out I created two 10.4 RDS instances and put them in the same security, subnet and parameter groups, so I ended up with something like this : user : vasilis password : Ap4ssw0rd db instances : lrnode1 & 2 dbname : lrtest port : 5432 The only thing I had to change to enable logical replication was to set rds.enable_logical_replication to 1 in my parameter group. After verifying connectivity to both instances with psql -h -U vasilis -d lrtest -W I created a dummy table on my publisher and subscriber (lrnode1 and 2): create table lr_test_tbl (     id int primary key,     name text ); On lrnode1 (the publisher) I created a publication: CREATE PUB
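The excerpt cuts off at the publication step; a minimal sketch of how the pair could be wired up, using the table and credentials from the post. The publication/subscription names and the RDS endpoint placeholder are my assumptions, not taken from the article:

```sql
-- On lrnode1 (publisher); publication name is illustrative:
CREATE PUBLICATION mypub FOR TABLE lr_test_tbl;

-- On lrnode2 (subscriber); the host endpoint below is a placeholder
-- for the actual RDS endpoint of lrnode1:
CREATE SUBSCRIPTION mysub
    CONNECTION 'host=lrnode1.example.rds.amazonaws.com dbname=lrtest user=vasilis password=Ap4ssw0rd'
    PUBLICATION mypub;
```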

Implementing a "distributed" reporting server using some of postgres10 features.

Today I will try to show how strong Postgres 10 is by combining different features to create a "distributed" reporting server. The features I will be using are : Logical Replication Partitioning Foreign Data Wrappers Table Inheritance  The scenario we want to implement is the following : we have one central point for inserts, which we will call the bucket; the bucket is partitioned by range, yearly. In my example we have 3 partitions, for 2016, 2017 and 2018, and each partition is logically replicated to one of 3 data nodes, each responsible for 1 year of data. Finally, we have a reporting proxy that is responsible for all selects and connects to each node through foreign data wrappers. The setup consists of 5 docker containers with the following roles: bucket, insert / update / delete; node2016, data holder for 2016; node2017, data holder for 2017; node2018, data holder for 2018; reporting proxy, main point for
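The bucket side of the architecture described above could look roughly like this. A sketch only: the table definition, column names, and publication names are assumptions for illustration, not the post's actual schema. In Postgres 10, leaf partitions are ordinary tables, so each one can be published individually to its data node:

```sql
-- On the bucket node: one range-partitioned table, one partition per year.
CREATE TABLE bucket (
    id      bigint,
    created timestamptz NOT NULL,
    payload text
) PARTITION BY RANGE (created);

CREATE TABLE bucket_2016 PARTITION OF bucket
    FOR VALUES FROM ('2016-01-01') TO ('2017-01-01');
CREATE TABLE bucket_2017 PARTITION OF bucket
    FOR VALUES FROM ('2017-01-01') TO ('2018-01-01');
CREATE TABLE bucket_2018 PARTITION OF bucket
    FOR VALUES FROM ('2018-01-01') TO ('2019-01-01');

-- Each partition gets its own publication, consumed by the matching node:
CREATE PUBLICATION pub_2016 FOR TABLE bucket_2016;
CREATE PUBLICATION pub_2017 FOR TABLE bucket_2017;
CREATE PUBLICATION pub_2018 FOR TABLE bucket_2018;
```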

Postgres10 in RDS, first impressions

As a firm believer in Postgres, and someone who runs Postgres 10 in production and runs RDS in production, I've been waiting for Postgres 10 on RDS to be announced ever since the release last fall. Well, today was not that day, but I was surprised to see that RDS is now sporting a "postgres10" instance you can spin up. I'm not sure if that's there on purpose, but you can be sure I jumped at the chance to get a first look at what the new Postgres 10 RDS world might look like; here is what I found. The first thing I wanted to test was logical replication. By default it was disabled, with rds.logical_replication set to 0. The AWS console allowed me to change this, which also changed wal_level to logical, so I started by creating a simple table to replicate. I created a publication that included my table, but that's where the party stopped. I can't create a role with replication privilege and I can't grant replication to any user : mydb => SELECT SESSION_
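The steps described above, up to the point where things stopped, can be sketched like this. The table, publication, and role names are illustrative assumptions; the final statement is the kind of command the post says was rejected on RDS at the time:

```sql
-- After enabling rds.logical_replication in the parameter group:
SHOW wal_level;   -- should now report 'logical'

CREATE TABLE repl_test (id int PRIMARY KEY, v text);
CREATE PUBLICATION repl_test_pub FOR TABLE repl_test;

-- This is where the party stopped on RDS at the time:
CREATE ROLE repl_user WITH LOGIN REPLICATION;  -- not permitted
```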

AWS Aurora Postgres, not a great first impression

Recently we had a customer facing some memory issues with an Aurora PostgreSQL instance. While loading data or creating GiST indexes, they got the following error : 2018-01-04 14:03:06 UTC:171.32.45.250(35508):root@test:[20457]:ERROR: out of memory 2018-01-04 14:03:06 UTC:171.32.45.250(35508):root@test:[20457]:DETAIL: Failed on request of size 23. 2018-01-04 14:03:06 UTC:171.32.45.250(35508):root@test:[20457]:STATEMENT: INSERT INTO test_table1 VALUES ('', 'Manual', '[,]', 't', 'f', '2016-02-16 15:58:58.016626', '', ''), ('', 'Manual', '[,]', 't', 'f', '2016-02-16 15:58:58.023136', '', ''), ... At the same time they had a major drop in freeable memory