Comments on "AWS Aurora Postgres, not a great first impression" (Evol Monkey)

Alexander Korotkov (2018-01-10 14:13):
That looks like a memory leak. Does it happen only on Amazon? Could you try to reproduce it on a plain PostgreSQL instance?

Anonymous (2018-01-10 23:24):
Hi Vasilis, I faced the same problem after migrating to Aurora (coming from RDS). I filed a bug report with Amazon, who fixed it in a private patch for us. The fix will be publicly available in Aurora 1.1, which is due to be released sometime in January. In the meantime you should file a bug report and ask for the same private patch.

Evol Monkey (2018-01-11 00:25):
Wow, thank you very much for your reply, I will definitely do that :)

Evol Monkey (2018-01-11 00:26):
It only happens on Amazon. I tried the same thing on a plain Postgres instance and it didn't break a sweat, which makes sense: memory leaks like this have been extinct in Postgres for a very long time.

Tomasz Ostrowski (2018-01-11 00:28):
My employer is also evaluating Aurora, and we noticed that the default memory limits are much more aggressive than on classic RDS. We were told the cause is that Aurora does not rely on, or even use, the OS cache, which means that instance RAM not provisioned to shared_buffers and not currently used for work_mem by queries is simply wasted. The default Aurora memory limits often left less than 5% freeable memory, so it's quite easy to crash the instance with a few memory-hungry queries.

kevinj (2018-01-11 13:20):
(Full disclosure: I am the Product Manager for Amazon Aurora with PostgreSQL compatibility.) We set the default shared_buffers to 75% of total memory because, as you state, there is no file system cache with Aurora storage. If you need to support large index creates, or queries that sort large tables, you should consider increasing the work_mem parameter. However, you may need to decrease shared_buffers at the same time to avoid running out of memory. We are working on recommendations and best practices for setting Amazon Aurora PostgreSQL memory-related parameters based on your workload, and will publish those soon.

Evol Monkey (2018-01-11 13:36):
Thanks for your reply Kevin. Increasing work_mem sounds reasonable, but changing shared_buffers would be tricky since it requires a restart. I've seen workloads where CREATE INDEX on temp tables is part of the application lifecycle, so I was wondering if you could share any insight on how performance with a lower shared_buffers setting compares to the default.

kevinj (2018-01-11 14:05):
Depending on the workload, performance with lower shared_buffers might be affected, or it might not. If the working set of data no longer fits into memory when you reduce shared_buffers, then performance will suffer; if the working set didn't fit into memory before reducing shared_buffers, as would be typical of an analytics-oriented workload, then it probably won't have much of an impact.

kevinj (2018-01-12 08:09):
I am one of the Product Managers for Amazon Aurora with PostgreSQL compatibility. Your problem description is similar to a GiST-related problem we have fixed in an upcoming patch. Please contact aurora-postgresql-support@amazon.com directly and I will be happy to help work through your issue.
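Kevin's suggestion of raising work_mem can be applied per session, which sidesteps the restart concern raised in the thread (shared_buffers itself can only be changed with a restart). A minimal sketch follows; the sizes and the table/index names are illustrative assumptions, not values from the thread, and note that CREATE INDEX specifically draws on maintenance_work_mem rather than work_mem:

```sql
-- Raise memory budgets for this session only; no server restart needed.
-- '1GB', '256MB', and the table/index names are illustrative assumptions.
SET maintenance_work_mem = '1GB';   -- used by CREATE INDEX (not work_mem)
SET work_mem = '256MB';             -- per sort/hash budget for ordinary queries
CREATE INDEX idx_events_ts ON events (created_at);
RESET maintenance_work_mem;         -- return to the instance defaults
RESET work_mem;
```

Using SET LOCAL inside a transaction restores the previous values automatically at commit or rollback, which is safer when such statements run from connection-pooled application code.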