By Michel Pelletier

We recently helped a customer who was storing 500 million chat messages in a single Postgres table. As the table grew, their queries slowed down.
The simple solution we offered was table partitioning. The difficulty was creating the partitions and migrating the data without blocking queries or taking downtime. This post explores the method we used to solve this: Dynamic Table Partitioning.
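Before the walkthrough, here is a minimal sketch of the core pattern: a parent table partitioned by range on a timestamp column, plus a PL/pgSQL function that creates the child partition for a given day on demand. The `chat_messages` schema here is a hypothetical stand-in, not the customer's actual table.

```sql
-- Hypothetical parent table, range-partitioned by day on created_at.
CREATE TABLE chat_messages (
    id         bigint      NOT NULL,
    user_id    bigint      NOT NULL,
    body       text,
    created_at timestamptz NOT NULL
) PARTITION BY RANGE (created_at);

-- Create the child partition covering a single day, if it doesn't
-- already exist. format() with %I/%L guards against quoting issues.
CREATE OR REPLACE FUNCTION create_chat_partition(day date)
RETURNS void LANGUAGE plpgsql AS $$
DECLARE
    partition_name text := 'chat_messages_' || to_char(day, 'YYYYMMDD');
BEGIN
    EXECUTE format(
        'CREATE TABLE IF NOT EXISTS %I PARTITION OF chat_messages
            FOR VALUES FROM (%L) TO (%L)',
        partition_name, day, day + 1);
END;
$$;
```

With that in place, `SELECT create_chat_partition('2024-01-15');` creates a `chat_messages_20240115` partition holding exactly that day's rows.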
In this article:
- Why Partition Data? The large table problem
- Starting small but getting big
- Dynamic partitioning with PL/pgSQL
- Creating parent tables
- Creating dynamic child tables
- Progressively copying data from the large table
- Setting up a daily cron job to create partitions (both sketched briefly after this list)
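As a taste of those last two steps, here is a hedged sketch of the batched copy: move rows from the old, unpartitioned table into the new partitioned one a few thousand at a time, so no single transaction blocks queries for long. `chat_messages_old` is a hypothetical name for the original table.

```sql
-- Move one batch of rows from the old table into the partitioned table.
-- Run repeatedly until chat_messages_old is empty; small batches keep
-- each transaction short so readers and writers are never blocked long.
WITH batch AS (
    DELETE FROM chat_messages_old
    WHERE id IN (
        SELECT id
        FROM chat_messages_old
        ORDER BY id
        LIMIT 10000
    )
    RETURNING *
)
INSERT INTO chat_messages
SELECT * FROM batch;
```

The daily partition creation can then be scheduled with an ordinary OS cron entry or, if the pg_cron extension is available, from inside the database itself, for example:

```sql
-- Assumes pg_cron is installed; creates tomorrow's partition a few
-- minutes after midnight every day.
SELECT cron.schedule(
    'create-tomorrows-partition',
    '5 0 * * *',
    $$SELECT create_chat_partition(current_date + 1)$$
);
```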
While there is some complexity, I hope this example has given you some ideas on how you too can partition your data for optimal query performance. Nice one!