From MSSQL to Cloud Postgres: Unexpected Slowdown and Valuable Lessons Learned...


Migrating to the cloud promises a world of benefits: scalability, cost-efficiency, and improved performance. At least, that's what we expected when we decided to move our on-premises SQL Server database workload to Google Cloud Platform (GCP) Cloud SQL for PostgreSQL.

Our database was a relatively small player, clocking in at just around 1GB. We figured the migration would be a breeze, and the performance on the cloud-based Postgres instance would be stellar. Well, that wasn't quite the case.

Hitting a Performance Wall

Everything started smoothly. The migration itself went according to plan, and our application seemed to be functioning normally. However, soon after, we started experiencing significant slowdowns, particularly in one critical area. Simple select queries on a specific table were taking an agonizingly long time – over 6 seconds to be exact! This caused timeouts and a frustrating user experience.

The culprit? A table with a whopping 100+ columns. This table was intentionally denormalized in our original MSSQL implementation to optimize performance for a specific set of queries. While denormalization can be a valid strategy in some MSSQL scenarios, it backfired in Postgres. We initially thought throwing more hardware at the problem would be the answer, so we opted for a larger VM instance for our database, but performance remained sluggish. We even consulted with cloud experts, hoping for a magic-bullet solution. Unfortunately, the focus kept coming back to VM size, which didn't seem like the right long-term approach.
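In hindsight, the first useful step was to ask Postgres itself where the time was going. Here's a minimal sketch of that kind of check; the table and column names (orders_flat, status) are hypothetical stand-ins for our wide table:

    -- Hypothetical stand-in for our 100+ column table.
    -- EXPLAIN (ANALYZE, BUFFERS) reports the chosen plan, actual timings,
    -- and buffer I/O, which makes a sequential scan over very wide rows obvious.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT order_id, status
    FROM orders_flat
    WHERE status = 'PENDING';

On a wide, unindexed table, a query like this typically shows a Seq Scan that reads every page of the table, no matter how few columns the select actually returns.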

The Plot Twist: Schema Design and Postgres

In the end, the solution wasn't about brute VM force, but rather a deeper understanding of Postgres itself. Here's what we learned:

  • Schema Design Matters: A denormalized design that MSSQL tolerated well didn't carry over; Postgres rewards well-structured data models. Our table with 100+ columns was a prime example of poor schema design. By normalizing it and trimming unnecessary columns, we significantly improved query performance (a sketch of this split follows this list).
  • The Power of Indexing: Proper indexing is crucial for efficient query execution in Postgres. While indexing strategies differ somewhat from MSSQL, well-placed indexes can dramatically speed up retrieval of specific data sets (the sketch below adds one).
  • VM Size as a Band-Aid: Increasing the VM size might have bought some temporary relief, but it's not a sustainable fix. Query optimization and database tuning are what yield long-term performance gains.
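To make the first two points concrete, here's a hedged sketch of the direction we took; the table, column, and index names (orders, order_details, idx_orders_status) are illustrative, not our actual schema:

    -- Split the wide table into a narrow "hot" table that serves the
    -- frequent queries, plus a detail table for the rarely-read columns.
    CREATE TABLE orders (
        order_id    bigint PRIMARY KEY,
        customer_id bigint NOT NULL,
        status      text NOT NULL,
        created_at  timestamptz NOT NULL DEFAULT now()
    );

    CREATE TABLE order_details (
        order_id    bigint PRIMARY KEY REFERENCES orders (order_id),
        -- ...the remaining, rarely-queried columns live here...
        notes       text,
        raw_payload jsonb
    );

    -- Index the column the slow selects filter on, so those lookups no
    -- longer scan the whole table.
    CREATE INDEX idx_orders_status ON orders (status);

With the hot columns in a narrow table and a targeted index in place, the same selects go from sequential scans over wide rows to cheap index lookups.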

Migrating to the cloud can be a game-changer, but it's important to understand the adjustments your workload may need. By embracing the strengths of Postgres, like its preference for well-structured data and strategic indexing, we were able to overcome the initial performance hurdles.

We're happy to be on GCP now, enjoying the benefits of scalability and cost-effectiveness. This experience has also underscored the importance of schema design and proper database management practices, regardless of the platform.

