At Lincoln Loop, we don’t just build web platforms from scratch and scale them to accommodate growing demand; we also dedicate ourselves to their long-term maintenance. Some of our partnerships have spanned over a decade.
Let me walk you through a performance optimization journey we undertook with a large publishing platform, which serves hundreds of thousands to millions of page views daily.
The Issue at Hand
Over the lifespan of a platform, various infrastructure changes and design tweaks can inadvertently impact the response time. For instance, adding more content to a page or changing the ordering of the results might bog it down. Continuous monitoring is critical because it allows us to spot such regressions quickly.
A live website’s behavior varies significantly from local testing. Production sites deal with multiple cache layers and handle countless concurrent requests. In contrast, during “local” development, you typically turn off these cache layers and test one endpoint at a time.
Performance optimization can also be a bottomless rabbit hole. Thus, when embarking on it, always base your strategy on metrics from your production environment and tackle one modification at a time.
For this optimization, I utilized AWS CloudWatch for response time metrics on the load balancer and some data from Sentry’s performance analysis.
Here is Sentry’s trend graph for the URL endpoint we are optimizing.
The Diagnosis
The first step is to identify the URLs causing the most significant slowdowns, or as we call them, the “offending URL families.”
Displaying a list of URL families in reverse order of “user misery” will give us the starting point.
Our prime suspect was the “/category/{category_slug}/” URL. At its heart, this view simply offers a paginated list of articles.
A closer examination of a Sentry event trace confirmed our suspicions: we were database (DB) bound. This is a common issue for content-rich sites.
Reproducing the issue locally, I wanted to inspect the generated queries. For this, I used the kolo.app VSCode extension, known for its impressive flame graph visualization.
The flame graph spotlighted two problematic queries. A closer look revealed these stemmed from the get_queryset method on the view and from Django’s default paginator, which calls .count() to calculate page numbers.
The general shape of the SQL query
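The SQL itself isn’t reproduced here, but a Django view of roughly the following shape is enough to produce both queries: the paginator’s COUNT subquery and the wide article SELECT ordered by a computed date. This is only an illustrative sketch; the app, model, and field names are hypothetical.

```python
from django.db.models.functions import Coalesce
from django.views.generic import ListView

from articles.models import Article  # hypothetical app and model


class CategoryArticleListView(ListView):
    """Paginated list of articles for a category (illustrative sketch)."""

    template_name = "articles/category.html"
    paginate_by = 20

    def get_queryset(self):
        # Every one of the model's columns is selected, and the ordering
        # relies on a value MySQL must compute per row with COALESCE.
        return (
            Article.objects.filter(category__slug=self.kwargs["category_slug"])
            .annotate(latest_date=Coalesce("updated_at", "published_at"))
            .order_by("-latest_date")
        )
```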
The Solution
Modification 1: Retrieve Only What’s Necessary
I began by trimming the fat. Instead of fetching every field (more than 130), I focused on retrieving only the essential fields:
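The project’s exact field list isn’t shown here, but the change boils down to adding an .only() call (or switching to .values()) so the SELECT names only the columns the template needs. A sketch of the updated get_queryset from the hypothetical view above, with made-up field names:

```python
class CategoryArticleListView(ListView):  # as sketched earlier
    def get_queryset(self):
        return (
            Article.objects.filter(category__slug=self.kwargs["category_slug"])
            .annotate(latest_date=Coalesce("updated_at", "published_at"))
            .order_by("-latest_date")
            # Load only the handful of columns the list template renders;
            # everything else stays deferred.
            .only("id", "title", "slug", "published_at", "lead_image")
        )
```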
This adjustment significantly reduced the .count() time from 607ms to 212ms and the article data query time from 1224ms to 858ms.
Modification 2: Static latest_date Calculation
Our next hurdle was the ordering of results by latest_date, a value computed at query time using MySQL’s COALESCE function.
To avoid this query-time computation, we denormalized latest_date into a stored field called publication_order_date, populated when an article is saved. I will skip over the migration process.
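A minimal sketch of that denormalization, assuming the hypothetical Article model from earlier: the fallback value that COALESCE computed on every query is now stored once at write time.

```python
from django.db import models


class Article(models.Model):
    published_at = models.DateTimeField(null=True, blank=True)
    updated_at = models.DateTimeField(null=True, blank=True)
    # Denormalized "latest date", indexed so ORDER BY can use the index.
    publication_order_date = models.DateTimeField(null=True, db_index=True)

    def save(self, *args, **kwargs):
        # The same fallback logic COALESCE applied at query time,
        # now performed once when the article is saved.
        self.publication_order_date = self.updated_at or self.published_at
        super().save(*args, **kwargs)
```

The view’s ordering then becomes a plain .order_by("-publication_order_date") with no annotation, which is what lets the database skip the per-row computation.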
This change has such a profound impact on the query time that we can no longer see it on the flame graph. The query now takes 9ms instead of 858ms.
The subquery used by the paginator to do the .count() is still fetching more data than it should.
Modification 3: Restrict the count Subquery
Restrict the data fetched by the subquery the paginator uses to count the articles.
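The exact change isn’t reproduced here, but one way to achieve this in Django is a custom paginator that counts over a primary-key-only queryset, so the counting subquery selects a single column instead of every field. A sketch:

```python
from django.core.paginator import Paginator
from django.utils.functional import cached_property


class LeanCountPaginator(Paginator):
    """Count over a pk-only queryset so the COUNT subquery stays narrow."""

    @cached_property
    def count(self):
        # values("pk") limits the columns pulled into the counting subquery.
        return self.object_list.values("pk").count()
```

The view opts in by setting paginator_class = LeanCountPaginator.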
Now that we only retrieve the data we are interested in, the .count() time went from 212ms to 87ms.
Besides the three primary changes detailed earlier, I’ve implemented several other minor modifications. These collectively reduced the total queries on that view from 66 to 47. While each query might have taken only a few milliseconds, the cumulative effect can be significant on a busy database server.
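Those smaller changes aren’t enumerated here, but in Django this kind of query-count reduction typically comes from batching related lookups with select_related and prefetch_related rather than letting templates trigger one query per object. A generic, hypothetical example:

```python
from articles.models import Article  # hypothetical, as above


def category_articles(category_slug):
    return (
        Article.objects.filter(category__slug=category_slug)
        # One JOIN instead of an extra query per article when templates read article.author.
        .select_related("author")
        # Two queries total instead of one query per article for its tags.
        .prefetch_related("tags")
    )
```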
In conclusion, the above illustrates how, with relatively minimal code adjustments, you can make the database work behind a critical view nearly 19 times faster: the queries discussed dropped from roughly 1,831ms to 96ms combined.