
Analyze a site’s server performance

Analyzing a site’s performance, and making any necessary adjustments to improve it, helps ensure that the site remains stable during unexpected traffic spikes. It can also improve a site’s performance during normal traffic patterns and increase resiliency during expected high-traffic events.

The most critical metrics for site resilience and performance are cache hit rate and the speed of page generation for origin requests (i.e. uncached requests).

Professional Service Upgrade

Customers who add WordPress VIP Performance Service to their support package can work with a team of VIP’s skilled engineers to address needs and goals related to site performance, conduct performance testing, and identify opportunities to improve the performance of their WordPress VIP hosted website.

A site’s performance should be analyzed as soon as possible, and the analysis repeated on a regular cadence. Performance issues can happen at any time, not only during a high-traffic event.

It is common for development teams to focus on delivering new features, only to be diverted into last-minute performance optimizations as the deadline for a high-traffic event approaches. Do not wait until just before a major event to optimize a site’s performance; doing so can result in compromises and unsatisfactory outcomes.

Performance issues can be hidden

Many sites operate well under normal traffic but performance issues can occur due to a minor change. For example, as the number of posts, comments, taxonomies, and taxonomy terms increases over time, database queries can approach a threshold where they may need to use physical disk space to sort or scan. In that scenario, a single bot crawling a number of uncached pages—or a slight increase in traffic—can easily push a site into more obvious poor performance. The poor performance existed the entire time, but it was less visible.

Optimize page cache hit rate

VIP’s edge cache servers increase the speed of responses for commonly requested URLs. The edge cache protects and buffers a site’s origin servers from increased load (e.g. traffic spikes) with a full-page cache.

The page cache hit rate for an environment can be reviewed in the Insights & Metrics panel of the VIP Dashboard.

The cache “hit rate” is the percentage of requests served by the edge cache, as opposed to requests that bypass the cache and must be rendered on the origin server. A large volume of uncached requests can overload available resources and lead to an increase in responses with a 503 HTTP status code.

If requests to a site have a high cache hit rate, the site’s backend resources (web containers, database, and object cache) will be less busy and more available to handle sudden increases in traffic to the site. This can include traffic to the front end of a site as well as a large increase in content creation occurring on the backend by multiple editors.

Cache hit rate can be increased by avoiding additional query parameters, user cookies, and non-GET requests that bypass the cache.

Cache-control headers sent by an application control how long a page or response is retained in cache. The routes for site endpoints that rarely change should be set to a higher Time To Live (or max-age), which can also improve the cache hit rate.
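As a sketch of extending the TTL for a rarely changing route, a Cache-Control header can be set from application code via WordPress’s `wp_headers` filter. The one-day `max-age` value and the `about` page used here are illustrative only:

```php
<?php
// Sketch: raise the edge cache TTL for a route that rarely changes.
// The page slug and max-age value below are example assumptions.
add_filter( 'wp_headers', function ( $headers ) {
	if ( is_page( 'about' ) ) {
		// Keep this response in the page cache for up to one day.
		$headers['Cache-Control'] = 'max-age=86400';
	}
	return $headers;
} );
```

Conditional tags such as `is_page()` are available at this point because `wp_headers` runs after the main query has been parsed.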

Optimize page generation time

“Page generation time” is the amount of time required for a site’s server resources to receive, process, and respond to a particular request. Users of a site appreciate fast generation times, and fast page generation times are critical for a site’s resilience.

The duration of time it takes for the origin server to generate a response and send it back to the page cache can be reviewed in the “HTTP Origin Response Time” graph in the Insights & Metrics panel of the VIP Dashboard.

Continually monitoring the Apdex score in New Relic provides a baseline performance indicator. Request logs are helpful for identifying the slowest requests and determining average page generation times.

Requests that bypass the edge cache must be served by the origin servers (an application’s code on a web container). These responses should be generated as quickly as possible. Even with autoscaling, resources are not infinite: the higher the demand for resources, the less resilient the site is to variations in traffic load.

For uncached requests, page generation time can be improved by several contributing factors, including:

  • Optimized database queries
  • Effective use of the object cache
  • Removal or caching of remote requests
  • Running the latest versions of WordPress and PHP

All of these refinements are important, and they work in tandem with each other.

Optimize database queries

Optimizing core queries at scale is even more important as a site’s posts table in the database grows larger. A large number of posts can easily result in standard core database queries needing to use the filesystem to sort results even in cases where only a few posts are being returned. When the database frequently uses the filesystem, other fast queries will be delayed and overall performance will be reduced. Most queries can be optimized to increase their speed.
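A minimal sketch of a tuned query, using standard `WP_Query` arguments that skip work a listing often does not need (the post type and page size shown are illustrative):

```php
<?php
// Sketch: a WP_Query tuned to avoid common sources of slow core queries.
$query = new WP_Query( array(
	'post_type'              => 'post',
	'posts_per_page'         => 10,    // always bound the result set
	'no_found_rows'          => true,  // skip the total-row count when pagination totals are unneeded
	'update_post_meta_cache' => false, // skip priming the meta cache if post meta is unused
	'update_post_term_cache' => false, // skip priming the term cache if terms are unused
	'fields'                 => 'ids', // fetch only IDs when full post objects are unnecessary
) );
```

Each flag removes a query or cache-priming step that would otherwise run on every request, which matters most as the posts table grows.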

Leverage the object cache

It is best not to completely rely on the object cache to eliminate all issues associated with a slower query or function. The object cache values will inevitably need to be replaced, and cache stampedes combined with slow response times can cause disruptions. Instead, optimize the underlying slow queries and functions as much as possible to reduce the time window of a potential cache stampede (i.e. replace the cache quickly). If necessary, also implement logic to reduce the chance of a stampede by serving stale data during the update period.
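One possible shape for that stale-serving logic is sketched below, assuming the persistent object cache API (`wp_cache_*`). The `get_expensive_data()` function, the key names, and the TTLs are all hypothetical:

```php
<?php
// Sketch: reduce stampede risk by letting one request regenerate the value
// while concurrent requests are served a longer-lived stale copy.
function get_cached_report() {
	$value = wp_cache_get( 'report', 'my_group' );
	if ( false !== $value ) {
		return $value; // fresh copy still in cache
	}
	// wp_cache_add() only succeeds for the first caller, acting as a short lock.
	if ( wp_cache_add( 'report_lock', 1, 'my_group', 30 ) ) {
		$value = get_expensive_data(); // the slow query or function (hypothetical)
		wp_cache_set( 'report', $value, 'my_group', 300 );        // fresh copy, 5 min
		wp_cache_set( 'report_stale', $value, 'my_group', 3600 ); // stale fallback, 1 hr
		wp_cache_delete( 'report_lock', 'my_group' );
		return $value;
	}
	// Another request holds the lock: serve stale data instead of recomputing.
	$stale = wp_cache_get( 'report_stale', 'my_group' );
	return ( false !== $stale ) ? $stale : get_expensive_data();
}
```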

Remove remote requests

Remote requests on the frontend of a site (e.g. requests to an external API) can contribute to longer page generation times. This can be mitigated by adding extra caching for the requests, offloading the API requests to a cron job, or removing the requests entirely. All of these approaches help prevent a site’s performance from being negatively affected by the performance of the external resource.
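The caching approach can be sketched with a transient wrapping `wp_remote_get()`. The endpoint URL, the 3-second timeout, and the 15-minute TTL are illustrative assumptions:

```php
<?php
// Sketch: cache an external API response so page generation rarely waits on it.
function get_external_rates() {
	$cached = get_transient( 'external_rates' );
	if ( false !== $cached ) {
		return $cached; // serve the cached copy without a remote request
	}
	$response = wp_remote_get( 'https://api.example.com/rates', array( 'timeout' => 3 ) );
	if ( is_wp_error( $response ) ) {
		return array(); // fail fast rather than block page generation
	}
	$data = json_decode( wp_remote_retrieve_body( $response ), true );
	set_transient( 'external_rates', $data, 15 * MINUTE_IN_SECONDS );
	return $data;
}
```

Moving the fetch into a scheduled cron job that refreshes the transient would remove the remote request from the page-generation path entirely.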

Maintain WordPress and PHP at their latest versions

The versions of WordPress and PHP that are running on a site should always be the latest release version. WordPress core and PHP are actively maintained and newer versions provide added features and performance enhancements. By updating to the latest release versions, a site automatically benefits from the collective effort of the hundreds of experts who contribute to WordPress core and PHP.

Last updated: May 15, 2024

Relevant to

  • Node.js
  • WordPress