SpeedCurve synthetic monitoring and real user monitoring (RUM) let you track dozens of default performance-related metrics, as well as any custom metrics you choose to create. Here are definitions for the most common metrics our users care about, particularly those metrics that correlate to measuring the user experience.
SpeedCurve Synthetic is built on top of the leading open-source web performance testing framework WebPageTest, so our synthetic metrics are the same as what you're used to seeing in WebPageTest. In addition to those, SpeedCurve Synthetic also tracks unique metrics, such as Hero Rendering Times.
SpeedCurve LUX (RUM) tracks the same real user monitoring (RUM) metrics you may already be familiar with, as well as unique metrics like Interaction Times.
Backend (Synthetic and LUX)
Sometimes also called "First Byte". The time from the start of the initial navigation until the first byte of the base HTML page is received by the browser (after following redirects). This typically includes the bulk of backend processing: database lookups, remote web service calls, stitching together HTML, etc. This is a good approximation of the time it takes your server to generate the HTML on the server and then deliver it over the network to the user's browser.
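The Backend time described above can be approximated in the browser from the Navigation Timing API. The sketch below is illustrative only (it is not SpeedCurve's actual implementation); the field names follow the W3C Navigation Timing Level 2 spec, and the mock entry is made up for demonstration.

```javascript
// Sketch: deriving a Backend/First Byte time from a Navigation Timing
// Level 2 entry. Illustrative only, not SpeedCurve's implementation.
function backendTime(navEntry) {
  // responseStart is when the first byte of the response arrived;
  // startTime is 0 for the navigation entry, so the difference is
  // effectively time-to-first-byte, redirects included.
  return navEntry.responseStart - navEntry.startTime;
}

// In a browser you would feed it the real entry:
//   const [nav] = performance.getEntriesByType("navigation");
//   console.log(backendTime(nav));

// Mock entry for demonstration (times in milliseconds):
const mockNav = { startTime: 0, responseStart: 312.5 };
console.log(backendTime(mockNav)); // → 312.5
```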
Customer data (LUX)
Custom metrics (Synthetic and LUX)
Custom metrics allow you to measure the performance of specific page elements that you've identified as essential to the user experience for your own pages. The W3C User Timing spec provides an API for developers to add custom metrics to their web apps. This is done via two main functions:
- performance.mark records the time (in milliseconds) since navigationStart
- performance.measure records the delta between two marks
There are other User Timing APIs, but mark and measure are the main functions. SpeedCurve supports custom metrics, so once you've added marks and measures on your pages, you can collect data with both SpeedCurve LUX (RUM) and Synthetic.
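A minimal sketch of the two User Timing calls described above. The mark and measure names ("cart-start", "cart-ready", "cart-render") are hypothetical examples, not names SpeedCurve requires.

```javascript
// Record a mark at the start of the work you want to measure.
performance.mark("cart-start");

// ... application work happens here ...
for (let i = 0; i < 1e6; i++) {} // stand-in for real work

// Record a mark when the work is done.
performance.mark("cart-ready");

// Record the delta between the two marks as a named measure.
performance.measure("cart-render", "cart-start", "cart-ready");

const [measure] = performance.getEntriesByName("cart-render");
console.log(measure.duration); // elapsed milliseconds between the marks
```

Both SpeedCurve LUX and Synthetic will pick up marks and measures like these automatically once they are present on your pages.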
DOM Content Loaded (Synthetic and LUX)
The DOM Content Loaded time is measured as the time from the start of the initial navigation until the end of the DOMContentLoaded event.
First Meaningful Paint (Synthetic)
First Meaningful Paint (FMP) is "the paint after which the biggest above-the-fold layout change has happened, and web fonts have loaded." Chrome exposes this measurement as a "blink.user_timing" trace event with a name of "firstMeaningfulPaint". (This definition comes from Kunihiko Sakamoto's Time to First Meaningful Paint spec.)
Fully Loaded (Synthetic)
Hero Rendering Times (Synthetic)
Hero Rendering Times are a set of synthetic metrics that are unique to SpeedCurve. They measure when a page's most important content finishes rendering in the browser.
Largest Image Render identifies when the largest image in the viewport finishes rendering. This metric is especially relevant to retail sites, where images on home, product, and campaign landing pages are critical elements.
Largest Background Image Render is for those pages where the background image is just as important as – or more important than – the largest image. We created this metric to ensure that you're not missing out.
H1 Render measures when your first H1 element finishes rendering. This metric is especially useful to media and informational sites. Because the H1 tag is usually wrapped around header copy, there's a reasonable assumption that this is copy you want your users to see quickly.
First Painted Hero and Last Painted Hero are synthetic metrics that show you when the first and last pieces of critical content are painted in the browser. These are composite metrics for the Hero Rendering Times:
First Painted Hero = min(h1, (biggest_img || bg_img))
Last Painted Hero = max(h1, (biggest_img || bg_img))
These composite metrics are computed by taking the minimum (First Painted Hero) and the maximum (Last Painted Hero) of the largest text render time ("h1") and the biggest IMG render time (or the biggest background image time, if there is no biggest IMG).
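The composite computation can be sketched as a small function. Times are in milliseconds; a `null` input stands for an element that doesn't exist on the page. This is an illustration of the formula, not SpeedCurve's actual code.

```javascript
// Sketch of the First/Last Painted Hero composite. The biggest
// background image time is used only when there is no biggest IMG.
function heroTimes({ h1, biggestImg, bgImg }) {
  const img = biggestImg ?? bgImg; // fall back to background image
  const candidates = [h1, img].filter((t) => t != null);
  return {
    firstPaintedHero: Math.min(...candidates),
    lastPaintedHero: Math.max(...candidates),
  };
}

console.log(heroTimes({ h1: 1200, biggestImg: 1800, bgImg: null }));
// → { firstPaintedHero: 1200, lastPaintedHero: 1800 }
```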
Hero Element Timing – which is based on the Hero Element Timing API – lets you select and annotate specific hero elements on your pages, such as search boxes, image carousels, and blocks of text. Right now, if you're a SpeedCurve user, you can follow the instructions in the API spec to annotate your pages and see the resulting metrics in SpeedCurve. (As a bonus, when browsers inevitably catch up and adopt the spec, you'll be ahead of the game.)
Interaction (IX) Metrics
For many websites, how quickly users engage with the page is an important user experience metric. If these metrics increase, it could be a sign that the page is rendering more slowly, or that a design change has made the site harder for users to engage with. This category captures metrics about how, and how quickly, users engage with the page.
- First IX Type - The first type of interaction the user had with the page: scroll, click, or keypress.
- First IX Time - The time at which the first interaction occurred (relative to navigationStart).
- Element ID clicked - The ID or data-sctrack attribute of the DOM element that was clicked or received a keypress. See the LUX data-sctrack API for more information.
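The bookkeeping behind first-interaction metrics boils down to remembering only the earliest event. The sketch below is a hedged illustration of that idea (not LUX's actual code); the tracker is kept as a plain object so the logic is easy to follow, with the browser wiring shown in comments.

```javascript
// Sketch of first-interaction bookkeeping. In a browser you would call
// record() from "scroll", "click", and "keypress" listeners.
function createFirstInteractionTracker() {
  let first = null;
  return {
    // type: "scroll" | "click" | "keypress"; time: ms since navigationStart
    record(type, time) {
      if (first === null) first = { type, time }; // keep only the earliest
    },
    get() {
      return first;
    },
  };
}

const ix = createFirstInteractionTracker();
ix.record("scroll", 850);
ix.record("click", 1400); // ignored: not the first interaction
console.log(ix.get()); // → { type: 'scroll', time: 850 }

// Browser wiring (illustrative):
//   ["scroll", "click", "keypress"].forEach((type) =>
//     addEventListener(type, () => ix.record(type, performance.now()),
//       { passive: true }));
```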
Page Load (Synthetic and LUX)
The Page Load time is measured as the time from the start of the initial navigation until the beginning of the window load event (onload). While Page Load can be a useful metric, it can also be deceiving: depending on how the page is constructed, it doesn't always represent when content is rendered to the screen and the user can interact with the page. Unfortunately, many organizations and other monitoring tools still default to reporting Page Load as an important performance metric. It is in no way a good measure of the user's experience, and it's something the industry needs to move on from.
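Both Page Load and DOM Content Loaded can be read from a Navigation Timing entry in the browser. The sketch below is illustrative, not SpeedCurve's implementation; the field names follow the W3C Navigation Timing Level 2 spec, and the mock entry is made up for demonstration.

```javascript
// Sketch: reading Page Load and DOM Content Loaded from a Navigation
// Timing Level 2 entry. Page Load is measured to the *beginning* of the
// load event, hence loadEventStart.
function loadTimes(navEntry) {
  return {
    domContentLoaded: navEntry.domContentLoadedEventEnd - navEntry.startTime,
    pageLoad: navEntry.loadEventStart - navEntry.startTime,
  };
}

// Browser usage:
//   const [nav] = performance.getEntriesByType("navigation");
//   console.log(loadTimes(nav));

// Mock entry for demonstration (times in milliseconds):
const mockEntry = { startTime: 0, domContentLoadedEventEnd: 1450, loadEventStart: 2900 };
console.log(loadTimes(mockEntry));
// → { domContentLoaded: 1450, pageLoad: 2900 }
```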
Google PageSpeed (Synthetic)
The Google PageSpeed score is a number out of 100 based on a set of best practice coding rules developed and updated by Google. A score over 85 indicates a page that is performing well.
Speed Index (Synthetic)
The Speed Index is the average time at which visible parts of the page are displayed, and it is dependent on the size of the viewport. It represents how quickly the page rendered the user-visible content (lower is better). Speed Index is often the metric we show by default, as it best represents the user's experience as the page renders over time, from starting completely blank to the viewport being visually complete.
In WebPageTest, the Speed Index is expressed in milliseconds. In SpeedCurve, we convert it to seconds, to make it consistent with your other metrics.
You can find out more about how WebPageTest calculates Speed Index below.
Start Render (Synthetic and LUX)
The Start Render time is measured as the time from the start of the initial navigation until the first non-white content is painted to the browser display. Any CSS and blocking JS you have on the page has to be downloaded and parsed by the browser before it can render anything to screen. This is called the critical rendering path and the Start Render metric is very important in understanding how long users have to wait before anything is displayed on screen.
Time to First Interactive (Synthetic)
TTFI measures "when the page is first expected to be usable and will respond to input quickly (with the possibility of slow responses as more content loads)." This definition comes from Pat Meenan's Time to Interactive spec.
Visually Complete (Synthetic)
Visually Complete is the time at which all the content in the viewport has finished rendering and nothing in the viewport changes after that point as the page continues loading. It's a great measure of the user experience, as the user should now see a full screen of content and be able to engage with your site.
CPU metrics (Synthetic)
The various CPU metrics show how long the browser's main thread spent computing and rendering the page. The CPU time is split into the categories Loading, Scripting, Layout, and Painting. We also segment the CPU time by a few of the key browser events: Start Render, Page Load, and Fully Loaded.
If you just want to know the overall time that a page spent on a CPU category like "Layout" then use the "Fully Loaded" metrics.
Google Lighthouse Scores & Audits
For any tests run in Chrome, we also run a Google Lighthouse audit, which checks your page against rules for Performance, PWA, Best Practices, and SEO. For each category you get a score out of 100 and recommendations on what to fix, which you can find on each test dashboard and aggregated on the Improve dashboard.
The Lighthouse audit is always run using a Fast 3G network speed so that the scores are consistent.
- Brief history of performance metrics. (Includes a side-by-side analysis of current rendering metrics)
- Why are the SpeedCurve results different from my other test results?
- What data is collected by LUX for RUM?