SpeedCurve synthetic monitoring and real user monitoring (RUM) let you track dozens of performance metrics, including any custom metrics you choose to create. Here are definitions for the most common metrics our users care about, particularly those that correlate to measuring the user experience.
The percentage of page views where the user leaves the page before it finishes loading (i.e., before the onload event). Abandonment rate is often an indicator that the user thinks the page is too slow and left in frustration.
These metrics count the number of stylesheets and scripts that are loaded asynchronously. Refer to Blocking CSS|JS Requests for more information.
The DOM is a tree structure. A DOM element's depth is how far it is from the root of the tree, i.e., the number of ancestors. This is the average depth across all DOM elements. It gives an indication of the complexity of the DOM tree which can impact performance, especially for CSS.
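As a sketch of how an average depth could be computed, here's a walk over a generic tree where each node's depth is its number of ancestors (the node shape here is illustrative, not an actual DOM API):

```javascript
// Sketch: average depth of all nodes in a tree, where the root's depth
// is 0 (it has no ancestors). The { children: [] } node shape is
// illustrative only.
function averageDepth(root) {
  let total = 0;
  let count = 0;
  const stack = [[root, 0]];
  while (stack.length > 0) {
    const [node, depth] = stack.pop();
    total += depth;
    count += 1;
    for (const child of node.children || []) {
      stack.push([child, depth + 1]);
    }
  }
  return total / count;
}

// Example shaped like html > (head, body > div):
const tree = { children: [{ children: [] }, { children: [{ children: [] }] }] };
console.log(averageDepth(tree)); // depths 0, 1, 1, 2 -> average 1
```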
Stylesheets and scripts have the biggest impact on rendering. It's possible to load them asynchronously, but the majority of scripts and stylesheets today are loaded synchronously, which blocks rendering. Synchronous stylesheets block the entire page, no matter where they occur in the page, but synchronous scripts only block DOM elements that occur after the SCRIPT tag. Therefore, if a script is loaded synchronously but occurs after the last visible DOM element, it is not counted as "blocking" (since it doesn't block rendering of anything visible). Learn more.
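As a rough illustration, the loading behaviors described above look like this in markup (filenames are placeholders):

```html
<!-- Synchronous script: blocks rendering of everything after it -->
<script src="app.js"></script>

<!-- defer: doesn't block parsing; runs after the document is parsed -->
<script src="app.js" defer></script>

<!-- async: doesn't block parsing; runs as soon as it downloads -->
<script src="app.js" async></script>

<!-- Synchronous stylesheet: blocks rendering of the entire page -->
<link rel="stylesheet" href="site.css">
```

One common pattern for loading CSS asynchronously is to preload it and swap the `rel` once it arrives, e.g. `<link rel="preload" as="style" href="site.css" onload="this.rel='stylesheet'">`, though this trades blocking for a possible flash of unstyled content.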
Bounce rate is the percentage of Bounced Sessions out of total Sessions.
A bounced session is a session with only one page view. See Sessions for more information.
Sometimes also called "Time to First Byte" or "TTFB". The time from the start of the initial navigation until the first byte of the base HTML page is received by the browser (after following redirects). In RUM this is responseStart from Navigation Timing specification. This typically includes the bulk of backend processing: database lookups, remote web service calls, stitching together HTML, etc. This is a good approximation of the time it takes your server to generate the HTML on the server and then deliver it over the network to the user's browser.
Segmenting real users by connection type is available in RUM. This data comes from the Network Information API. As of January 2019, this is available from Chrome, Opera and Android. RUM reports the effective connection type and maps the values as follows:
- Slow 2G => Very slow (min. RTT 2000ms; max. downlink 50 Kbps)
- 2G => Slow (min. RTT 1400ms; max. downlink 70 Kbps)
- 3G => Medium (min. RTT 270ms; max. downlink 700 Kbps)
- 4G => Fast (min. RTT 0ms; max. downlink ∞ Kbps)
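A minimal sketch of that mapping, using the `effectiveType` values from the Network Information API (the function name is illustrative):

```javascript
// Sketch: map the Network Information API's effectiveType values to the
// labels in the table above. In the browser you'd call this with
// navigator.connection.effectiveType.
function connectionLabel(effectiveType) {
  const labels = {
    'slow-2g': 'Very slow',
    '2g': 'Slow',
    '3g': 'Medium',
    '4g': 'Fast',
  };
  return labels[effectiveType] || 'Unknown';
}

console.log(connectionLabel('3g')); // "Medium"
```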
The various CPU metrics show how long the browser main thread spent on computing and rendering the page. The CPU time is split into the categories "Loading", "Scripting", "Layout", and "Painting".
We use the Long Tasks API to help you track how CPU affects your users. A "long task" is defined as any task that occupies the browser main thread for more than 50ms. As of January 2019, this is available from Chrome, Opera and Android. In RUM, we show:
- JS Long Tasks - the sum in milliseconds of all the Long Tasks
- JS Longest Task - the longest Long Task from a page
- JS Number of Long Tasks - the number of Long Tasks in a page
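As a sketch, the three metrics above can be derived from Long Task durations like this (the observer wiring is browser-only and shown as a comment; the aggregation itself is pure):

```javascript
// Sketch: aggregate Long Task durations (ms) into the three RUM metrics.
function aggregateLongTasks(durations) {
  return {
    total: durations.reduce((sum, d) => sum + d, 0),        // JS Long Tasks
    longest: durations.length ? Math.max(...durations) : 0, // JS Longest Task
    count: durations.length,                                // JS Number of Long Tasks
  };
}

// In the browser, the durations would come from the Long Tasks API:
//   const durations = [];
//   new PerformanceObserver((list) => {
//     durations.push(...list.getEntries().map((e) => e.duration));
//   }).observe({ type: 'longtask', buffered: true });

console.log(aggregateLongTasks([120, 75, 60]));
// { total: 255, longest: 120, count: 3 }
```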
Cumulative Layout Shift is a score that captures how often a user experiences unexpected layout shifts as the page loads. Often ads that load late in the page load can push important content around while a user is already reading it. Aim for a score less than 0.1. Here's how the score is calculated.
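As a simplified sketch, each `layout-shift` entry reported by the browser carries a score (impact fraction times distance fraction), and the cumulative score sums the entries that weren't triggered by recent user input. (Note that newer Chrome versions group shifts into session windows rather than summing over the whole page lifetime, so treat this as the original cumulative definition.)

```javascript
// Sketch: sum layout-shift scores, ignoring shifts that happen right
// after user input. Each entry's `value` is computed by the browser.
function cumulativeLayoutShift(entries) {
  return entries
    .filter((e) => !e.hadRecentInput)
    .reduce((sum, e) => sum + e.value, 0);
}

// In the browser, entries come from:
//   new PerformanceObserver(...).observe({ type: 'layout-shift', buffered: true });
console.log(cumulativeLayoutShift([
  { value: 0.04, hadRecentInput: false },
  { value: 0.5, hadRecentInput: true },  // user-initiated: ignored
  { value: 0.03, hadRecentInput: false },
])); // ~0.07
```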
Cumulative Layout Shift is one of Google's Core Web Vitals, along with Largest Contentful Paint and First Input Delay.
Custom metrics allow you to measure the performance of specific page elements that you've identified as essential to the user experience for your own pages. The W3C User Timing spec provides an API for developers to add custom metrics to their web apps. This is done via two main functions:
- performance.mark records the time (in milliseconds) since navigationStart
- performance.measure records the delta between two marks
There are other User Timing APIs, but mark and measure are the main functions. SpeedCurve supports custom metrics, so once you've added marks and measures on your pages, you can collect data with both SpeedCurve RUM and Synthetic.
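A minimal example of the two calls (this runs in any browser, and also in Node.js 16+, where `performance` is a global; the mark and measure names are placeholders):

```javascript
// Sketch: record a custom metric with the User Timing API.
performance.mark('hero-start');

// ... the work whose duration you want to capture ...
for (let i = 0; i < 1e6; i++) {} // placeholder work

performance.mark('hero-end');
performance.measure('hero-render', 'hero-start', 'hero-end');

// The measure's duration is the delta between the two marks, in ms.
const [measure] = performance.getEntriesByName('hero-render');
console.log(measure.duration >= 0); // true
```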
Device memory reports device RAM in gigabytes. It is based on the Device Memory specification. As of September 2019 that specification is a draft and is only supported in Chrome and Opera.
The DOM Content Loaded time is measured as the time from the start of the initial navigation until the end of the DOMContentLoaded event.
Element Timing measures when elements are displayed. As of September 2019 it is a draft specification supported in Chrome. See Hero Element Timing.
First Contentful Paint (FCP) is provided by browsers as part of the Paint Timing spec. It's the time at which users can start consuming page content. Specifically, it is "the time when the browser first rendered any text, image (including background images), non-white canvas or SVG." As of January 2019, this is available from Chrome, Opera and Android.
Formerly known as First Interactive. It changed to First CPU Idle in Lighthouse 3.0 to more clearly describe how it works.
First CPU Idle measures when a page is minimally interactive:
- Most, but maybe not all, UI elements on the screen are interactive.
- The page responds, on average, to most user input in a reasonable amount of time.
Specifically, it is the first span of 5 seconds where the browser main thread is never blocked for more than 50ms after First Contentful Paint. As of January 2019, this is available from Chrome, Opera and Android.
First Input Delay is one of Google's Core Web Vitals, along with Largest Contentful Paint and Cumulative Layout Shift.
First Meaningful Paint (FMP) is "the paint after which the biggest above-the-fold layout change has happened, and web fonts have loaded." Chrome exposes this measurement as a "blink.user_timing" trace event with a name of "firstMeaningfulPaint". Note that this is not a W3C spec so is only available in Chrome.
For any synthetic tests run in Chrome we also run a Google Lighthouse audit, which checks your page against rules for Performance, PWA, Best Practices and SEO. For each of the categories you get a score out of 100 and recommendations on what to fix, which you can find on each test dashboard and aggregated together on the Improve dashboard. We run the latest release of Lighthouse.
This is an aggregate metric that captures user happiness. Learn more.
Hero Rendering Times are a set of synthetic metrics that are unique to SpeedCurve. They measure when a page's most important content finishes rendering in the browser.
Largest Image Render identifies when the largest image in the viewport finishes rendering. This metric is especially relevant to retail sites, where images on home, product, and campaign landing pages are critical elements.
Largest Background Image Render is for those pages where the background image is just as – or more – important than the largest image. We created this metric to ensure that you're not missing out.
H1 Render measures when your first H1 element finishes rendering. This metric is especially useful to media and informational sites. Because the H1 tag is usually wrapped around header copy, there's a reasonable assumption that this is copy you want your users to see quickly. If there are no H1 elements, then H2 is used.
First Painted Hero and Last Painted Hero are synthetic metrics that show you when the first and last pieces of critical content are painted in the browser. These are composite metrics for the Hero Rendering Times:
first_painted_hero = min(h1, (biggest_img || bg_img))
last_painted_hero = max(h1, (biggest_img || bg_img))
These composite metrics are computed by taking the minimum and maximum of the largest text time ("h1") and the biggest IMG time (or the biggest background image if the biggest IMG doesn't exist).
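As a sketch, the composites can be computed from the individual hero times like this (function and field names are illustrative, all times in milliseconds):

```javascript
// Sketch: First/Last Painted Hero from individual hero render times.
// Falls back to the background image time when there's no biggest IMG.
function paintedHeroes({ h1, biggestImg, bgImg }) {
  const img = biggestImg != null ? biggestImg : bgImg;
  const times = [h1, img].filter((t) => t != null);
  return {
    firstPaintedHero: Math.min(...times),
    lastPaintedHero: Math.max(...times),
  };
}

console.log(paintedHeroes({ h1: 1200, biggestImg: 2500 }));
// { firstPaintedHero: 1200, lastPaintedHero: 2500 }
```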
Hero Element Timing – which is based on the Hero Element Timing API – lets you select and annotate specific hero elements on your pages, such as search boxes, image carousels, and blocks of text. Right now, if you're a SpeedCurve user, you can follow the instructions in the API spec to annotate your pages, and see the results in your SpeedCurve results. (As a bonus, when browsers inevitably catch up and adopt the spec, you'll be ahead of the game.)
Learn more about how we identify hero rendering metrics, plus how to add element timing.
In RUM, this is the size of the HTML document based on the transferSize property from the Resource Timing spec.
This is the number of images that are in the browser's viewport.
This metric shows you how much smaller your images could be if they were all optimized using best practices. For example, an Image Optimization Reduction Size of 0K would mean that your images are as optimized as possible.
This is the uncompressed size of all inline <style> and <script> tags in the page at the beginning of the window load event (onload). It only includes content inside the tags, i.e. <script src="page.js"></script> does not count towards the size, whereas <script>alert("hello")</script> has a size of 14 bytes.
For many websites, how quickly users engage with the page is an important user experience metric. If these metrics increase it could be a sign that the page is rendering slower, or a design change has made the site more difficult for users. This category captures metrics about how and how quickly users engage with the page.
- First IX Type - The first type of interaction the user had with the page: scroll, click, or keypress.
- First IX Time - When the first interaction occurred (relative to navigationStart). You'll see this reported as: Click Interaction Time (when the user first clicked anywhere on the page), Key Interaction Time (when the user first pressed any key on their keyboard), Scroll Interaction Time (when the user first scrolled), or just Interaction Time (any of the previous interactions).
- Element ID clicked - The ID or data-sctrack attribute of the DOM element that was clicked or keypressed. See the RUM data-sctrack API for more information.
- JS Errors per 1K Pages - The total number of JS errors may fluctuate based on daily traffic patterns. JS Errors per 1K Pages is a more stable metric based on the number of errors as a percentage of page views.
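The normalization is straightforward; as a sketch (the function name is illustrative):

```javascript
// Sketch: normalize a raw JS error count by traffic, so the metric is
// stable across daily page-view fluctuations.
function errorsPer1kPages(errorCount, pageViews) {
  return (errorCount * 1000) / pageViews;
}

console.log(errorsPer1kPages(250, 50000)); // 5
```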
Largest Contentful Paint is provided by browsers via the Largest Contentful Paint API. It's the time at which the largest element in the viewport is rendered. It's only tracked on certain elements, e.g., IMG and VIDEO (see more here). As of August 2019, this is available in Chrome 77+.
Largest Contentful Paint is one of Google's Core Web Vitals, along with Cumulative Layout Shift and First Input Delay.
Navigation type indicates how the user navigated to the page, e.g., "normal" and "reload". It's based on Navigation Timing Level 2.
The Page Load time is measured as the time from the start of the initial navigation until the beginning of the window load event (onload). For SPAs using RUM, it's the time between LUX.init and LUX.send. While Page Load can be a useful metric, it can also be deceiving: depending on how the page is constructed, it doesn't always represent when content is rendered to the screen and the user can interact with the page. Unfortunately, many organizations and other monitoring tools still default to reporting Page Load as an important performance metric. It's in no way a good measure of the user's experience, and something the industry needs to move on from.
In RUM, metrics are based on a "page view". In normal pages, the RUM data is sent as part of the page onload event. For Single Page Apps, a "page view" is defined by the time between LUX.init and LUX.send. Learn more about RUM and SPAs.
This is the time it takes from when the main HTML document is requested to when the first byte is returned (i.e., responseStart - requestStart from Navigation Timing). This is different from backend time as it does not include redirects, DNS lookups, etc.
Session length is the number of page views in a session.
Page views are grouped into sessions where all the page views are from the same browser and there is no more than 30 minutes between page views. In other words, if a user is visiting your website and stops for an hour lunch, when they return their page views will be grouped into a new session. In addition, sessions are limited to a 24 hour maximum, so after 24 hours a new session ID is created.
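As a sketch, the grouping rules above (30-minute inactivity gap, 24-hour maximum) could be applied to a list of page-view timestamps like this:

```javascript
// Sketch: group one browser's page-view timestamps (ms) into sessions.
// A gap of more than 30 minutes, or a session older than 24 hours,
// starts a new session.
const GAP_MS = 30 * 60 * 1000;
const MAX_SESSION_MS = 24 * 60 * 60 * 1000;

function sessionize(timestamps) {
  const sessions = [];
  let current = null;
  for (const t of [...timestamps].sort((a, b) => a - b)) {
    const newSession = current === null ||
      t - current[current.length - 1] > GAP_MS || // inactivity gap
      t - current[0] > MAX_SESSION_MS;            // 24-hour cap
    if (newSession) {
      current = [t];
      sessions.push(current);
    } else {
      current.push(t);
    }
  }
  return sessions;
}

const MINUTE = 60 * 1000;
// Two views 10 minutes apart, then one more after an hour-long lunch:
console.log(sessionize([0, 10 * MINUTE, 70 * MINUTE]).length); // 2
```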
Speed Index is the average time at which visible parts of the page are displayed. It's dependent on the size of the viewport. It represents how quickly the page rendered the user-visible content (lower is better). Speed Index is often the metric we show by default, as it best represents the user's experience as the page rendered over time, from starting completely blank to the viewport being visually complete.
Learn more about how Speed Index is measured and why it doesn't work for all pages.
The Start Render time is measured as the time from the start of the initial navigation until the first non-white content is painted to the browser display. Any CSS and blocking JS you have on the page has to be downloaded and parsed by the browser before it can render anything to screen. This is called the critical rendering path and the Start Render metric is very important in understanding how long users have to wait before anything is displayed on screen.
In RUM, Start Render is the value of First Paint from the Paint Timing spec. As of January 2019, this is not available in Safari and Mobile Safari. In synthetic, it's based on analyzing the tenth-of-a-second screenshots to see when rendering begins.
This metric has been renamed First CPU Idle in order to be consistent with Lighthouse 3.0. (Scroll up for definition.)
Formerly known as Time to Consistently Interactive. Renamed to be consistent with Lighthouse 3.0.
Time to Interactive (TTI) measures how long it takes a page to become interactive. "Interactive" is defined as the point where:
- The page has displayed useful content, which is measured with First Contentful Paint.
- Event handlers are registered for most visible page elements.
- The page responds to user interactions within 50 milliseconds.
Specifically, it is the first span of 5 seconds where the browser main thread is never blocked for more than 50ms after First Contentful Paint with no more than 2 in-flight requests. In other words, it's the same as First CPU Idle but it also checks the number of in-flight network requests.
Total Blocking Time (TBT) is one of Google's recommended Core Web Vitals metrics for lab/synthetic testing. While it's great that Web Vitals includes a CPU metric, there are a couple of caveats you should be aware of if you're tracking Total Blocking Time (you can read more about those caveats here):
- Total Blocking Time only tracks the long tasks between First Contentful Paint (FCP) and Time To Interactive (TTI). For this reason, the name "Total Blocking Time" is a bit misleading. Any long tasks that happen before FCP and block the page from rendering are NOT included in Total Blocking Time. TBT is not in fact the "total" blocking time for the page. It's better to think of it as "blocking time after start render".
- TBT doesn't include the first 50ms of each long task. Instead, it reports just the time spent over the first 50ms. A user still had to wait for that first 50ms. It would be easier to interpret TBT if it included the first 50ms and better represented the full length of time a user was blocked by a long task.
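A simplified sketch of the computation implied by the two caveats above (field names are illustrative, all times in milliseconds):

```javascript
// Sketch: Total Blocking Time from Long Task entries. Only tasks within
// the FCP-to-TTI window count, and only the portion of each task beyond
// its first 50ms contributes.
function totalBlockingTime(tasks, fcp, tti) {
  return tasks
    .filter((t) => t.start >= fcp && t.start + t.duration <= tti)
    .reduce((sum, t) => sum + Math.max(0, t.duration - 50), 0);
}

console.log(totalBlockingTime(
  [
    { start: 100, duration: 200 },  // before FCP: not counted
    { start: 1200, duration: 120 }, // contributes 120 - 50 = 70ms
    { start: 2000, duration: 40 },  // under 50ms: contributes 0
  ],
  1000, // FCP
  5000  // TTI
)); // 70
```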
Visually Complete is the time at which all the content in the viewport has finished rendering and nothing changed in the viewport after that point as the page continued loading. It's a great measure of the user experience, as the user should now see a full screen of content and be able to engage with the content of your site.