Glossary
Glossary of metrics you can track with Synthetic and Real User Monitoring.
SpeedCurve synthetic monitoring and real user monitoring (RUM) let you track dozens of performance metrics, including any custom metrics you choose to create. Here are definitions for the most common metrics our users care about, particularly those that correlate to measuring the user experience. We've also included information about major browser family support (Chrome, Firefox, Safari) as well as support for SPA navigations.
Abandonment Rate (RUM - All Browsers, SPA)
The percentage of page views where the user leaves the page before it finishes loading (i.e., before the onload event). Abandonment rate is often an indicator that the user thinks the page is too slow and left in frustration.
Async CSS|JS Requests (Synthetic and RUM - All Browsers, SPA)
These metrics count the number of stylesheets and scripts that are loaded asynchronously. Refer to Blocking CSS|JS Requests for more information.
Average DOM Depth (Synthetic and RUM - All Browsers, SPA)
The DOM is a tree structure. A DOM element's depth is how far it is from the root of the tree, i.e., the number of ancestors. This is the average depth across all DOM elements. It gives an indication of the complexity of the DOM tree which can impact performance, especially for CSS.
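As a rough illustration, here's a minimal sketch of how an average DOM depth could be computed in the browser. This is not necessarily how SpeedCurve calculates the metric.
// Sketch: approximate the average DOM depth by counting ancestors for every element.
function averageDomDepth() {
  const elements = document.querySelectorAll('*');
  let totalDepth = 0;
  for (const el of elements) {
    let depth = 0;
    for (let node = el.parentElement; node !== null; node = node.parentElement) {
      depth++;
    }
    totalDepth += depth;
  }
  return elements.length ? totalDepth / elements.length : 0;
}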
Backend (Synthetic and RUM - All Browsers, SPA)
Sometimes also called "Time to First Byte" or "TTFB". The time from the start of the initial navigation until the first byte of the base HTML page is received by the browser (after following redirects). In RUM, this is responseStart from the Navigation Timing specification. This typically includes the bulk of backend processing: database lookups, remote web service calls, stitching together HTML, etc. It's a good approximation of the time it takes your server to generate the HTML and then deliver it over the network to the user's browser.
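For illustration, here's a minimal sketch of reading these values from Navigation Timing Level 2 in the browser (lux.js's internal implementation may differ):
// Sketch: backend (TTFB) and frontend time from a Navigation Timing Level 2 entry.
const [nav] = performance.getEntriesByType('navigation');
if (nav) {
  const backend = nav.responseStart;                       // time to first byte, relative to navigation start
  const frontend = nav.loadEventStart - nav.responseStart; // see the Frontend metric below
  console.log({ backend, frontend });
}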
Blocking CSS|JS Requests (Synthetic and RUM - All Browsers, SPA)
Stylesheets and scripts have the biggest impact on rendering. It's possible to load them asynchronously, but the majority of scripts and stylesheets today are loaded synchronously, which blocks rendering. Synchronous stylesheets block the entire page, no matter where they occur in the page, but synchronous scripts only block DOM elements that occur after the SCRIPT tag. Therefore, if a script is loaded synchronously but occurs after the last visible DOM element, it is not counted as "blocking" (since it doesn't block rendering of anything visible). Learn more.
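For illustration, one common pattern is to inject non-critical scripts and stylesheets from JavaScript so they load asynchronously (the URLs below are hypothetical); adding async or defer attributes to SCRIPT tags in your HTML is another option.
// Sketch: load a script asynchronously so it doesn't block rendering.
const script = document.createElement('script');
script.src = '/js/non-critical.js';  // hypothetical URL
script.async = true;
document.head.appendChild(script);

// Sketch: load a stylesheet without blocking rendering, then apply it once loaded.
const link = document.createElement('link');
link.rel = 'stylesheet';
link.href = '/css/non-critical.css'; // hypothetical URL
link.media = 'print';                // not applied to the screen yet, so it doesn't block rendering
link.onload = () => { link.media = 'all'; };
document.head.appendChild(link);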
Bounce Rate (RUM - All Browsers, SPA)
Bounce rate is the percentage of Bounced Sessions out of total Sessions.
Bounced Sessions (RUM - All Browsers, SPA)
A bounced session is a session with only one page view. See Sessions for more information.
Connection Type (RUM - Chrome, SPA)
Segmenting real users by connection type is available in RUM. This data comes from the Network Information API. As of January 2019, this is available from Chrome, Opera and Android. RUM reports the effective connection type and maps the values as follows:
- Slow 2G => Very slow (min. RTT 2000ms; max. downlink 50 Kbps)
- 2G => Slow (min. RTT 1400ms; max. downlink 70 Kbps)
- 3G => Medium (min. RTT 270ms; max. downlink 700 Kbps)
- 4G => Fast (min. RTT 0ms; max. downlink ∞ Kbps)
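For illustration, here's a minimal sketch of reading the effective connection type from the Network Information API and mapping it to the labels above:
// Sketch: map navigator.connection.effectiveType to SpeedCurve's connection labels.
const labels = { 'slow-2g': 'Very slow', '2g': 'Slow', '3g': 'Medium', '4g': 'Fast' };
const conn = navigator.connection;
if (conn && conn.effectiveType) {
  console.log(labels[conn.effectiveType] || conn.effectiveType);
}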
CPU time (Synthetic and RUM - Chrome, SPA)
As more and more sites switch to using large JavaScript frameworks and manipulating the page with JavaScript, the time this code takes to execute and the CPU available to run it can become the performance bottleneck.
The various CPU metrics show how long the browser main thread spent on computing and rendering the page. The CPU time is split into the categories "Loading, Scripting, Layout, Painting."
We use the Long Tasks API to help you track how CPU affects your users. A "long task" is defined as any process that consumes more than 50ms on the browser main thread. As of January 2019, this is available from Chrome, Opera and Android. In RUM, we show:
- JS Long Tasks - the sum in milliseconds of all the Long Tasks
- JS Longest Task - the longest Long Task from a page
- JS Number of Long Tasks - the number of Long Tasks in a page
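For illustration, here's a minimal sketch of collecting these three metrics with a PerformanceObserver (lux.js's internal implementation may differ):
// Sketch: track total, longest, and number of Long Tasks via the Long Tasks API.
let totalLongTaskTime = 0; // "JS Long Tasks"
let longestTask = 0;       // "JS Longest Task"
let longTaskCount = 0;     // "JS Number of Long Tasks"
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    totalLongTaskTime += entry.duration;
    longestTask = Math.max(longestTask, entry.duration);
    longTaskCount++;
  }
}).observe({ type: 'longtask', buffered: true });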
Cumulative Layout Shift (CLS) (Synthetic and RUM - Chrome, SPA)
Cumulative Layout Shift is a score that captures how often a user experiences unexpected layout shifts as the page loads. Often ads that load late in the page load can push important content around while a user is already reading it. Aim for a score less than 0.1. Here's how the score is calculated.
Cumulative Layout Shift is one of Google's Core Web Vitals, along with Largest Contentful Paint and Interaction to Next Paint.
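For illustration, here's a simplified sketch of observing layout shifts with the Layout Instability API. Note that the official CLS definition now groups shifts into session windows and reports the worst window, so a plain running sum like this is only an approximation.
// Simplified sketch: sum layout-shift scores that weren't caused by recent user input.
let cls = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (!entry.hadRecentInput) {
      cls += entry.value;
    }
  }
}).observe({ type: 'layout-shift', buffered: true });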
Custom data (RUM - All Browsers, SPA)
You can collect custom data in addition to the many out of the box metrics SpeedCurve provides. Custom data types include Conversions, Dimensions, Metrics, and Metadata.
You can use the JS RUM API, URL Patterns, Server-Timing headers, Element Timing, and User Timing to gather any data you want – for example, cart size, A/B testing, and conversion information.
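For example, a page could attach custom data to the current page view with the JS RUM API. The sketch below assumes the LUX.addData call from SpeedCurve's API and uses hypothetical names and values:
// Sketch (assumes lux.js is loaded and exposes LUX.addData): attach custom data to this page view.
if (typeof LUX !== 'undefined' && LUX.addData) {
  LUX.addData('cartSize', 3);         // hypothetical custom metric
  LUX.addData('abTestVariant', 'B');  // hypothetical A/B test dimension
}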
Device Memory (RUM - Chrome, SPA)
Device memory reports device RAM in gigabytes. It is based on the Device Memory specification. As of September 2019 that specification is a draft and is only supported in Chrome and Opera.
DOM Content Loaded (Synthetic and RUM - All Browsers)
The DOM Content Loaded time is measured as the time from the start of the initial navigation until the end of the DOMContentLoaded event.
Element Timing (RUM - Chrome)
Element Timing measures when elements are displayed. It is a draft specification supported by Chromium based browsers. See Hero Element Timing also.
First Contentful Paint (Synthetic and RUM - Chrome)
First Contentful Paint (FCP) is provided by browsers as part of the Paint Timing spec. It's the time at which users can start consuming page content. Specifically, it is "the time when the browser first rendered any text, image (including background images), non-white canvas or SVG." As of January 2019, this is available from Chrome, Opera and Android.
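For illustration, here's a minimal sketch of reading FCP from the Paint Timing API:
// Sketch: observe paint entries and log First Contentful Paint.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.name === 'first-contentful-paint') {
      console.log('FCP:', entry.startTime);
    }
  }
}).observe({ type: 'paint', buffered: true });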
First CPU Idle (Synthetic and RUM - Chrome, SPA)
Formerly known as First Interactive. It was renamed First CPU Idle in Lighthouse 3.0 to more clearly describe how it works.
First CPU Idle measures when a page is minimally interactive:
- Most, but maybe not all, UI elements on the screen are interactive.
- The page responds, on average, to most user input in a reasonable amount of time.
Specifically, it is the first span of 5 seconds where the browser main thread is never blocked for more than 50ms after First Contentful Paint. As of January 2019, this is available from Chrome, Opera and Android.
First Input Delay (RUM - All Browsers, SPA)
First Input Delay (FID) was developed by Google to capture how quickly websites respond to user interaction. It's fairly simple to implement: We add event handlers for click, mousedown, keydown, pointerdown, and touchstart. When the user first interacts with the page in one of those ways, we measure the time between when the event happened and when the event handler was actually called. That delta is FID. By combining FID, Long Tasks, and interaction metrics, you can get insight into how JavaScript on your page hogs the CPU and affects the user experience.
First Input Delay was previously one of Google's Core Web Vitals, replaced with Interaction to Next Paint in March 2024.
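For illustration, here's a simplified sketch of the approach described above (not lux.js's actual code); the standardized alternative is a PerformanceObserver with type 'first-input':
// Simplified sketch: measure the delay between the first user input and its handler running.
const inputEvents = ['click', 'mousedown', 'keydown', 'pointerdown', 'touchstart'];
function onFirstInput(event) {
  const fid = performance.now() - event.timeStamp; // handler start time minus the input's timestamp
  inputEvents.forEach((type) => removeEventListener(type, onFirstInput, true));
  console.log('FID (approx):', fid);
}
inputEvents.forEach((type) => addEventListener(type, onFirstInput, true));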
First Meaningful Paint (Synthetic - Chrome)
First Meaningful Paint (FMP) is "the paint after which the biggest above-the-fold layout change has happened, and web fonts have loaded." Chrome exposes this measurement as a "blink.user_timing" trace event with a name of "firstMeaningfulPaint". Note that this is not a W3C spec so is only available in Chrome.
Frontend (RUM - All Browsers)
The backend time is the time it takes the server to get the first byte back to the client. The frontend time is everything else. This includes obvious frontend phases like executing JavaScript and rendering the page. It also includes network time for downloading all the resources referenced in the page. Specifically it is loadEventStart minus responseStart from Navigation Timing.
Fully Loaded (Synthetic - All Browsers)
The Fully Loaded time is measured as the time from the start of the initial navigation until there was 5 seconds of no network activity after Document Complete. This will usually include any activity that is triggered by JavaScript after the main page loads.
Google Lighthouse Scores & Audits (Synthetic - Chrome)
For any synthetic tests run in Chrome, we also run a Google Lighthouse audit, which checks your page against rules for Performance, PWA, Best Practices, and SEO. For each category you get a score out of 100 and recommendations on what to fix, which you can find on each test dashboard and aggregated together on the Improve dashboard. We run the latest release of Lighthouse.
Happy Page Views (RUM - All Browsers, SPA)
This is an aggregate metric that captures user happiness. Learn more.
Hero Rendering Times (Synthetic - All Browsers)
Hero Rendering Times are a set of synthetic metrics that are unique to SpeedCurve. They measure when a page's most important content finishes rendering in the browser.
Largest Image Render identifies when the largest image in the viewport finishes rendering. This metric is especially relevant to retail sites, where images on home, product, and campaign landing pages are critical elements.
Largest Background Image Render is for those pages where the background image is just as important as – or more important than – the largest image. We created this metric to ensure that you're not missing out.
H1 Render measures when your first H1 element finishes rendering. This metric is especially useful to media and informational sites. Because the H1 tag is usually wrapped around header copy, there's a reasonable assumption that this is copy you want your users to see quickly. If there are no H1 elements, then H2 is used.
First Painted Hero and Last Painted Hero are synthetic metrics that show you when the first and last pieces of critical content are painted in the browser. These are composite metrics for the Hero Rendering Times:
max(h1, (biggest_img || bg_img))
These composite metrics are computed by taking the minimum and maximum of the largest text time ("h1") and the biggest IMG time (or biggest background image if biggest IMG doesn't exist).
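Written out as a sketch (the variable names are illustrative, not actual SpeedCurve code):
// Sketch: derive First and Last Painted Hero from the individual hero times.
function paintedHeroTimes(h1Time, biggestImgTime, bgImgTime) {
  const heroImage = biggestImgTime || bgImgTime; // fall back to the background image if there's no IMG
  return {
    firstPaintedHero: Math.min(h1Time, heroImage),
    lastPaintedHero: Math.max(h1Time, heroImage),
  };
}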
Hero Element Timing – which is based on the Hero Element Timing API – lets you select and annotate specific hero elements on your pages, such as search boxes, image carousels, and blocks of text. Right now, if you're a SpeedCurve user, you can follow the instructions in the API spec to annotate your pages, and see the results in your SpeedCurve results. (As a bonus, when browsers inevitably catch up and adopt the spec, you'll be ahead of the game.)
Learn more about how we identify hero rendering metrics, plus how to add element timing.
HTML Size (Synthetic and RUM - Chrome, Safari)
In RUM, this is the size of the HTML document based on the transferSize property from the Resource Timing spec.
Image ATF Requests (RUM - All Browsers, SPA)
This is the number of images that are in the browser's viewport.
Image Optimization Reduction Size (Synthetic)
This metric shows you how much smaller your images could be if they were all optimized using best practices. For example, an Image Optimization Reduction Size of 0K means that your images are as optimized as possible.
Inline Style/JS Size (Synthetic and RUM - All Browsers, SPA)
This is the uncompressed size of the content of all inline STYLE and SCRIPT tags. Scripts and stylesheets loaded from external files do not count towards the size; only the code between inline tags is counted (e.g., an inline SCRIPT tag containing 14 bytes of code has a size of 14 bytes).
Interaction to Next Paint (RUM - Chrome, SPA)
Measures the time from when a user initiates an interaction until the next frame is painted. Only the following interactions are observed: mouse click, touchscreen tap, and a physical or onscreen key press.
INP replaced First Input Delay (FID) as a Core Web Vital in March 2024.
Interaction to Next Paint is one of Google's Core Web Vitals, along with Largest Contentful Paint and Cumulative Layout Shift.
Interaction to Next Paint Subpart timings (RUM - Chrome, SPA)
INP - Input Delay
Input delay measures from the time that the user interaction started (when the input was received) until the event handler is able to run. Typically, anything competing for the main thread while the input is received can have an impact on input delay.
INP - Processing time
Processing time includes the time it takes event callbacks, initiated by the interaction, to complete. If time is spent here, look at optimizing event callbacks. As with Long Tasks, breaking things up into smaller chunks of work and deferring work not required for rendering is recommended.
INP - Presentation Delay
Presentation delay is the time it takes the frame to be presented after the event callbacks are completed. Issues with presentation delay are typically related to the complexity of the DOM and client side rendering of elements.
INP - Start Time
The time at which the interaction occurred, relative to navigation start. This is not technically a subpart of INP, but it is useful for correlating the interaction with other corresponding events, such as long tasks.
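For illustration, here's a sketch of how these subparts map onto an Event Timing API entry (how lux.js derives them internally may differ):
// Sketch: derive the INP subparts from a PerformanceEventTiming entry.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    const inputDelay = entry.processingStart - entry.startTime;
    const processingTime = entry.processingEnd - entry.processingStart;
    const presentationDelay = entry.startTime + entry.duration - entry.processingEnd;
    const startTime = entry.startTime; // when the interaction occurred, relative to navigation start
    console.log({ inputDelay, processingTime, presentationDelay, startTime });
  }
}).observe({ type: 'event', durationThreshold: 40, buffered: true });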
Interaction (IX) Metrics (RUM - All Browsers, SPA)
For many websites, how quickly users engage with the page is an important user experience metric. If these metrics increase it could be a sign that the page is rendering slower, or a design change has made the site more difficult for users. This category captures metrics about how and how quickly users engage with the page.
- First IX Type - The first type of interaction the user had with the page: scroll, click, or keypress.
- First IX Time - When the first interaction time occurred (relative to navigationStart). You'll see this reported as: Click Interaction Time (when the user first clicked anywhere on the page), Key Interaction Time (when the user first pressed any key on their keyboard), Scroll Interaction Time (when the user first scrolled), or just Interaction Time (any of the previous interactions).
- Element ID clicked - The ID or data-sctrack attribute of the DOM element that was clicked or received a key press. See the RUM data-sctrack API for more information.
JavaScript Errors (RUM - All Browsers, SPA)
In addition to performance metrics, RUM also collects JavaScript errors. In addition to collecting the error details (e.g., error message, filename, and line number), RUM reports two aggregate metrics:
- Total JS Errors is the total number of JavaScript errors collected during the corresponding time interval. Note that a single page can have multiple JS errors.
- JS Errors per 1K Pages - The total number of JS errors may fluctuate based on daily traffic patterns. JS Errors per 1K Pages is a more stable metric that normalizes the error count by page views (errors per 1,000 page views).
Learn how to track and debug JavaScript errors using SpeedCurve RUM.
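For illustration, these are the kinds of global handlers a RUM script can use to collect errors (lux.js's actual collection logic may differ):
// Sketch: listen for uncaught errors and unhandled promise rejections.
window.addEventListener('error', (event) => {
  console.log('JS error:', event.message, event.filename, event.lineno);
});
window.addEventListener('unhandledrejection', (event) => {
  console.log('Unhandled promise rejection:', event.reason);
});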
JavaScript Long Tasks (Synthetic and RUM - Chrome, SPA)
A JavaScript task that takes longer than 50ms is known as a long task and often leads to page jank. Long tasks don't always block rendering of the page; it's more accurate to say they block the user experience and cause jank. They also delay other metrics like First CPU Idle and Time to Interactive. Tracking them gives you a better idea of which scripts might be causing real issues on your page.
JavaScript Number of Long Tasks (Synthetic and RUM - Chrome, SPA)
This is the number of JavaScript Long Tasks that occurred during the page loading.
JavaScript Longest Task (Synthetic and RUM - Chrome, SPA)
This is the time in milliseconds of the longest JavaScript task on the page. Any task over 50ms starts to interfere with the user experience of a page, and any really long task over 300ms will cause a noticeable delay for your users. Use the Synthetic > JavaScript dashboard to find out which script is causing the delay.
Largest Contentful Paint (Synthetic and RUM - Chrome, Firefox, SPA*)
Largest Contentful Paint is provided by browsers via the Largest Contentful Paint API. It's the time at which the largest element in the viewport is rendered. It's only tracked on certain elements, e.g., IMG and VIDEO (see more here). As of August 2019, this is available in Chrome 77+.
Largest Contentful Paint is one of Google's Core Web Vitals, along with Cumulative Layout Shift and Interaction to Next Paint.
*LCP may be present for a SPA navigation if the LCP candidate rendered during the transition is larger than the previous LCP. This is not considered a reliable metric for SPAs, given that it is not a true 'continuous' metric.
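For illustration, here's a minimal sketch of observing LCP candidates in the browser:
// Sketch: the last largest-contentful-paint entry seen (before first user input) is the final LCP.
let lcp = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    lcp = entry.startTime; // each new entry is a larger candidate
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });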
Navigation Type (RUM - Chrome)
Navigation type indicates how the user navigated to the page, e.g., "normal" and "reload". It's based on Navigation Timing Level 2.
Page Load (Synthetic and RUM - All Browsers, SPA)
The Page Load time is measured as the time from the start of the initial navigation until the beginning of the window load event (onload). For SPAs using RUM, it's the time between LUX.init and LUX.send. While Page Load can be a useful metric, it can also be deceiving: depending on how the page is constructed, it doesn't always represent when content is rendered to the screen or when the user can interact with the page. Unfortunately, many organizations and other monitoring tools still default to reporting Page Load as an important performance metric. It's in no way a good measure of the user's experience, and it's something the industry needs to move on from.
Page Views (RUM - All Browsers, SPA)
In RUM, metrics are based on a "page view". In normal pages, the RUM data is sent as part of the page onload event. For Single Page Apps, a "page view" is defined by the time between LUX.init and LUX.send. Learn more about RUM and SPAs.
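For illustration, a SPA integration might wrap its route changes like the sketch below; the router hooks shown are hypothetical, so use your framework's equivalents:
// Sketch: mark SPA page views with LUX.init and LUX.send (hypothetical router hooks).
router.beforeEach(() => {
  LUX.init(); // start timing the soft navigation
});
router.afterRender(() => {
  LUX.send(); // the new view is ready; send the page view
});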
Redirect Time (RUM - All Browsers)
If there was a redirect as a result of the initial navigation, this is the time spent prior to the fetch of the requested document (i.e., redirectEnd - redirectStart from Navigation Timing). Redirect time is included in backend time, along with DNS lookups, TCP connect, and server response time. Navigations with cross-origin redirects are not counted (the metric returns 0).
Server Response Time (RUM - All Browsers)
This is the time from when the main HTML document is requested to when the first byte is returned (i.e., responseStart - requestStart from Navigation Timing). This is different from backend time, as it does not include redirects, DNS lookups, etc.
Server Timing (RUM - Chrome, Firefox, SPA)
Server Timing is a W3C specification which allows communication of data from the server to the client through the use of a server-timing header. This is a special header that is accessible through a JavaScript interface. Lux.js, SpeedCurve's RUM collection library, accesses this interface to collect custom data from the user agent.
The specification requires that header values are passed through a duration field (dur) and/or a description field (desc).
Example of a valid server-timing header entry:
server-timing: processing_time; dur=123.4; desc="Time to process request at origin"
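For illustration, here's a minimal sketch of how these entries can be read in the browser once the header is set:
// Sketch: read server-timing entries from the navigation entry.
const [navEntry] = performance.getEntriesByType('navigation');
for (const { name, duration, description } of (navEntry && navEntry.serverTiming) || []) {
  console.log(name, duration, description); // e.g. "processing_time", 123.4, "Time to process request at origin"
}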
Session Length (RUM - All Browsers, SPA)
Session length is the number of page views in a session.
Sessions (RUM - All Browsers, SPA)
Page views are grouped into sessions where all the page views are from the same browser and there is no more than 30 minutes between page views. In other words, if a user is visiting your website and stops for an hour lunch, when they return their page views will be grouped into a new session. In addition, sessions are limited to a 24 hour maximum, so after 24 hours a new session ID is created.
Speed Index (Synthetic)
Speed Index is the average time at which visible parts of the page are displayed. It's dependent on the size of the viewport. It represents how quickly the page rendered the user-visible content (lower is better). Speed Index is often the metric we show by default, as it best represents the user's experience as the page renders over time, from starting completely blank to the viewport being visually complete.
Learn more about how Speed Index is measured and why it doesn't work for all pages.
Start Render (Synthetic and RUM - Chrome, Firefox)
The Start Render time is measured as the time from the start of the initial navigation until the first non-white content is painted to the browser display. Any CSS and blocking JS you have on the page has to be downloaded and parsed by the browser before it can render anything to screen. This is called the critical rendering path and the Start Render metric is very important in understanding how long users have to wait before anything is displayed on screen.
In RUM, Start Render is the value of First Paint from the Paint Timing spec. As of January 2019, this is not available in Safari and Mobile Safari. In synthetic, it's based on analyzing the tenth-of-a-second screenshots to see when rendering begins.
Time to First Byte
See Backend.
Time to First Interactive (Synthetic)
This metric has been renamed First CPU Idle in order to be consistent with Lighthouse 3.0. (Scroll up for definition.)
Time to Interactive (Synthetic)
Formerly known as Time to Consistently Interactive. Renamed to be consistent with Lighthouse 3.0.
Time to Interactive (TTI) measures how long it takes a page to become interactive. "Interactive" is defined as the point where:
- The page has displayed useful content, which is measured with First Contentful Paint.
- Event handlers are registered for most visible page elements.
- The page responds to user interactions within 50 milliseconds.
Specifically, it is the first span of 5 seconds where the browser main thread is never blocked for more than 50ms after First Contentful Paint with no more than 2 in-flight requests. In other words, it's the same as First CPU Idle but it also checks the number of in-flight network requests.
Total Blocking Time (Synthetic)
Total Blocking Time (TBT) is one of Google's recommended Core Web Vitals metrics for lab/synthetic testing. While it's great that Web Vitals includes a CPU metric, there are a couple of caveats you should be aware of if you're tracking Total Blocking Time (you can read more about those caveats here):
- Total Blocking Time only tracks the long tasks between First Contentful Paint (FCP) and Time To Interactive (TTI). For this reason, the name "Total Blocking Time" is a bit misleading. Any long tasks that happen before FCP and block the page from rendering are NOT included in Total Blocking Time. TBT is not in fact the "total" blocking time for the page. It's better to think of it as "blocking time after start render".
- TBT doesn't include the first 50ms of each long task. Instead, it reports just the time spent over the first 50ms. A user still had to wait for that first 50ms. It would be easier to interpret TBT if it included the first 50ms and better represented the full length of time a user was blocked by a long task.
User Timing (Synthetic and RUM - All Browsers, SPA)
User Timing allows you to measure the performance of specific page elements that you've identified as essential to the user experience for your own pages. The W3C User Timing spec provides an API for developers to add custom timing metrics to their web apps. This is done via two main functions:
- performance.mark records the time (in milliseconds) since navigationStart
- performance.measure records the delta between two marks
There are other User Timing APIs, but mark and measure are the main functions. SpeedCurve supports custom timing metrics, so once you've added marks and measures on your pages, you can collect data with both SpeedCurve RUM and Synthetic.
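For illustration, a custom timing metric might look like the sketch below (the metric names are hypothetical):
// Sketch: measure how long it takes to render a block of search results.
performance.mark('search-results-start');
// ... render the search results ...
performance.mark('search-results-end');
performance.measure('search-results', 'search-results-start', 'search-results-end');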
Visually Complete (Synthetic)
Visually Complete is the time at which all the content in the viewport has finished rendering and nothing changes in the viewport after that point as the page continues loading. It's a great measure of the user experience, as the user should now see a full screen of content and be able to engage with the content of your site.