Core Web Vitals is a Google initiative, launched in early 2020, that focuses on measuring performance from a user-experience perspective. Core Web Vitals is (currently) a set of three metrics – Largest Contentful Paint, First Input Delay, and Cumulative Layout Shift – intended to measure the loading, interactivity, and visual stability of a page.
- Largest Contentful Paint (LCP) – When the largest visual element on the page renders. LCP is measurable with both synthetic and real user monitoring (RUM).
- First Input Delay (FID) – How responsive a page is to user input. FID measures the delay between a user's first interaction – a click/tap or key press – and the moment the browser is able to respond. FID can only be measured with RUM; Total Blocking Time (TBT) serves as its synthetic companion metric.
- Cumulative Layout Shift (CLS) – How visually stable a page is. CLS is a formula-based metric that takes into account how much a page's visual content shifts within the viewport, combined with the distance that those visual elements shifted. The human-friendly definition is that CLS helps you understand how likely a page is to deliver a janky, unpleasant experience to viewers. CLS can be measured with synthetic and RUM.
It's important to make the distinction between Core Web Vitals and Web Vitals. Core Web Vitals focus on user experience and are a subset of the broader Web Vitals. The remaining Web Vitals – including Time to First Byte and Total Blocking Time – serve as supplemental metrics for diagnosing specific performance issues.
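Each Core Web Vital has "good", "needs improvement", and "poor" thresholds published by Google. As an illustrative sketch (the `classifyVital` helper is hypothetical, not a SpeedCurve or Google API), those thresholds can be expressed as a small classifier:

```javascript
// Google's published Core Web Vitals thresholds.
const THRESHOLDS = {
  lcp: { good: 2500, poor: 4000 }, // milliseconds
  fid: { good: 100,  poor: 300 },  // milliseconds
  cls: { good: 0.1,  poor: 0.25 }, // unitless score
};

// Hypothetical helper: classify a measured value against the thresholds.
function classifyVital(metric, value) {
  const t = THRESHOLDS[metric];
  if (value <= t.good) return 'good';
  if (value <= t.poor) return 'needs improvement';
  return 'poor';
}

// e.g. an LCP of 3.63s falls outside the 2.5s "good" threshold:
console.log(classifyVital('lcp', 3630)); // "needs improvement"
```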
Explore your Web Vitals dashboard
Take a quick video tour of SpeedCurve's Vitals dashboard. Immediately see how your pages perform against Google's thresholds. Find out what you need to fix to improve your metrics.
Core Web Vitals are among the page experience signals that Google factors into search ranking, alongside mobile-friendliness, security, and absence of intrusive interstitials. Since Web Vitals were announced, they've shot to the top of many people's list of things to care about.
There is not yet a great deal of public-facing research that demonstrates a correlation between improved Core Web Vitals and improved search ranking. However, this case study from Sistrix suggests that pages that meet all of Google's Web Vitals requirements rank slightly higher, while those that fail to meet requirements rank significantly worse.
Great performance can't compensate for poor content. When it comes to user experience, quality content is still king. Google acknowledges this as well:
"While page experience is important, Google still seeks to rank pages with the best information overall, even if the page experience is subpar. Great page experience doesn't override having great page content. However, in cases where there are many pages that may be similar in relevance, page experience can be much more important for visibility in Search."
Don't make pages faster solely for SEO purposes. You should make your pages leaner and faster because it makes your users happier and consumes less of their data, especially on mobile devices. Not surprisingly, happier users spend more time (and more money) on your site, are more likely to return, and are more likely to recommend your site to others.
How to create an SEO dashboard in SpeedCurve
You can create a custom dashboard to track Web Vitals in both Synthetic and RUM. Within your dashboard, you can create performance budgets for each metric, and get alerts when your budgets are exceeded.
Because Core Web Vitals focus on user experience, it stands to reason that improving Vitals will result in happier users who spend more time – and money – on your site. There have been some notable case studies that demonstrate this.
Vodafone: A 31% improvement in LCP increased sales by 8%
Vodafone conducted an A/B test that focused on optimizing Web Vitals. They found that a 31% improvement in LCP led to:
- 8% more sales
- 15% improvement in lead-to-visit rate
- 11% improvement in cart-to-visit rate
Swappie: Increased mobile revenue by 42% by focusing on Core Web Vitals
Swappie reduced LCP by 55%, CLS by 91%, and FID by 90%. As a result of these improvements, they saw a 42% increase in revenue from mobile visitors.
Agrofy: A 70% improvement in LCP correlated to a 76% reduction in load abandonment
Agrofy improved LCP by 70% and CLS by 72%, which correlated to a 76% reduction in load abandonment (from 3.8% to 0.9%).
How to create a correlation chart in SpeedCurve
Learn how to create charts that correlate performance metrics – such as Core Web Vitals – with user engagement and business metrics.
Largest Contentful Paint measures when the largest visual element on the page renders. LCP is measurable with both synthetic and real user monitoring (RUM).
Common issues that can hurt LCP:
- Slow or blocking scripts and stylesheets that load at the beginning of the page's rendering path can delay when images start to render.
- Unoptimized images with excessive load times. LCP includes the entire time it takes for the image to finish rendering. If your image starts to render at the 1-second mark but takes 4 seconds to fully render, then your LCP time is 5 seconds.
To improve LCP time, you need to first understand the critical rendering path for your pages, and then identify the issues that are delaying your largest paint element. Synthetic monitoring can help identify which of the issues described above is the culprit.
For example, this synthetic test result for the Amazon home page shows that the LCP time is 3.63 seconds – in other words, outside Google's threshold of 2.5 seconds. You can also see that the LCP resource is actually a collection of images, not a single image:
The high-level waterfall chart for this page shows that the LCP event fires after many other key metrics, including Start Render:
Expanding the waterfall chart reveals a few things:
1. The HTML document has a Long Tasks time of 382 milliseconds. If you look closely below, you can see that it doesn't fully parse until 3.6 seconds. The red bars indicate all the Long Tasks for this resource.
2. There are 289 resources that are rendered before the LCP event, the bulk of which are images.
3. Many of those images are outside the viewport. This means they could have been deferred/lazy-loaded.
- Request key hero image early
- Use srcset and efficient modern image formats
- Use compression
- Lazy-load offscreen images
- Set height and width dimensions for images
- Use CSS aspect-ratio or aspect-ratio boxes
- Avoid images that compete for bandwidth with critical CSS and JS
- Eliminate/reduce render-blocking resources
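Several of these fixes can be combined in markup. The snippet below is an illustrative sketch – file names, image sizes, and breakpoints are placeholders, not recommendations:

```html
<!-- Request the hero image early -->
<link rel="preload" as="image" href="/img/hero-800.avif">

<!-- Serve responsive, modern formats; reserve layout space
     with width/height attributes -->
<img src="/img/hero-800.avif"
     srcset="/img/hero-400.avif 400w, /img/hero-800.avif 800w"
     sizes="(max-width: 600px) 400px, 800px"
     width="800" height="450" alt="Hero">

<!-- Lazy-load images that start offscreen -->
<img src="/img/footer-banner.jpg" loading="lazy"
     width="600" height="200" alt="Footer banner">
```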
LCP doesn't always correlate to meaningful content in the viewport.
This is particularly noticeable on mobile. If you're serving a lot of mobile users, you may want to use synthetic testing to validate that LCP is a valid metric to track – and troubleshoot any issues that are preventing it from being measured correctly.
In SpeedCurve, all the performance optimization recommendations you see on your Vitals and Improve dashboards – as well as in your synthetic test details – are badged so you can see which Web Vital they affect. When you fix those issues, you should see improvements in your Vitals and Lighthouse scores.
A few technicalities about FID that you may care about:
- The user input required for FID is defined as a click/tap or key press. It doesn't include scroll or zoom.
- FID can be measured in traditional applications as well as SPAs.
- While there is now support for a native first-input performance entry type in Chrome, FID can still be measured across all modern browsers.
FID can only be measured in RUM. This makes it fairly simple to track, but difficult to investigate. We recommend measuring Total Blocking Time (TBT) in your synthetic monitoring, alongside measuring FID in RUM.
- Eliminate Long Tasks by breaking up long-running code into smaller async tasks
- Reduce overall number of resource requests
- Optimize third parties, e.g., defer "below-the-fold" and other non-essential scripts
- Minimize CPU main thread activity, e.g., use a web worker to run JS on a background thread
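To illustrate the first fix above, here's a minimal sketch of breaking one long-running loop into smaller async tasks that yield back to the event loop between chunks, so queued input handlers get a chance to run. `processItem` and the chunk size are placeholders:

```javascript
function processItem(item) {
  return item * 2; // stand-in for real per-item work
}

// Process a large array in small chunks, yielding between chunks
// so the main thread stays responsive to user input.
async function processInChunks(items, chunkSize = 50) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(processItem(item));
    }
    // Yield to the event loop before the next chunk.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
  return results;
}

processInChunks([1, 2, 3]).then((r) => console.log(r)); // [ 2, 4, 6 ]
```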
Total Blocking Time is a Web Vital, not a Core Web Vital, but it bears discussion here. TBT measures the total amount of time between First Contentful Paint (FCP) and Time to Interactive (TTI) when the main thread was blocked for long enough to prevent input responsiveness. TBT is measured in synthetic testing tools.
There are a couple of caveats you should be aware of if you're tracking Total Blocking Time:
- Total Blocking Time only tracks the Long Tasks between First Contentful Paint (FCP) and Time to Interactive (TTI). For this reason, the name "Total Blocking Time" is a bit misleading. Any Long Tasks that happen before FCP and block the page from rendering are NOT included in Total Blocking Time. TBT is not in fact the "total" blocking time for the page. It's better to think of it as "blocking time after start render".
- TBT doesn't include the first 50ms of each Long Task. Instead, it reports just the time spent beyond the first 50ms. A user still had to wait for that first 50ms. It would be easier to interpret TBT if it included the first 50ms and better represented the full length of time a user was blocked by a long task.
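Both caveats can be made concrete with a small sketch of the TBT calculation. The Long Task data below is made up for illustration, not a real PerformanceObserver payload:

```javascript
// TBT = sum over Long Tasks between FCP and TTI of (duration - 50ms).
// Tasks before FCP are ignored, and the first 50ms of each task
// doesn't count.
function totalBlockingTime(longTasks, fcp, tti) {
  return longTasks
    .filter((t) => t.start >= fcp && t.start + t.duration <= tti)
    .reduce((sum, t) => sum + Math.max(0, t.duration - 50), 0);
}

const tasks = [
  { start: 200,  duration: 120 }, // before FCP: ignored entirely
  { start: 900,  duration: 382 }, // contributes 382 - 50 = 332ms
  { start: 1500, duration: 50 },  // exactly 50ms: contributes 0
];

console.log(totalBlockingTime(tasks, 800, 3000)); // 332
```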
We recommend tracking Long Tasks alongside TBT to identify all the Long Tasks from initial page navigation right through to fully loaded. Focus on the Long Tasks metric to get a full understanding of the impact Long Tasks have on the whole page load and your users.
If your charts suffer from a red rash of Long Tasks, many of the same solutions that apply to FID can also be applied here, including:
- Code splitting
- Evaluate code in your idle periods (using Philip Walton's Idle Until Urgent)
More: How to improve TBT
Cumulative Layout Shift measures the visual stability of a page. The human-friendly definition is that CLS helps you understand how likely a page is to deliver a janky, unpleasant experience to viewers.
CLS is a formula-based metric that takes into account how much a page's visual content shifts within the viewport, combined with the distance that those visual elements shifted. CLS can be measured with both synthetic and RUM.
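As a sketch of that formula: per Chrome's definition, each layout shift scores impact fraction (the share of the viewport affected by the shift) multiplied by distance fraction (how far elements moved, relative to the viewport's largest dimension), and CLS is the sum of those scores. The input values below are illustrative:

```javascript
// Layout shift score = impact fraction × distance fraction.
function layoutShiftScore(impactFraction, distanceFraction) {
  return impactFraction * distanceFraction;
}

// CLS accumulates the individual shift scores.
function cumulativeLayoutShift(shifts) {
  return shifts.reduce(
    (sum, s) => sum + layoutShiftScore(s.impact, s.distance), 0);
}

// A large element moving a small distance can still score high:
console.log(layoutShiftScore(0.76, 0.15).toFixed(3)); // "0.114"
```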
One of the benefits of Cumulative Layout Shift is that it makes us think outside of the usual time-based metrics, and instead gets us thinking about the other subtle ways that unoptimized page elements can degrade the user experience.
CLS is strongly affected by the number of resources on the page, and by how and when those resources are served. If your CLS score is poor, some of the biggest culprits are:
- Web fonts – There can be a significant discrepancy between the sizes of the default and custom fonts, which creates layout shifts. While it's good practice to not hide your content while waiting for a web font to load, it can negatively impact your CLS score if the web font then moves an element when it renders.
- Opacity changes – CLS doesn't take opacity changes into account; however, adding an element with opacity 0 and then moving it will still affect your CLS score.
- Ads – They can cause the entire editorial body of the page to shift. The size of the shifting element really matters when it comes to calculating CLS.
- Carousels – A surprising number of carousels use non-composited animations that can contribute to CLS issues. On pages with autoplaying carousels, this has the potential to cause infinite layout shifts.
- Infinite scroll – Some implementations can cause layout shifts.
- Images – Slow-loading images (e.g. large images or images on slow connections) can cause shifts if they load after the rest of the page has already rendered.
- Banners and other notices – These can cause other page elements to shift if they render after the rest of the page.
One of the big challenges with CLS is understanding which elements actually moved on the page, when they moved, and by how much. To help with debugging your CLS scores, SpeedCurve includes visualizations that show each layout shift and how each individual shift adds up to the final cumulative metric.
For each layout shift, you see the filmstrip frame right before and right after the shift occurs. The red box highlights the elements that moved, so you can see exactly which elements caused the shift. The Layout Shift Score for each shift also helps you understand the impact of that shift and how it adds to the cumulative score.
Visualizing each layout shift can help you spot issues with the way your page is being rendered. Here are some sample issues from analyzing layout shifts on two pages:
The size of the shifting element matters
Some layout shifts can be quite hard to spot when looking at just the filmstrip or a video of a page loading. In the example below, the main content of The Irish Times page only moves a small amount, but because of its large size the Layout Shift Score is quite high, adding 0.114 to the cumulative score.
Image carousels can generate false positives
The Amazon home page below uses an image carousel to slide a number of promotions across the page. While the user experience is fine, CLS gives this a poor score as the layout shift analysis only looks at how elements move on the page. In this scenario, you could avoid a poor CLS score by using CSS transform to animate any elements.
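For example, a carousel that animates its slides with transform (rather than layout properties such as left or top) moves them visually without registering layout shifts. The class names below are illustrative:

```css
/* Animating with transform avoids triggering layout,
   so the slide movement doesn't count toward CLS. */
.carousel-slide {
  transition: transform 0.5s ease;
}
.carousel-slide.is-previous {
  transform: translateX(-100%);
}

/* Avoid animating layout properties, which DO register
   layout shifts, e.g.:
   .carousel-slide.is-previous { left: -100%; } */
```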
When tracking CLS, keep in mind that your results may vary depending on how your pages are built, which measurement tools you use, and whether you're looking at RUM or synthetic data. If you use both synthetic and RUM monitoring:
- Use your RUM data as your source of truth. Set your performance budgets and provide reporting with this data. Expect RUM and CrUX data to become more aligned over time.
- Use synthetic data to visually identify where shifts are happening and improve from there. Focus on the largest layout shifts first. Some shifts are so small that you may not want to bother chasing them.
How to diagnose CLS issues in SpeedCurve
This short video shows how to quickly diagnose problematic layout shifts for your site.
- Set height and width dimensions on images and videos
- Use CSS aspect-ratio or aspect-ratio boxes
- Avoid images that compete for bandwidth with critical CSS and JS
- Match the size of default fonts and rendered web fonts, or reserve space for the final rendered text so that layout changes don't ripple through the DOM
- Use CSS transform to animate any elements in image carousels
- Avoid inserting dynamic content (e.g., banners, pop-ups) above existing content
Web Vitals patterns
There are so many common page elements – such as carousels, banners, videos, and custom fonts – that can have a serious negative impact on your performance metrics. This is a handy collection of common UX patterns – including code examples that you can use on your own pages – that have been optimized for Core Web Vitals.
While Interaction to Next Paint is not a Core Web Vital, it's currently an experimental metric and a potential candidate for future inclusion, so it bears mentioning here. (Note the name change: Previously, the working title for INP was Responsiveness.)
INP measures a page's responsiveness to individual user interactions. According to the Chrome dev team:
"INP is a metric that aims to represent a page's overall interaction latency by selecting one of the single longest interactions that occur when a user visits a page. For pages with less than 50 interactions in total, INP is the interaction with the worst latency. For pages with many interactions, INP is most often the 98th percentile of interaction latency."
"INP logs the latency of all interactions throughout the entire page lifecycle. The highest value of those interactions – or close to the highest for pages with many interactions – is recorded as the page's INP. A low INP ensures that the page will be reliably responsive at all times."
Because INP tracks real user actions, it should ideally be measured in RUM, though it can be emulated with synthetic tools.
INP is most commonly affected by the same issues that can hurt FID, therefore many of the same solutions can be applied. It's also worth noting that Total Blocking Time (TBT) is an excellent proxy for INP, so it can be used to track and fix INP issues.
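The selection rule quoted above can be sketched as a small function. The latency values are illustrative, and the percentile index is a rough approximation of Chrome's actual logic:

```javascript
// Few interactions: INP is the worst latency.
// Many interactions: INP is (roughly) the 98th percentile latency.
function inp(latencies) {
  const sorted = [...latencies].sort((a, b) => a - b);
  if (sorted.length < 50) return sorted[sorted.length - 1];
  const index = Math.floor(sorted.length * 0.98) - 1;
  return sorted[index];
}

console.log(inp([40, 250, 90])); // 250 (few interactions → worst latency)
```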
How to create performance budgets for Core Web Vitals
This short video shows how to set performance budgets for Core Web Vitals, so you get alerts when they perform poorly.