Tests run on your local machine will generally produce very different results from what you see in SpeedCurve. SpeedCurve – like the underlying open-source framework WebPageTest – is designed to give you a very consistent environment in which you can see the effect that code changes have on performance, but that environment doesn't necessarily represent what all your users are experiencing.

SpeedCurve always loads websites: 

  • from a completely empty cache 
  • on consistent hardware 
  • within Amazon EC2 
  • at a throttled average connection speed. 

Chances are, that's a very different environment from your local computer, so you'll get very different numbers.
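
If you want a feel for how much those conditions matter, the sketch below reproduces a cold-cache, throttled page load locally. It uses Puppeteer purely as an illustration (it's not something SpeedCurve ships), and the URL, bandwidth and latency values are illustrative rather than our exact test profile.

```typescript
// Rough sketch only: reproduce a "cold cache + throttled network" run locally.
// Puppeteer, the example URL, and the throttling values are all assumptions
// for illustration, not SpeedCurve's actual test setup.
import puppeteer from 'puppeteer';

async function coldThrottledRun(url: string): Promise<void> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Start "empty", like a synthetic test: disable the cache and clear
  // anything left over from previous runs.
  await page.setCacheEnabled(false);
  const cdp = await page.createCDPSession();
  await cdp.send('Network.enable');
  await cdp.send('Network.clearBrowserCache');
  await cdp.send('Network.clearBrowserCookies');

  // Throttle bandwidth (bytes per second) and latency (ms) to a fixed profile.
  await page.emulateNetworkConditions({
    download: (1.6 * 1024 * 1024) / 8, // roughly 1.6 Mbps down
    upload: (768 * 1024) / 8,          // roughly 768 Kbps up
    latency: 150,                      // roughly 150 ms round trip
  });

  await page.goto(url, { waitUntil: 'load' });

  // Grab basic timings from the Navigation Timing API for comparison.
  const navTiming = await page.evaluate(() =>
    JSON.stringify(performance.getEntriesByType('navigation')[0], null, 2)
  );
  console.log(navTiming);

  await browser.close();
}

coldThrottledRun('https://example.com').catch(console.error);
```

Run the same URL with and without the cache-clearing and throttling lines, and you'll usually see the gap between "my machine" and "a synthetic test" straight away.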

What about Lighthouse?

SpeedCurve also runs Lighthouse tests for you. Lighthouse is run separately from your main SpeedCurve tests, and it uses the "3G Fast" network throttling profile regardless of your SpeedCurve browser profile. Because of this, it's normal for Lighthouse to report slightly different numbers from the rest of your SpeedCurve dashboards.
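
If you're curious how much the throttling profile alone moves the numbers, you can also run Lighthouse locally with explicit throttling settings. Here's a minimal sketch using the Lighthouse Node module and chrome-launcher; the values are illustrative and aren't SpeedCurve's exact "3G Fast" definition.

```typescript
// Rough sketch only: run Lighthouse locally with explicit throttling so you
// can see how much the throttling profile itself moves the numbers. The
// lighthouse and chrome-launcher packages, the URL, and the throttling
// values are assumptions; they are not SpeedCurve's "3G Fast" definition.
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

async function run(url: string): Promise<void> {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });

  const result = await lighthouse(
    url,
    { port: chrome.port, output: 'json' },
    {
      extends: 'lighthouse:default',
      settings: {
        throttlingMethod: 'simulate',
        throttling: {
          rttMs: 150,               // simulated round-trip time
          throughputKbps: 1600,     // simulated downlink
          cpuSlowdownMultiplier: 4, // simulated slower CPU
        },
      },
    }
  );

  console.log('Performance score:', result?.lhr.categories.performance.score);

  await chrome.kill();
}

run('https://example.com').catch(console.error);
```

Change the throttling values, re-run, and watch the score move – that's the same effect you're seeing between tools that use different profiles.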

Test hardware is a big differentiator.

We use the EC2 instances described here. WebPageTest is awesome, but its hardware is run by volunteers. Pat Meenan, the creator of WebPageTest, has no control over that hardware and doesn't keep track of what it is.

At SpeedCurve, we think it’s important to be able to compare results across all locations, so we use the same EC2 instance type across all locations. In order to do an apples-to-apples comparison, it’s important to match that CPU (as well as bandwidth, location, etc.). 

(If it's really important to you to see closer results between SpeedCurve and WebPageTest, the closest you can come is to use the EC2 locations available in WebPageTest – but even there the instance type is unknown, so it can be hard to compare. We've tested using a larger instance type, and we see a 25-50% speedup depending on the CPU.)
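
A rough way to gauge how CPU-bound your pages are – and therefore how sensitive your numbers are to test hardware – is to load the same page with and without CPU throttling. This sketch uses Puppeteer's CPU throttling as an illustration; the URL and the 4x factor are assumptions and won't map exactly to any particular instance type.

```typescript
// Rough sketch only: load the same page with and without CPU throttling to
// get a feel for how CPU-bound it is. Puppeteer, the URL, and the 4x factor
// are assumptions for illustration.
import puppeteer from 'puppeteer';

async function loadTime(url: string, cpuSlowdown: number): Promise<number> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Multiply how long CPU work takes (1 = no throttling).
  await page.emulateCPUThrottling(cpuSlowdown);

  const start = Date.now();
  await page.goto(url, { waitUntil: 'load' });
  const elapsed = Date.now() - start;

  await browser.close();
  return elapsed;
}

(async () => {
  const url = 'https://example.com';
  const fast = await loadTime(url, 1); // unthrottled CPU
  const slow = await loadTime(url, 4); // 4x slower CPU
  console.log(`Load time: ${fast} ms unthrottled vs ${slow} ms at 4x slowdown`);
})().catch(console.error);
```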

It might be a bug.

If you're seeing a major discrepancy, there's always a chance that there's a bug in the test. As already mentioned, SpeedCurve Synthetic is built on top of WebPageTest. WebPageTest does its best to capture accurate measurements, but as you no doubt know, browsers are highly dynamic environments, and browser changes can affect how measurements are captured.

If you're not sure whether what you're seeing is a bug, let us know. Email us at support@speedcurve.com with some screenshots of your charts, and we'll be happy to check things out.

If it's not a bug and is more likely just a case of slightly different test environments, please remember this next point...

The most important thing is to track consistency and changes within a single testing tool and configuration.

If you see changes in your test results within SpeedCurve or within WebPageTest, that's meaningful. If you see differences between the results you get from each tool, that's not especially meaningful (and, as mentioned, it's highly likely to be caused by things like testing hardware and location).

We always encourage people to look at how much improvement they're seeing in SpeedCurve as they improve their code base, rather than focusing on absolute numbers. For example: "Our start render is 38% faster than site X." "Our Speed Index has dropped by 15% after the last deployment."
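
Turning absolute results into relative change is simple arithmetic. Here's a trivial sketch (the before/after Speed Index values are made up):

```typescript
// Trivial helper: express a metric as relative change rather than an
// absolute number. The Speed Index values below are made up.
function percentChange(before: number, after: number): number {
  return ((after - before) / before) * 100;
}

// e.g. Speed Index of 4000 ms before a deployment and 3400 ms after:
console.log(`${percentChange(4000, 3400).toFixed(1)}%`); // "-15.0%"
```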

You should look to other tools, such as Real User Monitoring, for a broader overview of performance based on thousands of data points. Even then you won't get a single number for everybody, and it's important to represent the diversity of users' experiences across different devices, networks and geographies.
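
That's also why RUM data is usually summarized as percentiles per segment rather than a single average. As a sketch – with a made-up sample shape and made-up numbers – you might compute a 75th percentile per device type like this:

```typescript
// Rough sketch only: summarize RUM-style samples as a percentile per segment
// instead of one global average. The sample shape and numbers are made up.
interface RumSample {
  metricMs: number;   // e.g. a paint or load metric in milliseconds
  deviceType: string; // e.g. 'mobile' or 'desktop'
}

function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) => a - b);
  const index = Math.max(0, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[index];
}

function p75ByDevice(samples: RumSample[]): Map<string, number> {
  const groups = new Map<string, number[]>();
  for (const sample of samples) {
    const bucket = groups.get(sample.deviceType) ?? [];
    bucket.push(sample.metricMs);
    groups.set(sample.deviceType, bucket);
  }

  const result = new Map<string, number>();
  for (const [device, values] of groups) {
    result.set(device, percentile(values, 75));
  }
  return result;
}

// Mobile and desktop tell different stories, so report them separately.
console.log(p75ByDevice([
  { metricMs: 3200, deviceType: 'mobile' },
  { metricMs: 4100, deviceType: 'mobile' },
  { metricMs: 1200, deviceType: 'desktop' },
  { metricMs: 1500, deviceType: 'desktop' },
]));
```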

Let us know if this helps explain the differences. We're planning to create some more articles and videos that help explain the landscape of performance testing, so don't hesitate to send more questions through.
