Tests run on your local machine will generally look very different from what you see in SpeedCurve. SpeedCurve – and WebPagetest, the open-source framework it's built on – are designed to give you a very consistent environment in which you can see the effect that code changes have on performance, but that environment doesn't necessarily represent what all of your users are experiencing.

SpeedCurve always loads websites: 

• from a completely empty cache
• on consistent hardware
• within Amazon EC2
• at a throttled average connection speed. 

Chances are, that's a very different environment from your local computer, which is why you see very different numbers.
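If you want to reproduce a similarly controlled environment yourself, the WebPagetest API lets you pin the test location, browser and connectivity profile, and start every run from an empty (first-view) cache. Here's a minimal sketch in Python – the URL, location label and API key are placeholders, so check which locations and connectivity profiles your WebPagetest instance actually offers:

```python
# Minimal sketch: queue a controlled synthetic test via the WebPagetest API.
# The API key, test URL and location label are placeholders for illustration only.
import requests

WPT_API = "https://www.webpagetest.org/runtest.php"

params = {
    "url": "https://www.example.com/",  # page to test (placeholder)
    "k": "YOUR_API_KEY",                # WebPagetest API key (placeholder)
    "location": "Dulles:Chrome.Cable",  # fixed location, browser and throttled connection profile
    "runs": 3,                          # repeat runs to smooth out variability
    "fvonly": 1,                        # first view only, i.e. always an empty cache
    "f": "json",                        # ask for a JSON response
}

response = requests.get(WPT_API, params=params, timeout=30)
response.raise_for_status()
print(response.json()["data"]["userUrl"])  # link to the queued test's results
```

Because every run uses the same location, browser, connection profile and cache state, run-to-run differences are far more likely to reflect changes in your pages than changes in the environment.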

Test hardware is a big differentiator

We use the EC2 instances described here. WebPagetest is awesome, but its test hardware is run by volunteers. Pat Meenan, the creator of WebPagetest, has no control over that hardware and doesn't track its specifications.

At SpeedCurve, we think it’s important to be able to compare results across all locations, so we use the same EC2 instance type across all locations. In order to do an apples-to-apples comparison, it’s important to match that CPU (as well as bandwidth, location, etc.). 

(If it's really important to you to see closer results between SpeedCurve and WPT, the closest you could come would be to try the EC2 locations available in WPT – but even there, the instance type is unknown, so it could be hard to compare. We’ve tested using a larger instance type, and we see a 25-50% speedup depending on CPU.)

The most important thing to track is consistency – changes within a single testing tool and a single set of settings

If you see changes in your test results within SpeedCurve or within WebPagetest, that's meaningful. If you see differences between the results you get for each tool, that's not especially meaningful (and as mentioned, is highly likely to be caused by things like testing hardware/location).

We always encourage people to look at how much improvement they're seeing in SpeedCurve as they improve their code base, rather than focusing on absolute numbers. For example: "Our start render is 38% faster than site X." "Our Speed Index has dropped by 15% after the last deployment."
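To make that concrete, here's a small sketch of the kind of relative comparison we mean – the Speed Index values are made up for illustration:

```python
# Illustration only: track relative change between deployments, not absolute numbers.
def percent_change(before, after):
    """Percentage change from `before` to `after` (negative means faster)."""
    return (after - before) / before * 100

speed_index_before = 4200  # ms, previous deployment (hypothetical value)
speed_index_after = 3570   # ms, latest deployment (hypothetical value)

change = percent_change(speed_index_before, speed_index_after)
print(f"Speed Index changed by {change:+.0f}% since the last deployment")
# -> Speed Index changed by -15% since the last deployment
```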

You should look to other tools, such as Real User Monitoring, for a broader overview of performance based on thousands of data points. Even then you won't get a single number for everybody, and it's important to represent the diversity of users' experiences across different devices, networks and geographies.
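As a rough sketch of what that looks like in practice – the sample values below are invented – you would summarise RUM measurements as a distribution (median, 75th and 95th percentiles, for example) rather than as a single average:

```python
# Illustration only: summarise RUM samples as a distribution, not a single number.
import statistics

# Invented page load times (ms) collected from real users
rum_load_times_ms = [1800, 2100, 2400, 2650, 3100, 3500, 4200, 5300, 7800, 12400]

cuts = statistics.quantiles(rum_load_times_ms, n=20)  # 19 cut points at 5% steps
p50, p75, p95 = cuts[9], cuts[14], cuts[18]

print(f"median: {p50:.0f} ms, p75: {p75:.0f} ms, p95: {p95:.0f} ms")
```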

Here are a few links that might help give some more context...

How SpeedCurve fits into your web performance toolkit
Synthetic vs Real User Monitoring

Let us know if this helps explain the differences. We're planning to create some more articles and videos that help explain the landscape of performance testing, so don't hesitate to send more questions through.
