SpeedCurve Synthetic vs other tools
Here's why your metrics might differ between tools. But remember: the most important thing is to track consistency and changes within a single testing tool, using the same settings.
Running tests on your local machine will generally produce very different results from what you see in SpeedCurve Synthetic. Synthetic monitoring is designed to give you a very consistent environment in which you can see the effect that code changes have on performance, but it doesn't necessarily represent what all your users are experiencing.
SpeedCurve Synthetic always loads websites:
- from a completely empty cache
- on consistent hardware
- within Amazon EC2
- at a throttled average connection speed.
Chances are, that's a very different environment from your local computer, which is why you see very different numbers.
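If you want a feel for how much these factors matter, one option is to roughly recreate them on your own machine. Below is a minimal sketch using Puppeteer (not SpeedCurve's own test agent code); the bandwidth, latency and URL values are illustrative assumptions, not SpeedCurve's actual throttling profile.

```ts
// Sketch: approximating a first-view, throttled synthetic test locally.
// All numbers and the test URL are illustrative assumptions.
import puppeteer from 'puppeteer';

async function coldCacheThrottledTest(url: string): Promise<void> {
  // A fresh browser instance starts with an empty cache and no cookies,
  // similar to a first-view synthetic test.
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Pin the connection to a fixed profile so every run sees the same
  // bandwidth and latency (roughly a cable-like profile here).
  await page.emulateNetworkConditions({
    download: (5 * 1024 * 1024) / 8, // 5 Mbps, expressed in bytes/sec
    upload: (1 * 1024 * 1024) / 8,   // 1 Mbps, expressed in bytes/sec
    latency: 28,                     // round-trip latency in ms
  });

  const start = Date.now();
  await page.goto(url, { waitUntil: 'load' });
  console.log(`Load time for ${url}: ${Date.now() - start} ms`);

  await browser.close();
}

coldCacheThrottledTest('https://www.example.com');
```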
Test hardware is a big differentiator.
We use the EC2 instances described here.
We think it's important to be able to compare results across locations, so we use the same EC2 instance type everywhere. To do an apples-to-apples comparison with another tool, you need to match that CPU (as well as bandwidth, location, and so on).
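As a rough illustration of why CPU matters, here's a sketch that slows a fast local machine down using Puppeteer's CPU throttling. The factor of 4 and the URL are assumptions; you'd want to benchmark your own hardware against the test agent before choosing a value.

```ts
// Sketch: emulating a slower test machine by throttling the CPU.
// The throttling factor is an illustrative assumption.
import puppeteer from 'puppeteer';

async function cpuThrottledTest(url: string): Promise<void> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Slow the CPU down by a fixed multiplier so a fast laptop behaves more
  // like a modest cloud instance.
  await page.emulateCPUThrottling(4);

  await page.goto(url, { waitUntil: 'load' });
  await browser.close();
}

cpuThrottledTest('https://www.example.com');
```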
It might be a bug.
If you're seeing a major discrepancy, there's always a chance there's a bug in the test. We do our best to capture accurate measurements, but as you no doubt already know, browsers are highly dynamic environments, and browser changes can affect how measurements are captured.
If you're not sure whether what you're seeing is a bug, let us know. Email us at [email protected] with some screenshots of your charts, and we're happy to check things out.
If it's not a bug and probably more an issue of slightly different test environments, please remember this next point...
The most important thing is to track consistency and changes within a single testing tool, using the same settings.
If you see changes in your test results within SpeedCurve, that's meaningful. If you see differences between the results you get from different tools, that's not especially meaningful (and, as mentioned, is highly likely to be caused by things like testing hardware and location).
We always encourage people to look at how much improvement you're seeing in SpeedCurve as you improve your code base, rather than focus on absolute numbers. For example: "Our start render is 38% faster than site X" or "Our Speed Index has dropped by 15% after the last deployment."
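For clarity, the percentages in those examples are relative changes, not absolute milliseconds. A small hypothetical helper shows the arithmetic; the metric values below are made up.

```ts
// Hypothetical helper: expressing an improvement as a relative percentage.
function percentImprovement(before: number, after: number): number {
  return ((before - after) / before) * 100;
}

// "Our Speed Index has dropped by 15% after the last deployment."
console.log(percentImprovement(4000, 3400)); // 15

// "Our start render is 38% faster than site X."
console.log(percentImprovement(2500, 1550)); // 38
```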
You should look at your real user monitoring (RUM) data for a broader overview of performance based on thousands of data points. Even then you won't get a single number for everybody, and it's important to represent the diversity of users' experiences across different devices, networks and geographies.
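One common way to represent that diversity (a sketch, not a prescribed SpeedCurve workflow) is to summarize RUM metrics per segment, for example a percentile per device type, rather than one blended number. The beacon shape and sample values below are made up.

```ts
// Sketch: a per-segment percentile instead of a single blended number.
// The beacon shape, metric and sample values are illustrative assumptions.
type RumBeacon = { device: string; country: string; lcpMs: number };

function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) => a - b);
  const index = Math.max(0, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[index];
}

function p75ByDevice(beacons: RumBeacon[]): Record<string, number> {
  const groups = new Map<string, number[]>();
  for (const b of beacons) {
    const values = groups.get(b.device) ?? [];
    values.push(b.lcpMs);
    groups.set(b.device, values);
  }
  const result: Record<string, number> = {};
  for (const [device, values] of groups) {
    result[device] = percentile(values, 75);
  }
  return result;
}

const beacons: RumBeacon[] = [
  { device: 'mobile', country: 'US', lcpMs: 3100 },
  { device: 'desktop', country: 'NZ', lcpMs: 1900 },
  { device: 'mobile', country: 'US', lcpMs: 2800 },
];

// e.g. { mobile: 3100, desktop: 1900 } rather than one blended average
console.log(p75ByDevice(beacons));
```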