Everything the top comment said seems like Fastly marketing propaganda.
7% increase in sales? With what sample size? With what confidence? How did you isolate variables?
For reference, I work at a company with $110M in annual sales. We were planning to start using Fastly. For obvious reasons, we wanted to know how much of a net positive it would be for the business, so we wanted to A/B test it. For us, at least, it's not as straightforward as it would seem.
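To make "with what confidence" concrete, here's a minimal two-proportion z-test sketch in Python. The traffic and conversion numbers are invented purely for illustration, and this only isolates the CDN if users were genuinely randomized between the two arms.

    from math import erf, sqrt

    def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
        # Two-sided z-test: is the conversion difference plausibly noise?
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
        return z, p_value

    # Hypothetical: a 7% relative lift on a 5% baseline, 50k visitors per arm.
    z, p = two_proportion_z_test(2500, 50_000, 2675, 50_000)
    print(f"z = {z:.2f}, p = {p:.3f}")  # -> z = 2.50, p = 0.012

Even when the test comes out significant, it only tells you the two arms differed; it attributes nothing to the CDN unless nothing else varied between them.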
You can Google my name if you think I'm working for Fastly. The only connection I have is that they are our CDN. They donate it, as we're a nonprofit, but I have zero obligation to say anything about them either way.
7% increase represents an increase in the number of students/parents purchasing tickets to our events. (It's not like they're spending more individually, but more people have been willing to give us money.)
That's based on several years with no substantial change in the way we market or the design of our site.
About a 1-2% difference per 100 ms seems well supported (e.g. [1]), at least if your page load time is already low enough. 7% is very high, but some effect is expected.
I don't buy it. So what you are saying is 2% for every 100ms. That means if we decrease page load time by 1ms, we should expect to see a 0.02% increase in sales?
So if we do $10m annually, a 1ms decrease in load time should boost sales by $2,000!
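Spelled out, as a back-of-the-envelope Python sketch using the numbers from this thread:

    # Linearly extrapolating "2% per 100ms" down to 1ms.
    annual_sales = 10_000_000      # the $10m hypothetical above
    lift_per_100ms = 0.02          # the claimed 2% lift per 100ms saved

    lift_per_1ms = lift_per_100ms / 100        # 0.0002, i.e. 0.02%
    print(lift_per_1ms * annual_sales)         # -> 2000.0 dollars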
We would all probably agree that 1ms will make no statistical difference.
The problem with these studies is that most are dealing with much longer load times. Like 3 seconds vs 19 seconds! Obviously, that will make a HUGE difference. You can't then extrapolate that down to the millisecond.
The other problem is that many of these studies base their numbers on average load times. So they compare two groups by average load time: group A averages 100ms faster than group B, and group A's sales were 2% higher.
But what really happened is group A had 800ms load times across the board, while group B had 800ms load times for 98% of its page loads and 20,000ms on the remaining 2%.
So working with averages can be highly misleading.
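A quick sketch of that hypothetical (the numbers are the ones above, not real data): group B's 2% tail drags its average far above group A's even though 98% of its page loads are identical.

    import statistics

    # Group A: 800ms across the board.
    group_a = [800] * 100

    # Group B: 98% of loads at 800ms, 2% at 20,000ms.
    group_b = [800] * 98 + [20_000] * 2

    print(statistics.mean(group_a))    # 800ms
    print(statistics.mean(group_b))    # 1184ms
    print(statistics.median(group_b))  # 800ms -- the tail vanishes

Comparing medians or p95/p99 percentiles instead of means would catch the tail that the average hides.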
I can't see the details of the study in [1] that claims 100ms increments, but I'm very skeptical.
Yes, the 1ms will likely be statistically insignificant. But so is $2,000 of extra income out of your $10m. If anything, you have a better hope of measuring the 1ms than the $2,000.
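For a sense of scale, a rough power calculation (the 5% baseline conversion rate is my assumption, not a number from this thread): the sample you would need to reliably detect a 0.02% relative lift is enormous.

    # Rough sample size per arm for a two-proportion test,
    # alpha = 0.05 (two-sided), power = 0.80.
    p = 0.05                   # assumed baseline conversion rate
    relative_lift = 0.0002     # the 0.02% lift from the 1ms extrapolation
    delta = p * relative_lift  # absolute difference to detect

    z_alpha, z_beta = 1.96, 0.84
    n = 2 * (z_alpha + z_beta) ** 2 * p * (1 - p) / delta ** 2
    print(f"{n:.1e} visitors per arm")  # -> ~7.4e9, roughly the world's population

No realistic amount of traffic resolves that, which is exactly why the $2,000 is so much harder to measure than the 1ms itself.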
I agree with your larger point that average latency is not a good measure. Even for the perception of a single user, consistency is more important than a good average. For large groups the average is even less useful.
But intuitively I see lots of places where 100ms makes a world of difference. Just like there's a big, very perceptible difference between a 200ms animation and a 300ms animation, a time to full render of 200ms can change your experience compared to 300ms. The slower the page load, the more deliberate your movement. The closer you get to 16ms (1 frame) for a page load, the smaller the investment in clicking a link, and the higher the willingness to experiment and explore. Some of that inevitably leads to conversions and sales.
I've seen the data and it's true, but non-linear. Going from 19s to 3s is going to help much less than going from 3s to 1s. The curve is almost entirely in the under-3s part.
I agree it may make some difference... but I don't think it can all be attributed to a CDN change. Maybe they moved some buttons around the same time as the CDN update. Who knows.
> 7% increase represents an increase in the number of students/parents purchasing tickets to our events. (It's not like they're spending more individually, but more people have been willing to give us money.)
We got that part, but correlation doesn't always imply causation. Maybe that change was seasonal or just plain old growth not accounted for in your 7%?