Fastly S-1 (www.sec.gov)
263 points by directionless | 2019-04-19 | 181 comments




Is Fastly a competitor to Akamai?

Yes, and to Cloudflare.

And to MaxCDN, another second tier US competitor, like Fastly.

MaxCDN (awesome folks!!) got acquired by StackPath.

aww :-) <high five>

Under the "Risks" section they specifically mention their competitors:

The market for cloud computing platforms, particularly enterprise grade products, is highly fragmented, competitive, and constantly evolving. With the introduction of new technologies and market entrants, we expect that the competitive environment in which we compete will remain intense going forward. Legacy CDNs, such as Akamai, Limelight, EdgeCast (part of Verizon Digital Media), Level3, and Imperva, and small business-focused CDNs, such as Cloudflare, InStart, StackPath, and Section.io, offer products that compete with ours.


"small business-focused CDNs, such as Cloudflare"

That's an interesting statement. Supposedly, 10% of web requests on the internet route through Cloudflare. And I imagine their free tier has plenty of non-business use.


They may be distinguishing them based on their type of customer, rather than their quantity of traffic. Other CDNs may tend to exclusively pursue much larger customers like big banks, governments, etc., while Cloudflare from my understanding is quite happy to serve the smaller market segment.

Ahh, yes, I read it as "small, business-focused CDN" though there's no comma there. You're right... they meant focused on "small business." I suppose the dash threw me off.

Fascinating, that’s because they did the dash wrong for the compound adjective.

It should have been the ugly but correct “small-business-focused CDN”. If they didn’t want to double hyphenate then “small-business focused” would have been read properly by everyone. But you definitely parsed their “small[,] business-focused” correctly :).


Makes me appreciate the importance of grammar a bit more. I assumed I was reading it wrong. Thanks for sharing the detail.

I haven't looked at any other details, but Googling Fastly says this:

"Fastly, Inc. is an American cloud computing services provider. Fastly's edge cloud platform provides a content delivery network, Internet security services, load balancing, and video & streaming services"

Just judging by that statement, I feel they're going to be eaten by AWS, GCP, and maybe Azure. However, it seems their focus may be on creating a viable business, rather than trying to spin an open-source project into a business (e.g., Docker Cloud). We already see that Docker is losing business to people using the OSS, but paying Amazon or Google for ECR and GCR, respectively.

That being said, there are some smaller, somewhat related players, such as PagerDuty, that seem to be off to a good start. Twilio's stock has performed well historically, too, as has Splunk. But these latter companies seem to be solving problems that make them less direct competitors with the bigger players.


They are more like Cloudflare or Akamai.

I think I would invest in Cloudflare before this, but that'd be based on name recognition.

Be aware though that name recognition in the enterprise space is very different from name recognition for small businesses and startups. Fastly focuses on the enterprise space, so having no name recognition with startups isn't really relevant to them.

True, it certainly highlights the "downmarket" strategy of Cloudflare (aiming for small to medium businesses).

They are in the CDN/Edge space. Akamai and Cloudflare are more direct comparisons.

AWS, GCP, and Azure have CDNs as well, but those are more typically used by existing customers to front things running there.


Fastly also has more "points of presence" than AWS/GCP

Fastly has deployments in 17 North American cities. AWS is in 5.

They're pretty different strategies. For every massive AWS datacenter on a continent, Fastly has at least three tiny ones, inevitably closer to your customers, meaning lower RTT.


They serve a small number of relatively high-revenue-opportunity use cases (SMB/mid-enterprise). If you use a lot of specialized CDN features and want solutions engineers to work with, or know you need a CDN but don't have the capability to do it well in house, "hosted" companies like Fastly, Aiven, and Elastic make a lot of sense (things that are easy until they're not). My biggest red flag for Fastly would be: Cloudflare.

Why don't you read the filing? It covers the competitive landscape.

Competition

Our platform spans several markets from cloud computing and cloud security to CDNs. We segment the competitive landscape into four key categories:

• Legacy CDNs like Akamai, Limelight, EdgeCast (part of Verizon Digital Media), Level3, and Imperva (for security);
• Small business focused CDNs like InStart, Cloudflare, StackPath, and Section.io;
• Cloud providers who are starting to offer compute functionality at the edge like Amazon’s CloudFront, AWS Lambda, and Google Cloud Platform; and
• Traditional data center and appliance vendors like F5, Citrix, A10 Networks, Cisco, Imperva, Radware, and Arbor, as well as networks that offer a range of on-premise solutions for load balancing, WAF, and DDoS.


I can't say I'm impressed by Akamai, but then again, Zoom came out of nowhere and is likely to clean house in the B2B video conferencing space.

Maybe Fastly will give legacy players a run for their money.


I'm using Firebase Hosting (by Google) and the IP address resolves to Fastly, at least in the EU.

That's pretty odd. Why would Google use a 3rd party to front their own service?

They acquired Firebase. On Firebase Hosting for apex domains one needs to add Fastly's Anycast IPs. Switching CDNs would be a nightmare.

One related gripe I have with Firebase Hosting is that it refuses to renew certificates if you also add IPv6 addresses.


I think Firebase was using Fastly before they got acquired by Google. There was a post on HN describing their move from CloudFront to Fastly.

https://news.ycombinator.com/item?id=4314209


Sure, but the acquisition was in 2014. I assumed they would want to serve it from their own global infrastructure. Perhaps they have a great deal with Fastly.

> We generated a net loss of $30.9 million for the year ended December 31, 2018, and as of December 31, 2018, we had an accumulated deficit of $146.2 million.

Wow, I did not think an "enterprise-y" company like Fastly could be burning that much cash on growth!


It says something about the industry when "only" a $30.9M annual burn rate sounds small.

It is quite small though compared to potential profits.

The $30M is 20% of their annual revenue, or 18% of their current assets. That doesn't seem out of line for a growing business, and is something you can do without venture capital.
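As a rough sanity check of that percentage (assuming 2018 revenue of roughly $145M, consistent with the ~$140M sales figure cited further down this thread):

    \frac{\$30.9\text{M net loss}}{\$145\text{M revenue}} \approx 0.21 \approx 20\%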

It’s important to note that this is on an earnings basis. Looking at their cash flows presents a slightly better view of operations, with an outflow of $16M, though some would argue their stock-based comp of $4M should be included. But it also shows huge capex of almost $20M. This is clearly not just a software business and must be considered like a business that requires real PP&E.

It's also interesting that the $30M figure is approx their R&D spend! Underscores the importance of offsetting those startup costs with policies that promote public-private partnerships and other subsidies for innovation ;)

Why? Fastly will make a couple people very rich, and they seem to be doing fine without subsidies. Why should public funds go toward concentrating wealth even further?

Well, tech transfer can be a slow process. Primarily because of legal formalities around indemnification, liability, etc. You also have the common scenario where many competitors are working in the same problem domains. Only to find after years of effort that they have independently arrived at eerily similar solutions!

Can you explain to me how the last sentence speaks to the need for subsidies?

I love the people behind Fastly. Congrats to Artur and the crew :-)

The whole "edge compute" space seems poised to do well to me. Cloudflare's edge KV store and edge server-side JavaScript are obvious, but great ideas.

The wildcard seems to be the companies that own cell towers. If they build a credible edge offering, that's a moat that is hard to beat.


Edge facilities are warehouses in regional locations with excellent backbone connectivity, basically your modern datacenter. Cellphone towers can probably host a few racks, but that's not a profitable business and it's not "internet scale". If the regional datacenter has a 15ms ping to each tower in the region, then you have pretty good coverage.

I think you underestimate what the edge will become. There are already startups trying to store your data at your house, cellphone tower, isp, etc. In ways where there is no central store or in ways that everything is eventually consistent. Computing at the edge is a very interesting topic.

Storing my data at my house makes sense, especially if upkeep of the box is relegated to the end user. Storing my data at the tower in my neighborhood, instead of a regional center seems to be a large increase in maintenance cost for a minimal decrease in latency.

Accessing the tower is expensive in time, and equipment that runs at the tower is exposed to a wider variety of temperatures and RF stress than in a nice warehouse somewhere in the metro area.

It's possible the right caching at towers could reduce the backhaul bandwidth requirements, but seems iffy.


Why is storage at a (space-limited) cell tower more interesting than storage/compute at the ISP or packet core (or whatever's at the other end of the backhaul)?

How much latency do you think is incurred between the ISP and cell tower?


For an edge to work, you need security. That means transport encryption which requires certs. I think this fact alone will keep the edge at modern secured datacenters. There is limited physical security at cell sites. Even less at the users home. This could mean there is no more transport encryption for this kind of edge. Or even worse, private key loss.

Not saying no here, just pointing out a very large concern.


Have you taken a look at the Cloudflare Keyless SSL tech?

Just did and I think it furthers my point. Cloudflare now owns your key and the edge is now their network.

My concern is cert/key management where the edge is somewhere you have very little control over, like a cell tower, random building network, or a user's house. Even with keyless, once that device is in my home, I'm pretty sure that entire thing can be reverse engineered. Not easy, like probes-and-oscilloscopes-on-exposed-leads hard, but physical access is pretty much game over, no?

I've worked in this space and the solution is detection and mitigation. Limit the damage to single devices, workflow the user in, look for human attack patterns. Defense is futile.


The point is that the key is never in the possession of the edge (i.e. Cloudflare). There is no way the edge could recover the key. They can use it to sign whatever they want, while you allow them to, although you can take whatever auditing measures you'd like there.

>The whole "edge compute" space seems poised to do well to me.

IMHO, I think it's mostly a solution in search of a problem. The internet backbone is fast enough to not be noticeable for end users. A centrally located server in the U.S. will have a maximum ping of 40 ms to anywhere in the U.S. That's faster than is perceptible to the end user. The only mainstream usage I can see is cloud gaming where ping is that critical.


It may not make sense if all your customers are in the US, but locations like Australia and others very much have an issue with latency, especially if the server they are hitting is around the globe.

NYC to Sydney is still only ~200 ms. That's probably in the realm of being noticeable but only barely. Even then, the real need is more like one server per continent rather than every cell tower.
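For intuition on where that ~200 ms comes from, a back-of-the-envelope propagation estimate (assuming a ~16,000 km great-circle distance and light in fiber at roughly two-thirds of c, ~200,000 km/s):

    t_{\text{one-way}} \approx \frac{16{,}000\ \text{km}}{200{,}000\ \text{km/s}} = 80\ \text{ms}, \qquad \text{RTT} \approx 160\ \text{ms}

Real fiber routes are longer than the great circle and add queueing/switching delay, which is how you end up near 200 ms.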

200ms is human reaction time, you can definitely notice it; the rule of thumb is 100ms https://www.pubnub.com/blog/how-fast-is-realtime-human-perce...

Your cite says that < 1s is "for users to feel they are interacting freely with the information". Even a 400ms round trip is well within that limit.

Most, if not all, websites require more than one round trip.

HTTP2 can reduce this significantly.

That statement is categorically false. One simple example is online gaming/MMOGs, where a latency of 200 ms would be a miserable experience. The upper bound often targeted for "acceptable interactivity" is somewhere between 100-130ms.

(1) I excluded gaming as something that is legitimately ping dependent

(2) You can achieve 100-130 ms latency with a "like one data center per continent" sort of network. You don't need edge computing on hundreds or thousands of edge sites to achieve that.


I'm sorry, but what are you talking about? Fastly has about 60 PoPs (physical locations). No one is talking about hundreds of thousands of sites. Even Akamai, who made it one of their primary marketing metrics, only ever got into the thousands.

The GP is talking about putting them in cell phone towers.

Your numbers are wrong but your line of reasoning is actually the basis of how Fastly's network was designed to be different from Akamai's. There's a graph about it at the 2:40 mark in this video: https://vimeo.com/132842124

That presentation was refreshingly honest and interesting to boot.

See also: page takes 2500ms to render JavaScript.

That's only good for small media like websites and images. Videos and downloads need to be cached in the provider networks; you simply cannot serve terabits of data from a single origin. Well, you can, but networks don't like to operate like that. So yeah, at scale, you need edges. The internet is dominated by companies at scale.

That's more of a CDN than edge compute. Edge compute would be something like having your Rails app run in a bunch of edge locations and then doing some eventually consistent magic to sync them up.

>IMHO, I think it's mostly a solution in search of a problem.

I agree. I've seen a discussion on here recently where someone was working on the problem of "data storage at the edge" to complement their "compute at the edge" offering. I think you can argue that when you move compute and data storage to the edge, you've effectively moved the data center to the edge. At some point, if you put enough data at the edge, operating your own backbone makes economic sense, and CDNs don't maintain backbones; they mostly buy paid peering and transit and try to leverage public peering wherever they can. I would even say edge compute is a solution in search of a revenue stream, and a hedge in a heavily commoditized market.


https://hpbn.co/transport-layer-security-tls/#leverage-early...

Establishing a connection can require several round trips, and the latency adds up. "Early termination" with a point of presence near your end user makes a pretty massive difference, especially if your user isn't on the same continent. Every additional 100ms costs Amazon 1% of their revenue https://news.ycombinator.com/item?id=273900
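To make the stacking concrete: a cold HTTPS request over TLS 1.2 typically needs around four round trips before the first byte of the response (TCP handshake, two TLS round trips, then the HTTP request itself). Using illustrative RTTs:

    T_{\text{first byte}} \approx 4 \times \text{RTT}:\quad 4 \times 200\ \text{ms} = 800\ \text{ms (distant origin)}\ \ \text{vs.}\ \ 4 \times 15\ \text{ms} = 60\ \text{ms (nearby PoP)}

TLS session resumption and TLS 1.3 cut some of those round trips, but the multiplier logic is the same.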


Gaming is a bad example because it isn't the ping to the edge server that matters but ping to another user.

Interesting that just 10 customers make up over a third of revenue for them.

Wouldn't that be true for many companies, even AWS possibly?

Absolutely not. 33% revenue concentration, no matter what business you are in, is NOT good from an investor perspective. Super risky.

Suffice it to say: a bunch of AWS features were essentially developed for the benefit of Netflix. So, while it entails some risk, ultimately, yes: it's not uncommon for a handful of customers to be the 900 lb gorillas.

Judging from pages 4-5, I'm guessing these include NYT, New Relic, Ticketmaster, Alaska Airlines, Spotify, and Github.

They mention other cloud platforms as competition, and Azure has a CDN. I doubt Github would switch anytime in the near future, but the dangers posed to smaller companies by the consolidation under giants is interesting. What happens when your competitor doesn't just try to steal your clients, but can actually just acquire them?

They also mention one risk as their dependence on AWS, a competitor, and that if all the cloud providers blackballed them, they'd be in trouble.


I think if Amazon hasn't kicked a competitor like Netflix off of their platform, Fastly has little to worry about.

That's probably not the right analogy. Kicking someone off of your platform isn't the same as using your own platform instead of a competitor's.

I'm sure Azure would welcome competitors to use their product, the same way AWS welcomes Netflix. A good analogy would be Amazon switching from Oracle to AWS-based solutions. In which case, they do have something to worry about.


With regard to their "what if all the cloud providers block us" issue, it's a good indication that they're fine. (Page 30: "We rely on third-party hosting providers that may be difficult to replace.") I don't think that's a serious risk.

For the Github thing, you're right, it's a different situation.


Those accounts aren't the big fish. Well, Github possibly is. Websites are tiny, especially when your revenue is per byte. The money is in video and downloads. Getting a fraction of Apple's or Microsoft's CDN spend for downloads is probably worth tens of millions.

Customers highlighted on their homepage: Alaska Air, Ticketmaster, Vimeo, Airbnb, Pinterest, NYT, Twitter, and Buzzfeed. I guess that leaves New Relic and Github from their S-1.

Though they also have several media-heavy companies besides Vimeo highlighted in their case studies, including Wistia, A&E, iHeartMedia, Shazam, 7Digital, and FuboTV.


From 2017 to 2018, they added 147 new paying customers. Of those, 57 were enterprise, and based on their metrics claiming > 80% of revenue came from these "enterprise" deals, probably high dollar.

Still, ~$50m in marketing/advertising spend to earn 147 new paying customers ($340k/customer) seems high to my untrained eye. Do those enterprise deals and that "132% Dollar Expansion Rate" justify such a high CPA?

Those with experience with this kind of enterprise-focused company - is this normal?


Those enterprises are recurring revenue, so I am sure their LTVs are factored in with the spend.

Depends. If you don't get the "whole enterprise," the part of the enterprise you got could easily be lost because of a top-down "everybody is going to use X" or "why do we have Y different vendors for Z?" type of situation.

Sure, but that's factored into LTV calculations. If the average customer lasts 5 years before they churn because "Everyone is using X", then you get 5 years of recurring rev off that marketing spend.
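A toy version of that LTV math, with a made-up $200k ACV, the S-1's 132% dollar-based net expansion rate, a five-year customer life, and the ~$340k per-customer acquisition cost estimated upthread:

    \text{LTV} \approx \$200\text{k} \times \sum_{k=0}^{4} 1.32^{k} \approx \$200\text{k} \times 9.4 \approx \$1.9\text{M}, \qquad \text{LTV}/\text{CAC} \approx \$1.9\text{M} / \$340\text{k} \approx 5.5

The ACV is purely illustrative; the point is that high net expansion compounds, so a CAC that looks scary in year one can still pencil out.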

And their biggest enterprise customers are paying Fastly at least 6 figures annually.

>~$50m in marketing/advertising spend to earn 147 new paying customers (340k/customer) seems high

Without knowing the ROI it's really impossible to know if this is high or not.

Spending $1 to make $2 is always the right decision.


> Spending $1 to make $2 is always the right decision.

Unless you have an opportunity to spend $1 and make $3. Your supply of dollar bills is still limited. It's hard to spend huge amounts of money. You've always got to consider where the greatest leverage exists.


Wait. What? 147 paying customers? I was personally one of those customers, paying like $100 a month, mostly just as a test. They really only added 147 paying customers? How many of them were just paying $100 a month for the base price?

It appears to be a misreading.

> We had 1,582 customers and 227 enterprise customers as of December 31, 2018. This is an increase of 143, or 10%, in customers and 57, or 33%, in enterprise customers from December 31, 2017.

Page 72.

The same figure appears again in the table on page 76.


I highly doubt they counted you as an enterprise customer. Enterprise customers have significantly large multi-year contracts. I have worked in a sales org before, and we reported only contracts that fetched ~$100k/year as enterprise customers. $10k to $100k were reported as SMB customers. These numbers may be different for Fastly, but I doubt $100/month customers are called enterprise customers.

Good luck getting $100k/year contracts in CDN, and at a second-tier provider.

The market has been commoditized by CloudFlare. CloudFlare is charging $200/month for the business plan, or $2000/month for the enterprise plan with everything.

There can be banks or governments paying 10 times more for custom plans (extensive support and long sales cycles). These would never adopt Fastly.

Akamai can get away with charging millions of dollars to some historic customers who really don't need the service. If they ever migrate away, that's explicitly to take a zero off the bill.


147%

147%, not 147 individual customers, for anyone else confused.

147% appears to be the Dollar-Based Net Expansion rate for the 3 months ending 31st Dec 2017.

It drops to 132% by 31st Dec 2018. Still quite a respectable figure.


Interesting that Brexit uncertainty is in their risk factors:

> These developments, or the perception that any of them could occur, have had and may continue to have a significant adverse effect on global economic conditions and the stability of global financial markets, and could significantly reduce global market liquidity and limit the ability of key market participants to operate in certain financial markets. In particular, it could also lead to a period of considerable uncertainty in relation to the UK financial and banking markets, as well as on the regulatory process in Europe. Asset valuations, currency exchange rates, and credit ratings may also be subject to increased market volatility.


Basically legally required for anyone doing business in Europe/the UK to add that boilerplate.

Via https://twitter.com/justincormack/status/1119217911380545536, it's interesting to see some technical detail in the S-1, including a likely reference to WASM, WASI, and Lucet [0].

"Moreover, our platform is highly technical and complex and relies on the Varnish Configuration Language (VCL). Potential developers may be unfamiliar or opposed to working with VCL and therefore decide to not adopt our platform, which may harm our business."

"We will continue to work on open source projects, which will empower developers to build applications in multiple languages, and run them faster and more securely at our edge"

[0] https://github.com/fastly/lucet/


I was tasked with integrating Fastly into our infrastructure, having not done any configuration with Varnish (VCL) before. ~10k req/s.

VCL can be challenging for complex flow control (IMO), but it is made easier with Fastly enhanced/custom VCL modules.
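For anyone who hasn't seen it, here's a minimal sketch of what Fastly-flavored VCL flow control looks like (illustrative only: the paths and TTLs are made up, and Fastly's required #FASTLY boilerplate macros are omitted):

    sub vcl_recv {
      # Bypass the cache for (hypothetical) authenticated API traffic
      if (req.url ~ "^/api/") {
        return(pass);
      }
      return(lookup);
    }

    sub vcl_fetch {
      # Long TTL for static assets, short TTL for everything else
      if (req.url ~ "\.(css|js|png|jpg)$") {
        set beresp.ttl = 86400s;
      } else {
        set beresp.ttl = 60s;
      }
      return(deliver);
    }

Each subroutine is a state in Varnish's request state machine, which is what makes the flow explicit, and also why complex logic can get hairy.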

Their documentation was good, and the Fastly support team was excellent. Their sales engineers gave us a baseline configuration that suited our needs and were quick to answer any followup questions.

Haven't noticed any downtime or response delays to date.


Check out this flow diagram for Varnish 2 [1].

[1] http://www.kalenyuk.com.ua/wp-content/uploads/2009/12/varnis...


/* Work for a Media Company */

We're looking to move one of our sites off Akamai, mostly for cost reasons. Fastly configuration is in Varnish VCL, and the senior engineers at my company highly rate Varnish as cache software, so Fastly was an easy choice.

I believe Akamai have been our CDN from the start. The amount of reconfiguration we'd need to do to get everything off would create a huge number of tickets in our work queue. The primary advantage of Akamai has been the number of datacenters they have to service our traffic. A customer can be in remote Australia and have their packets cached in a datacenter in the nearby telephone exchange. That's the reach of Akamai that AWS and Fastly can't compete with.

Their WAF and bot detection products are also very good. They are definitely an enterprise/full-service CDN provider. I definitely wouldn't call them 'legacy' by any means, but the type of service they provide is so different to a new player like Fastly.


I'm curious how Cloudflare would compare? Did you evaluate their products?

Curious too. I may be biased, but I genuinely like Cloudflare and all their products, because they simply work and work well.

I'm a fan for personal use but from what I saw, you need to switch your DNS over to them before you can begin to use their CDN. Unsure if this applies to their enterprise product...

I'm not sure why it wasn't given more attention. Their version of lambda/serverless looks interesting


Fastly’s major advantage is near-instant CDN config changes, where Akamai can take an hour to push a new CDN config.

Is that an actual advantage though?

Yes.

Yes, it is an amazing advantage, especially paired with configuration being a VCL.

This is dated. Akamai pushes configs out in minutes now

Not for all parameters. I pushed out an origin change on Akamai this year and it took about 75 minutes.

Definitely true - their instant deploys are very impressive from an operations point of view.

Small changes only take a few minutes to roll out on Akamai's network. Another commenter talked about Fastly's POP count. Akamai has far more, as can be seen here: https://www.akamai.com/uk/en/solutions/intelligent-platform/...

No doubt having so many POPs (some probably legacy and slower to update) likely extends the update time.


Fastly has four PoPs in Australia and two in New Zealand; you can see their reach in the map on their site: https://www.fastly.com/network-map

Also, if you're a big enough deal (thousands/month+) you'd be surprised how much of that config rewrite the Fastly sales engineers can help you with.


May I ask how I can download this as a PDF?

File > Print > Select 'Save as PDF' under Destination.

As a user, Fastly is really, really good. Akamai is the only serious competitor, but they are a lot more traditional in their sales and configuration. (You can theoretically use Azure as a middleman as some have pointed out, but they don't support custom SSL, and configuration changes take literally hours-to-days to propagate.)

Cloudflare is fine but they are still an order of magnitude slower, which is why we switched. I just tested, and Cloudflare's own DNS server is still taking 48ms to resolve our Cloudflare-hosted DNS, but it takes only 14ms for Fastly to establish a connection and send the first byte.

That speed does make a difference. When we switched from Cloudflare to Fastly we had about a 7% increase in completed sales.

I think their ROI is still too low for most small businesses; the only reason it actually makes sense for us is that it's free for open source/nonprofits. But I can imagine it's a big deal for larger ones.


Hi, curious about your completed sales stats. What are you selling? Was it instant? Was it gross top line?

We sell low-cost tickets to events to largely non-technical students. On the order of 10k sales a year, so we're nowhere near enterprise scale (which is also why we couldn't use Akamai). www.srnd.org if you'd like to learn more.

I find it difficult to believe that sales would be affected by a latency drop less than one tenth of a second.

How did you isolate the variables?


I am going to assume there is more to it than that. Even Amazon only says 100ms increments affect sales.

There was a difference in usable site time of about 500-600ms.

No, this absolutely checks with my experience - I chase 5 and 10ms improvements all the time because we've measured and know it increases conversion.

But it makes sense, too: if the metric here is average latency, that doesn't mean that some users didn't see a much more dramatic increase. Every tiny bit of frustration removed from the experience adds up.


>No, this absolutely checks with my experience - I chase 5 and 10ms improvements all the time because we've measured and know it increases conversion.

You've measured and know that a user seeing a 905ms load time converts more than one seeing 915ms?


A 10ms average improvement could mean 1 in 1000 customers went from 10s to 1s without any other change to other customers.

This is easily possible if you have a highly distributed customer base, and/or some small segment of your customers don't have good upstream peering with your provider.
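The arithmetic checks out:

    \Delta\bar{t} = \frac{1}{1000} \times (10{,}000\ \text{ms} - 1{,}000\ \text{ms}) = 9\ \text{ms}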


>A 10ms average improvement could mean 1 in 1000 customers went from 10s to 1s without any other change to other customers.

Which is why I asked the question in the way that I did. I buy that a slimmed down webpage loading 10 ms faster on average will increase conversions because that makes the site usable for the visitors on bad connections. Moving to a CDN doesn't have that impact. It shaves off 10-100ms across the board.


> Moving to a CDN doesn't have that impact. It shaves off 10-100ms across the board.

I think this is where we disagree. I've seen (firsthand and through analytics) situations where using a CDN can dramatically improve response time in a small subset of customers (while also getting the across the board win for most customers).

I've also seen CDNs (Amazon's in the early days) that were significantly slower than direct to Linode, even with a warm cache. It's a weird world, and packet routing is hard.


The latencies stack up for any web page. It's not like "oh this file was served 5ms sooner", it's an accumulation of latencies over all assets, and the interactions, that are required to present an experience to a user.

Also, latency measured at the server is amplified once it's received by the browser. And when the user's connection isn't great, all this is worsened. It quickly adds up. In fact, it doesn't "add" up, it "multiplies" up.


If you’re serving so many elements that 5ms-per-element improvements really accumulate, your problem is page complexity, not marginal latency.

Most pages nowadays are spamming requests to backend services, for business logic or not; that's just reality. Barely any real-world money-making website is going to have fewer than a dozen requests per page, and most need a user-initiated page load every few seconds to minutes.

What kind of storage do you use when chasing the 5-10ms improvements? I ask as someone working on super fast storage and memory and it's not always easy to find people who are aware of the differences in latency. I imagine you're already on NVMe SSDs, but if not, I'd be curious why (i.e. is software more of the issue than hardware?).

I highly doubt your claims about Cloudflare being an order of magnitude slower, they are consistently one of the fastest CDNs:

https://www.cdnperf.com/

Top 15:

    1 Google Cloud CDN        36.70 ms 
    2 jsDelivr CDN            36.80 ms 
    3 Akamai CDN              38.00 ms 
    4 Verizon (Edgecast) CDN  38.30 ms 
    5 Azure CDN               38.50 ms 
    6 Fastly CDN              41.38 ms 
    7 Cloudflare CDN          41.80 ms 
    8 AWS CloudFront CDN      43.00 ms 
    9 CacheFly                43.90 ms 
    10 BunnyCDN               46.00 ms 
    11 StackPath CDN          46.53 ms 
    12 KeyCDN                 47.00 ms 
    13 CDN.NET                48.46 ms 
    14 G-Core Labs CDN        48.74 ms 
    15 Quantil CDN            50.20 ms

Everything about what the top comment said seems like Fastly marketing propaganda.

7% increase in sales? With what sample size? With what confidence? How did you isolate variables?

For reference, I work at a company with $110M in annual sales. We were planning to start using Fastly. For obvious reasons, we wanted to know how much of a net positive that would be for the business. So we wanted to A/B test it. For us, at least, it's not as straightforward as it would seem.


You can Google my name if you think I'm working for Fastly. The only connection I have is that they are our CDN. They donate it, as we're a nonprofit, but I have zero obligation to say anything about them either way.

7% increase represents an increase in the number of students/parents purchasing tickets to our events. (It's not like they're spending more individually, but more people have been willing to give us money.)

That's based on several years with no substantial change in the way we market or the design of our site.


How do you know the 7% increase was due to Fastly and not some other small unrelated change to your web site? A few 10 ms makes no difference.

> A few 10 ms makes no difference.

About a 1-2% difference per 100 ms seems well supported (e.g. [1]), at least if your page load time is already low enough. 7% is very high, but some effect is expected.

1: https://developers.google.com/web/fundamentals/performance/w...


I don't buy it. So what you are saying is 2% for every 100ms. So that means if we decrease page load time by 1ms, then we should expect to see a .02% increase in sales?

So if we do $10m annually, a 1ms decrease in load time should boost sales by $2,000!

We would all probably agree that 1ms will make no statistical difference.

The problem with these studies is that most are dealing with much longer load times. Like 3 seconds vs 19 seconds! Obviously, that will make a HUGE difference. You can't then extrapolate that down to the millisecond.

The other problem is that many of these studies are basing their numbers on average load times. So they are comparing two groups and averaging load time. Group A averages 100ms faster than group B, and group A increased sales by 2%.

But what really happened is group A had 800ms load times across the board, and group B had 800ms load times for 98% of their page loads and 20,000ms on the remaining 2%.

So working with averages can be largely misleading.

I can't see the details of the one study that claims 100ms increments, but I'm very skeptical.


Yes, the 1ms will likely be statistically insignificant. But so is $2,000 of extra income out of your $10m. If anything, you have a better hope of measuring the 1ms than the $2,000.

I agree with your larger point that average latency is not a good measure. Even for the perception of a single user consistency is more important than a good average. For large groups the average is even less useful.

But intuitively I see lots of places where 100ms makes a world of difference. Just like there's a big, very perceptible difference between a 200ms animation and a 300ms animation, a time to full render of 200ms can change your experience compared to 300ms. The slower the page load, the more deliberate your movement. The closer you get to 16ms (1 frame) for a page load the smaller the investment for clicking a link, and the higher the willingness to experiment and explore. Some of that inevitably leads to conversions and sales.


I’ve seen the data and it’s true, but non-linear. Going from 19s to 3s is going to help much less than going from 3s to 1s. The curve is almost entirely in the under-3s part.

I agree it may make some difference... but I don't think it can all be attributed to a CDN change. Maybe they moved some buttons around the same time as the CDN update. Who knows.

> 7% increase represents an increase in the number of students/parents purchasing tickets to our events. (It's not like they're spending more individually, but more people have been willing to give us money.)

We got that part, but correlation doesn't always imply causation. Maybe that change was seasonal or just plain old growth not accounted for in your 7%?


CDNPerf is an okay way to get an overview, but not a great way to actually compare CDNs.

That said, it’s not 10x or whatever. Maybe like 5-10% depending on location.


Do you know of a better way than CDNPerf? They do 300 million requests per day from random users from around the world.

Cedexis is also a good measurement tool. While most of it is a paid product, they have a few reports you can see. They also list cloud providers and have other reports, such as throughput, etc.

https://www.cedexis.com/get-the-data/country-report/


Is there any documentation on how they measure it? If they're running tests from data centers / peering points, or testing things which are highly likely to be cached, the results are going to be harder to generalize to normal usage.

There are a number of features Fastly has which others (excluding e.g. Akamai) don't, which makes head-to-head comparisons more difficult. I have no doubt that Cloudflare is very fast in perfect conditions, but there was a much higher number of cache misses because of less flexible configuration, which literally made the difference between 20ms and 200ms in production.

Can you elaborate on this?

Enterprise customers are generally able to get their SE to set whatever cache key they like, and if that is insufficient they can use the worker cache API: https://developers.cloudflare.com/workers/reference/cache-ap...

As I understand it a charity may get the former thing for free through https://www.cloudflare.com/galileo/ , but the latter probably still has usage-based billing.

Disclosure: I work for Cloudflare.


Comparing CDNs via an aggregator is flawed. I couldn't find CDNPerf's methodology. One patient goes to two specialist doctors and returns with two different assessments (observer error). The multitude of configuration variations in Cloudflare and Fastly alone makes it hard to compare. Free plans should be removed, and maybe just compare Enterprise to Enterprise at the same price point.

DISCLAIMER: Akamai Product Manager here.

Deployment times for config changes have been in the sub-5m range for at least two years. It used to take hours (never days) to propagate changes across the server estate, but not anymore. And while we were admittedly late to the DevOps train, we have made up lost ground. We have nearly 100 individual APIs to control almost any aspect of our products. CLIs if you don't want to write to the API. A Sandbox to test config changes locally. The ability to validate OAuth tokens at the Edge, cache GraphQL responses, and throttle and/or quota API traffic on a global basis. Hashtag "legacy CDN."


What's the purge time like?

Less than 5s across all 240K+ global servers

I'm not talking about Akamai directly; I'm talking about the Azure interface, which is as close to Akamai as most HN commenters are likely to get. Akamai itself is great, but the Azure interface makes you (and VZN) look terrible.

> Deployment times for config changes have been operating at the sub-5m timeframe for at least two years.

Fastly is sub 5 seconds.

> And while we were admittedly late to the DevOps train, we have made up lost ground.

Not anywhere close. Your cache invalidation takes forever. Ability to assign tag objects does not exist. Engaging "professional services" to make a config change like it is 2002?


> Not anywhere close. Your cache invalidation takes forever. Ability to assign tag objects does not exist. Engaging "professional services" to make a config change like it is 2002?

Akamai also has a fast purge these days, sub 5 seconds as well I believe. Works nicely.


Fast purge only works on certain kinds of objects in certain kinds of configurations, not to mention the idea of "I would like to make 20k purge requests in a second via an API" is met with stares.

Fastpurge is a hack.


Interesting. I only have experience with the occasional simple (manual) purge and I could verify the object was invalidated quickly. Can you elaborate on an example config where this goes awry?

Fast purge works with all kinds of site delivery (and site-delivery-based) products. Unlike modern CDNs, Akamai's other products (such as, for example, VOD and media services) do not live in the same object space and hence are not fast-purge compatible.

You would think purging a video stream would be the same as purging a standard site delivery object; after all, the stream is an http(s)-accessible .m3u8 and a pile of .ts chunks. But that's not the case: in some cases it can take up to 120 minutes.


I see, thanks for the info.

Disclaimer: I work at a company that is both an Akamai and a Fastly client. I worked with Fastly first, then Akamai after. Opinions are my own and not the views of my employer.

There has definitely been a push to catch up, but I wouldn't say you're there yet. The Terraform provider for instance is not up to par with the Fastly one. Having attended an Akamai DevOps workshop not so long ago, there didn't seem to be an easy way to use the CLI tools to configure a property in an idempotent way in a CI/CD pipeline. Maybe things are different now.

Akamai has some advantages for enterprise customers though such as easy assignment of cost using the CP codes. That's very handy. The integration with Let's Encrypt is also very nice.

Fastly delivers in its simplicity and documentation. Whatever use case you need, you'll most likely find it in their docs and on their blog. Being able to use VCL to configure your caching is in my opinion way easier (and with fewer limitations) than the Akamai rule tree (even if you don't have previous Varnish experience). Varnish's finite state machine makes it easy to configure and debug any kind of behavior you want. The Akamai rule tree has caused me quite a few 'WTF' moments. It's difficult to debug when something doesn't behave as expected, and there's a certain amount of 'black magic' in the behaviors sometimes that makes it difficult to judge what the outcome will be.

The good thing is that both companies have different features and get pushed to incorporate features deemed useful by their customers at one or the other provider.


> That speed does make a difference. When we switched from Cloudflare to Fastly we had about a 7% increase in completed sales.

Interesting. Is that for the flow that starts at https://www.srnd.org/sponsor => https://www.srnd.org/sponsor/pay or somewhere else?

(Those pages do load very fast.)


Honestly, I love Fastly but wasn’t expecting this? Seems early.

Maybe it’s just a good time to strike while the market is frothy.


This guy has compiled the breakdown of equity owned by founders at IPO for all the recent (and some historical) IPOs:

https://grph.com/d/mzo1W9QP4Mk

You'd be surprised how random these appear. I guess there's no science to it, and much more chance is involved than we would like to admit.


Be careful making assumptions from those graphs. In some cases they include employee owners and in other cases they don't. For example, for Pagerduty, one of the cofounders is missing, presumably under "Other" since he's the only cofounder that's an employee. But for Google Larry and Sergey are listed. So I'm not sure how it was decided who goes in the graph.

Fastly's bandwidth prices seem high to me: $0.12-$0.28/GB is 2007 rates for transit. Perhaps this is why GitHub Pages started capping bandwidth at lower limits around the time they switched to Fastly? For comparison, I pay less than $0.01/GB for a CDN right now, and the price of transit on average drops 40%/year, so even that's above market rate now.
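To illustrate the compounding, taking the 40%/year decline at face value:

    p_n = p_0 \times 0.6^{\,n}, \qquad \$0.28/\text{GB} \times 0.6^{12} \approx \$0.0006/\text{GB}

i.e., a 2007 transit rate compounded to 2019 lands well under a cent per GB.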

A lot of people (including Cloudflare) just give it out for free, making this really a space for enterprise plays, which, at their size, have the clout to push for better rates across many similar competitors, or to just run their own infrastructure.

I honestly just don't get the value add here. It really doesn't feel like an IPO play.


That's the card rate. Their bigger customers aren't paying that.

It's part of why I don't get it. Large companies have multiple competitors to negotiate with, driving down prices (and this does not look to change anytime soon; transit pricing is dropping 40%/year).

We're talking about an S-1 here. Where's that growth going to come from? Cannibalizing Akamai is not a long term IPO strategy.


At my previous job, we had Fastly as a potential new CDN provider set up against our existing CDN provider and two other new potentials. After a few rounds of calls for bids, Fastly won out.

Based on my experience with the other providers they were also, by a large margin, the most modern - it felt like moving from a 2008 integration to a modern, fully RESTful API with great documentation and decent UI.

This is all anecdotal, but they did combine a great technical platform with great support. If transit prices are the same or similar for all providers in that size category, they have to fight on features and support instead.


They won based on what?

I agree re: cannibalizing Akamai. I don't have any inside knowledge at all; I'm just a fan of theirs, and the last two companies I've worked for have both been customers. There's the security/WAF stuff that could be hugely lucrative for the larger customers that decide that's a priority. There's also whatever future potential edge computing/serverless ends up having. They're already kind of perfectly positioned to ramp that up if they want to.

I think the easiest short term play is the loooong tail of CDN business that's out there that never would've bothered with Akamai because it's too expensive. Their partnership and bizdev folks are really smart about reaching out to small and mid market PaaS providers to set up coverage for all of the sites under their management. It only takes a few of them to take off in their niche to add up to a fairly significant amount of revenue.

unrelated: I'm a fan of Neocities, thank you for that :thumbsup:


Thanks, appreciate it. :)

I'm not experienced with S-1 filings, but don't they usually include the share price targeted and amount they're looking to raise? It's blank up top on the document right now. When is that typically filled in?

Not until closer to the IPO date, after investors have been able to make an evaluation and been pitched by the company and bankers.

A 7x multiple on $140m in sales might suggest around $1b (divide that by # shares to get a possible share price).


Or look at Zoom's 47.7x multiple and you've got a $6.6B market cap
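Spelling out the arithmetic in the two comments above:

    7 \times \$140\text{M} \approx \$0.98\text{B}, \qquad 47.7 \times \$140\text{M} \approx \$6.7\text{B}

(The second lands close to the $6.6B figure, depending on the exact revenue base used.)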

Where I work, a website/app in the top 20 US Alexa rankings, we use Fastly and it is pretty great. If I have a complaint, it's that their Varnish version is old. The web UI is great, the service is great. I've never had an issue with them in my many years of working with them.

Yeah, they run a fork of Varnish 2 with inline C disabled, and a number of custom extensions (surrogate keys, tables/dictionaries, etc.), as well as likely customizations to support their massively multi-tenant deployment and more.

Add in the necessity of porting VCL for all of their customers, and the prospect of upgrading Varnish is obviously pretty daunting.

I suspect they'd rather focus on their new WebAssembly-based configuration solution than try to keep up with changes in VCL.
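For a flavor of two of those extensions, a sketch in Fastly's VCL dialect (the table name, key, and surrogate keys here are all made up):

    table feature_flags {
      "beta_checkout": "on",
    }

    sub vcl_recv {
      # Fastly "table" (edge dictionary) lookup, with a default value
      set req.http.X-Beta = table.lookup(feature_flags, "beta_checkout", "off");
    }

    sub vcl_fetch {
      # Surrogate keys tag the cached object so a whole group
      # can be purged with one API call later
      set beresp.http.Surrogate-Key = "catalog product-42";
    }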


Is it a coincidence that there has been such a large uptick in IPOs just over 10 years after the real-estate meltdown? Or did the meltdown kill off a generation of unicorns that couldn't get funding during the crunch?

It's just anecdotal evidence, but I talked to a CEO who had just closed his A round with Kleiner Perkins in August or September 2008. When it came time to fund the deal, the partners at the VC firm made the capital call and the LPs couldn't fund it. So Kleiner Perkins called the CEO and told them they had to cancel the deal.

The CEO had to make big cuts, couldn't pay rent on the building, etc. He eventually sold the company, but I wonder if it or others would have been IPOing around 2014-2015 if they had gotten the funding they needed.


I know very little about that world, but I heard Dodd-Frank made going public less appealing. The first Google hit for "IPO over time" [1] shows a drop after the dot-com bubble around 2000 and not much around when Dodd-Frank got passed, but it seems surprisingly flat. Money has been really cheap since the recovery. This, plus the amount of VC money, has likely made the threshold for IPO much higher.

[1] https://www.statista.com/statistics/270290/number-of-ipos-in...


https://i.imgur.com/5ZwpT7z.png

Venture capital spending keeps rising, and a lot of it is coming from SoftBank: https://www.recode.net/2017/10/11/16459856/softbank-biggest-...


I don't use Fastly on the supply side, but I do use it as a consumer, and the services they host that I use have good responsiveness and availability.

The Fastly engineers I know are nice people. They take care, they're smart, and AFAIK they've stayed small and focused as a group. They're active in operations groups and standards.

What's not to love?


How is this different than AWS?

At GitLab we use Fastly and we’ve been very happy with their service. It was fast to implement and greatly helped to speed things up. As a fellow Commercial Open Source Software (COSS) company I think it is cool that they are based on Varnish.
