
I don’t know. I think a good SSR library + some clever front-end frameworks + an edge compute platform where you don’t have to worry about any scaling, and you’re done. I’ve seen some really impressive demos of stuff that is extremely dynamic (i.e. per request), rendered server side on first use for the immediate portion you need, with transparent hydration in the background to download interactivity as needed. Sure, it seems more complicated than CDN + S3, but I think in actuality it’ll be simpler in application.

It all works super slick, and conceptually there’s actually very little complexity for those applying it (the frameworks take care of the difficult bits). Since this is happening at the edge (in Cloudflare’s case, where I work, within something like 50ms of transit time to 95% of the population), and you’re only doing the initial view, you can get websites that load large pieces of content much more quickly than rendering it client side, which is dominated by less powerful devices with less bandwidth and higher latency (i.e. SSR + transfer time is competitive), especially if you’re collating responses from different backends. You could even see that extending to prefetching the raw data you’ll need to render the rest of the page in the background, and then either preparing a rendering, just bringing it closer into the cache as a prefetch, or even pushing it to your browser proactively.
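To make the shape of this concrete, here’s a minimal sketch (all names here are hypothetical, not any particular framework’s API): the edge function renders the immediate view to HTML per request, and a hydration bundle loads in the background after first paint.

```typescript
// Hypothetical sketch of per-request SSR at the edge (not a real framework
// API): render the immediate view to HTML on the server, and let the client
// download interactivity (hydration) in the background.

interface PageData {
  title: string;
  items: string[];
}

// Pure server-side render of the initial view.
function renderPage(data: PageData): string {
  const list = data.items.map((item) => `<li>${item}</li>`).join("");
  return [
    `<html><head><title>${data.title}</title></head><body>`,
    `<h1>${data.title}</h1><ul id="app">${list}</ul>`,
    // Hydration bundle is loaded async, so it doesn't block first paint.
    `<script async src="/hydrate.js"></script>`,
    `</body></html>`,
  ].join("");
}

// In a real edge runtime you'd wire this into a fetch handler, roughly:
//   fetch(request) { return new Response(renderPage(dataFor(request)),
//                      { headers: { "content-type": "text/html" } }); }
```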

Now of course it’s possible this stuff won’t generalize, but I don’t see any obvious obstacles. It feels like within 10 years this could be the dominant way client web apps are written. I get that we originally used to do that, but it’s about taking the best of both worlds: SSR lets you get super-high-performance initial loads, dynamic client-side handling lets you handle interactivity better, and doing it transparently makes your development process a heck of a lot simpler as you don’t really need to differentiate the code as much (whereas I think you would with SSG+SSR). And versus SSG, you probably don’t need to identify the general pieces of content and set up a different layer; you can just change some caching parameters / do content-based hashing transparently.

I think that’s maybe the direction OP was heading toward with his remark about the distinction not being helpful.




Don't get me wrong, I definitely think the current movement and focus on a great UX is amazing! But as someone who cares a lot about scalability and operational complexity + cost, it hurts a little to see approaches that work great at small scale but are troublesome at large scale being recommended so strongly :/

> edge compute platform

I would consider that quite a big difference in infrastructure. It can easily be done already right now, but the cost and performance equation between serverless compute (at the edge or not) vs. static file serving becomes noticeable at scale.

Let's make some pricing examples:

- We'll have 2 million "first page loads" per day (~60 million per month)

- Let's give our bundle a low estimate of 200KB

------------

Vercel's pricing (just to take a popular example)[0]:

- Pro tier: $20/month includes 1m execution units (EU, 50ms of CPU time) and 1TB bandwidth

- Then $2 per 1m EU and $40 per 100 GB bandwidth

- Let's ignore GB-hours for now (memory x runtime)[1]

- Let's assume you're always able to render your SSR page in 50ms or less (quite optimistic if you're doing any form of DB operations in your SSR)

AWS CloudFront pricing[2]:

- Always free tier: 1TB bandwidth, 10m requests

- Then $0.085 per GB bandwidth and $0.0100 per 10k requests

- (at scale you can get +50% discount on bandwidth by committing)

------------

We can then run the numbers:

- Estimate (200 KB) on Vercel:

  - (60m requests - 1m free) * $2 per 1m = $118/month

  - 200 KB * 60 million = 12,000,000,000 KB = 12 TB

  - (12 TB - 1 TB free) / 100 GB * $40 = $4,400/month (you'd probably want an Enterprise plan at this point unless I did the math wrong here. Their FAQ suggests reaching out to their sales team, which you should definitely be doing at this scale!)

  - = $118 + $4,400 = $4,518/month
- Estimate (200 KB) on CloudFront:

  - (60m requests - 10m free) / 10k * $0.01 = $50/month

  - 200 KB * 60 million = 12,000,000,000 KB = 12 TB

  - (12 TB - 1 TB free) / 1 GB * $0.085 = $935/month (which could be $440/month with reserved capacity)

  - = $50 + $935 = $985/month
------------

Totally open to having made a calculation error above, but this is the kind of thing one needs to concern oneself with at scale. The above example is quite realistic; in fact, it's a lot less than our usage at my current work.
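As a sanity check, the arithmetic can be reproduced directly (prices as quoted above; this is not either vendor's official calculator):

```typescript
// Reproduce the cost estimates: 60M first page loads/month, 200 KB bundle.
const requests = 60e6;
const transferGB = (requests * 200e3) / 1e9; // 12,000 GB = 12 TB

// Vercel Pro: 1M execution units and 1 TB bandwidth included,
// then $2 per 1M EU and $40 per 100 GB.
const vercelCompute = ((requests - 1e6) / 1e6) * 2;       // $118
const vercelBandwidth = ((transferGB - 1000) / 100) * 40; // $4,400
const vercelTotal = vercelCompute + vercelBandwidth;      // $4,518

// CloudFront: 10M requests and 1 TB bandwidth free,
// then $0.01 per 10k requests and $0.085 per GB.
const cfRequests = ((requests - 10e6) / 10e3) * 0.01; // $50
const cfBandwidth = (transferGB - 1000) * 0.085;      // $935
const cfTotal = cfRequests + cfBandwidth;             // $985

console.log({ vercelTotal, cfTotal });
```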

Admittedly, most of the cost here comes from bandwidth. If you substituted Vercel with AWS Lambda behind CloudFront, you would be able to benefit from CloudFront's bandwidth pricing for this.

[0]: https://vercel.com/pricing

[1]: https://vercel.com/guides/what-are-gb-hrs-for-serverless-fun...

[2]: https://aws.amazon.com/cloudfront/pricing/?nc=sn&loc=3


Great to see someone care about this. So many discussions lack this consideration, and ultimately companies pay for it. I've seen this many times; it's why there are, e.g., consultants who come in to "optimize" the platform.

Cost isn't the only thing to care about; there's also performance. Functions might not run in every data center that has a CDN PoP. The SSR vs SSG comparisons always miss this point and claim SSR is faster, but that's only true if you're in a big popular city. As you expand, what about the other customers?

Also, what about security? That's probably even more important. Each request to Vercel has a higher cost than the equivalent on CloudFront. What are the safeguards, e.g. rate limiting or DoS protection? For CloudFront you'd need to account for WAF and other expenses.

You don't even need a lot of customers; a single bad actor can easily take down your hosting on Vercel. There have been plenty of blog posts about startups with serverless setups getting attacked, which cost them thousands to tens of thousands of dollars.

Side note: bandwidth costs vary by region. Vercel uses a single price, so the economics may shift depending on where your users are, and CloudFront costs can go up. You may also need CloudFront Functions to do a few things.


Yes, most edge products have these flaws. Cloudflare Workers do not, though (at least not the ones being focused on). They run in every single data center across the world and are co-located with the CDN, which you can access programmatically. I wish you the best of luck if you try to take down the system; it's handling an insane amount of load already. There's also no billing for bandwidth.

You can manage DDoS and rate limiting protections although I don’t know the pricing on those (could be free - I don’t recall).

Disclaimer: work at Cloudflare


Not 100%, unless you only have customers in the US. Correct me if I'm wrong, but unless I pay for the Business plan, I may get re-routed on major ISPs in many countries. I've experienced this first-hand.

For many workloads Cloudflare can be more expensive.


Workers run in every datacenter, but there are a variety of reasons your request might be rerouted to one that's not the closest one physically, such as:

* The closest one may already be at capacity, requiring rerouting some traffic away.

* ISPs in some parts of the world are weirdly fragmented, not interconnecting with other ISPs in the same region. As a result, if you're on the "wrong ISP", the network distance to the local colo may be longer than to some other colo that is physically further away.

* To serve content from servers in China, you must have an ICP license from the Chinese government. If you don't have that, Cloudflare will send Chinese traffic to the closest non-Chinese colo.

* Probably other reasons I'm not thinking of off the top of my head.

Note that all these apply to Cloudflare in general regardless of whether you use Workers. Enabling Workers has no effect on what datacenter you get routed to. Cloudflare's infrastructure team is always working to improve these situations, e.g. adding more servers, negotiating more connectivity, etc.

A nice thing about building on Workers is you don't have to worry about any of this. E.g. you don't need to think about redirecting your traffic when a colo is over capacity... it happens automatically.

(I'm the tech lead for Workers.)


Thanks for the reply.

I'm referring to e.g. https://community.cloudflare.com/t/does-free-plan-cloudflare... where the poster says:

"For predominantly Australian traffic, you’d probably need CF Business or Enterprise plan. For India traffic, would need CF Enterprise plan."

Where do workers run in this instance and what impacts are there say if I'm not on a business or enterprise plan?

Point being if we do need an enterprise or business plan for these features it's not 100% free.


Sorry, I'm not on the network team and I don't know whether that specific forum comment is accurate.

All I can say is that if you use Workers, the Worker always runs in the datacenter that receives your request, which is exactly the same datacenter that would have received your request if you weren't using Workers.


True for edge pricing, which is why I don't generally use paid Vercel. I run any larger apps on a Hetzner VPS with Next.js running in a Docker container, which should perform identically, last I asked the Vercel team. For your numbers it would likely be much cheaper, with the only downside being that you're in one geographic location; though I guess you could use multiple VPSes in various locations.

I think your pricing is actually way off. Take a look at Cloudflare Workers: no egress charges, and our operational cost is much smaller. We have two usage models, one for when you don't use a lot of compute (charged by request) and one for when you need more (charged by GB-h). But our pricing is still waaay lower than Amazon's edge platform.

Disclaimer: work at Cloudflare.


Way off as in the Vercel calculation is off? It was a bit hard to find, but I would assume "bandwidth" in their pricing refers to egress, at least.

That said, super interesting to see that Cloudflare dropped the egress costs![0] That would probably make their equation the most attractive, although that will only hold as long as your Cloudflare Workers aren't talking to e.g. an AWS database, which would just move the egress costs there instead (internal traffic can sometimes be orders of magnitude higher than external).

If you only use it for SSR and keep whatever the SSR depends on within Cloudflare's offerings, then it looks to be the most competitive option for that so far. Your API can still live elsewhere, so there's little downside.

One concern I would have, having been quite deep in AWS Lambda, is the 128MB memory limit on Cloudflare Workers (unless I misunderstood and it can go higher?). For JS frameworks that very likely has performance implications, whereas Rust frameworks will perform well within the lower memory limits. Lambda is a bit different, though, since its CPU scales with memory, so I'd be curious if there are any benchmarks of these frameworks on CF Workers.

[0]: https://blog.cloudflare.com/workers-now-even-more-unbound/


A couple points:

1. Last I checked (admittedly not recently), Lambda throttles CPU proportionally to memory size, e.g. as I recall a 128MB instance is throttled to 1/8 CPU core, 256MB is 1/4, etc. Workers never throttles.

2. The same code running on Workers will use far less memory than when running on Lambda. In fact, the average Worker uses around 3MB RAM. The difference here is that Lambda is counting the memory for the whole runtime process, e.g. a Node.js process, whereas with Workers the runtime is shared with other tenants and only the pure JavaScript heap usage counts against the memory limit.

So, 128MB goes further with Workers.

That said, obviously some apps need more. At present we don't have a way to increase this limit, but we're working on it.

(I'm the tech lead for Workers.)


I wonder if it's feasible to use even less RAM per worker. The pioneers of time-sharing would have considered 3MB per user extravagant. Of course, it practically doesn't matter now, but still, I wonder if it would be feasible to reduce the heap size per isolate by at least one order of magnitude.

Way off as in the claim that "edge compute is way more expensive" is only true if you consider Vercel / AWS Lambda to be the universe of edge compute.

If you consider Cloudflare Workers pricing, it becomes a lot more competitive. No egress + much lower per request fees + you can choose between per wallclock ms billing and per request billing (different slopes + intersection points).
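To illustrate the "different slopes + intersection points" idea (with made-up prices, not Cloudflare's actual rates): a flat per-request fee and a metered per-ms fee cross at a break-even duration, below which metered billing is cheaper.

```typescript
// Illustrative comparison of two billing models (prices are invented for
// the example): flat per-request vs. metered per wallclock ms.
const perRequestFee = 0.5e-6; // $ per request, regardless of duration
const perMsFee = 0.02e-6;     // $ per wallclock ms of execution

// Cost of a single request under each model, as a function of duration.
const costFlat = (_ms: number): number => perRequestFee;
const costMetered = (ms: number): number => ms * perMsFee;

// Break-even duration: requests shorter than this are cheaper metered,
// longer ones are cheaper flat.
const breakEvenMs = perRequestFee / perMsFee; // 25 ms with these numbers
```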

I'm not claiming we're a good fit for every use-case out there. My root point though was about SSR and I think a platform like Cloudflare Workers is perfect for making SSR extremely simple.

Disclaimer: Work at Cloudflare. My opinions are my own.

