Please don't make the mistake of differentiating SSR and SSG; even Next is slowly moving away from this arguably bad naming. There is only server rendering, and it can happen on request, be pre-computed at build-time, or anything in between. The SSR/SSG split is historical, trips up most beginners, and may mislead your implementation. Otherwise cool initiative, many Next devs are learning Rust too since it's part of the codebase.
Making i18n a special case is also something Next is moving away from. I18n is just one example of personalization, like multi-tenancy or A/B tests. Support personalization and you get all of those for free. From the readme it seems like a good clone of Next 12, but a small step behind the current rationales that explain Next 13's design choices. Again, still a very cool project!
I always thought of static sites as prebuilt from templates, whereas traditional SSR sites populate the templates (think jsp or php) on demand. Depending on the framework, the SSR page might already be cached and be effectively static.
Yes, seeing it as a cache where the cache key is a transformation of the user request (including requests to "static" pages) is a better model, one that leads to a unified vision of SSR/SSG
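A tiny sketch of that unified model (every name here is illustrative, not any framework's real API):

```typescript
// Server rendering modeled as a cache keyed on a transformation of the
// request. "SSG" is the special case where the key ignores most of the
// request and entries can be pre-filled at build-time; "SSR" computes
// entries on demand at request-time.

type Request = { path: string; locale: string; userId?: string };

const cache = new Map<string, string>();

// The cache key decides how "static" a page is:
// path only          => fully static, same for everyone
// path + locale      => static i18n
// path + userId      => per-user rendering
function cacheKey(req: Request): string {
  return `${req.path}:${req.locale}`;
}

function render(req: Request, renderPage: (req: Request) => string): string {
  const key = cacheKey(req);
  let html = cache.get(key);
  if (html === undefined) {
    html = renderPage(req); // computed on demand at request-time...
    cache.set(key, html);   // ...or pre-filled at build-time for known keys
  }
  return html;
}
```

The only real difference between the "modes" is when and for which keys the cache gets populated.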
I get that you can see it that way. If you were going to use server-side rendering (SSR) anyway, you can claim that static site generation (SSG) is a special case supported by any SSR framework. Or perhaps even that SSR is a generalisation of SSG to support dynamic content.
But it feels like a bit of a stretch. The whole point of SSG is to simplify your architecture and tooling by forgoing the ability to dynamically render content on the server. It is a trade-off that won't be suitable for every application, and that's absolutely fine.
If you want to argue that SSR tooling can be so simple and cheap that the SSG trade-off isn't worth it, then I'm keen to learn about that.
But to elide the distinction between the two seems unhelpful to me.
I mean the framework can of course differentiate those use cases clearly; static and dynamic still exist in Next 13's App Router. But making them totally separate is a mistake too. Namely, what bothers me is that a few points made in the docs or by users in this comment section show a limited vision of static rendering. It is certainly not limited to generic content that is the same for all users; I've built enough counter-examples to prove it.
Why is it a mistake to separate them? If I can generate my site as a bunch of static content, upload it to a server, and have it served by nginx or similar, then I don't need to think about the Next.js App Router, React, or indeed any custom server-side code. That's what static site generation is for, and it seems rather distinct from what you are talking about.
Yes it is limiting, and it won't be appropriate for many use cases. But if you can do it, it makes things simpler, cheaper and more efficient.
That's not to say you can't combine SSG tooling with dynamic content too if you need it. But static-only is a perfectly valid choice if you can get away with it.
Your answer conflates statically rendering a page (server-side prerendering at build-time) with statically rendering a website (exporting, in Next.js terminology). You seem to talk about statically rendering the website, while my point is more about statically rendering pages or layouts.
Anything that has to go through a server is not effectively static.
If you live in Tokyo and connect to a website in Oregon, this "effectively static" page could take ~100ms (served by origin) vs an actually static one at ~10ms (served by a CDN's local PoP).
Side note: this is why developers need to care about infrastructure and vice versa. I distinctly dislike the abstract separations as the implementation details always matter.
Latency issues for the round trip are a different topic. Static means unchanging. In this context, pre-rendered pages stored in HTML files. CDNs may inject dynamic content into the page or filter the request, but this is a digression.
That's also why you can render user-specific content statically: you just need to redirect the user to the right page, for instance using a URL rewrite
An example from the doc about state: "For instance, you can't generate personalized dashboard pages at build-time, because you don't know yet who your users are"
That's an outdated take; it's been proven wrong, otherwise you wouldn't be able to implement static i18n. The extreme case where you have a version of the page for just one user is totally doable and can be relevant (say, a version of the page just for the admin).
IMHO it's a huuuge mistake to try and squash these two very distinctly different things into one word (SSR), and it has added a ton of confusion to many with Next.js's move away from it.
- SSR: Dynamic rendering of paths, happening at runtime.
- SSG: Static rendering of paths, happening at compile time.
A separation makes it much clearer that the core difference here is that SSG can only take you as far as "something generic for everyone" and SSR will be able to render "a page unique for a specific user".
Infrastructure-wise there is a world of difference between the two approaches and they are not even close to being related.
- SSR: You'll need servers, scaling, load balancing, etc.
- SSG: CDN + S3 and you're done
EDIT: To add a bit, imagine the following "I'm using SSR.", "Ah, cool, like full SSR or only half SSR?", "Only half, serving static files" - at this point, it's clear that we've now overloaded SSR to be a less useful term to describe what it is, which I personally find quite unfortunate.
I don’t know. I think a good SSR library + some clever front end frameworks + an edge compute platform where you don’t have to worry about any scaling, and you’re done. I’ve seen some really impressive demos of stuff that is extremely dynamic (i.e. per request), rendered server side on first use for the immediate portion you need, with transparent hydration in the background to download interactivity as needed. Sure it seems more complicated than CDN + S3, but I think in actuality it’ll be simpler in application.
It all works super slick and conceptually there’s actually very little complexity for those applying it (the frameworks take care of the difficult bits). Since this is happening at the edge (in Cloudflare’s case, where I work, within something like 50ms of transit time to 95% of the population), and you’re only doing the initial view, you can get websites that load large pieces of content much more quickly than rendering it client side which is dominated by less powerful devices (ie SSR+transfer time is competitive) with less bandwidth/higher latency if you’re collating responses from different backends. You could even see that extending where they prefetch the raw data you’ll need to render the rest of the page in the background and either prepare a rendering, just bring it closer into the cache as a prefetch, or even push it to your browser proactively.
Now of course it’s possible this stuff won’t generalize, but I don’t see any obvious obstacles. It feels like within 10 years this could be the dominant way client web apps will be written. I get that we originally used to do that, but it’s about taking the best of both worlds: SSR lets you get super high performance initial loads, while dynamic client side handling lets you handle interactivity better, and doing it transparently makes your development process a heck of a lot simpler as you don’t really need to differentiate the code as much (whereas I think you would with SSG+SSR). For SSG you'd probably need to find generic pieces of content and set up a different layer, vs just changing some caching parameters / doing content-based hashing transparently.
I think that’s maybe the direction OP was heading with his remark about the distinction not being helpful.
Don't get me wrong, I definitely think the current movement and focus on a great UX is amazing! But as someone that cares a lot about scalability and operational complexity + cost, it hurts a little to see certain approaches that work great at small scale, but are troublesome at large scale, be recommended so strongly :/
> edge compute platform
I would consider that quite a big difference in infrastructure. It can easily be done already right now, but the cost and performance equation between serverless compute (at the edge or not) vs. static file serving becomes noticeable at scale.
Let's make some pricing examples:
- We'll have 2 million "first page loads" per day (~60 million per month)
- Let's give our bundle a low estimate of 200KB
------------
Vercel's pricing (just to take a popular example)[0]:
- Pro tier: $20/month includes 1m execution units (EU, 50ms of CPU time) and 1TB bandwidth
- Then $2 per 1m EU and $40 per 100 GB bandwidth
- Let's ignore GB-hours for now (memory x runtime)[1]
- Let's assume you're always able to render your SSR page in 50ms or less (quite optimistic if you're doing any form of DB operations in your SSR)
AWS CloudFront pricing[2]:
- Always free tier: 1TB bandwidth, 10m requests
- Then $0.085 per GB bandwidth and $0.0100 per 10k requests
- (at scale you can get +50% discount on bandwidth by committing)
------------
We can then run the numbers:
- Estimate (200 KB) on Vercel:
- (60m requests - 1m free) * $2 per 1m EU = $118/month
- 200 KB * 60 million = 12,000,000,000 KB = 12TB
- (12TB - 1TB free) / 100GB * $40 per 100GB = $4,400/month (you'd probably want an Enterprise plan at this point unless I did the math wrong here. Their FAQ suggests reaching out to their sales team, which you should definitely be doing at this scale!)
- = $118 + $4,400 = $4,518/month
- Estimate (200 KB) on CloudFront:
- (60m requests - 10m free) / 10k * $0.01 per 10k requests = $50/month
- 200 KB * 60 million = 12,000,000,000 KB = 12TB
- (12TB - 1TB free) = 11,000 GB * $0.085 per GB = $935/month (which could be $440/month with reserved capacity)
- = $50 + $935 = $985/month
------------
Totally open for having made a calculation error above, but this is the kind of thing that one needs to concern themselves about at scale. The above example is quite realistic, in fact it's a lot less than our usage at my current work.
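For anyone who wants to check the arithmetic, here it is as a quick script (the unit prices are the list prices quoted above and may well have drifted since):

```typescript
// Back-of-envelope hosting cost comparison, per month.
const requests = 60e6;   // ~2M first page loads/day, ~60M/month
const kbPerLoad = 200;   // low estimate for the bundle size
const totalTB = (requests * kbPerLoad) / 1e9; // 12 TB/month of egress

// Vercel Pro: 1M execution units and 1 TB bandwidth included,
// then $2 per 1M EU and $40 per 100 GB.
const vercelCompute = ((requests - 1e6) / 1e6) * 2;        // $118
const vercelBandwidth = ((totalTB - 1) * 1000 / 100) * 40; // $4,400
const vercelTotal = vercelCompute + vercelBandwidth;       // $4,518

// CloudFront: 10M requests and 1 TB bandwidth free, then
// $0.01 per 10k requests and $0.085 per GB.
const cfRequests = ((requests - 10e6) / 1e4) * 0.01;       // $50
const cfBandwidth = (totalTB - 1) * 1000 * 0.085;          // ~ $935
const cfTotal = cfRequests + cfBandwidth;                  // ~ $985
```

The interesting part is that almost the entire gap is bandwidth pricing, not compute.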
Admittedly here we are seeing most of the cost come from bandwidth. If you substituted Vercel with AWS Lambda you would be able to benefit from the CloudFront pricing on this.
Great to see someone care about this. So many discussions lack this consideration and ultimately companies pay for it. Yes, I've seen this many times; it's why there are e.g. consultants that come in to "optimize" the platform.
Cost isn't the only thing to care about; there's also performance. Functions might not run in every data center that would have a CDN PoP. The SSR vs SSG comparisons always miss this point and claim SSR is faster, but that's only true if you're in a big popular city. As you expand, what about the other customers?
Also, what about security? That's probably even more important. Each request on Vercel has a higher cost than the equivalent on CloudFront. What are the safeguards, e.g. rate limiting or DoS protection? For CloudFront you'd need to account for WAF and other expenses.
You don't even need a lot of customers, just a bad actor, to easily take down your hosting on Vercel. There have been plenty of blogs of startups using serverless setups getting attacked in ways that cost them 1000s - 10000s of dollars.
Side note: bandwidth costs vary. Vercel charges one flat price, so things may vary depending on where your users are, while CloudFront costs can go up depending on region. You may also need CloudFront Functions to do a few things.
Yes most edge products have these flaws. Cloudflare Workers do not though (at least not the ones being focused on). They run in every single data center across the world and are collocated with the CDN that you can access programmatically. I wish you all the best of luck if you try to take down the system. It’s handling an insane amount of load already. There’s also no billing for bandwidth.
You can manage DDoS and rate limiting protections although I don’t know the pricing on those (could be free - I don’t recall).
Not 100%, unless you only have customers in the US. Correct me if I'm wrong, but unless I pay for the Business plan I may get re-routed on major ISPs in many countries. I've experienced this first-hand.
For many workloads Cloudflare can be more expensive.
Workers run in every datacenter, but there are a variety of reasons your request might be rerouted to one that's not the closest one physically, such as:
* The closest one may already be at capacity, requiring rerouting some traffic away.
* ISPs in some parts of the world are weirdly fragmented, not interconnecting with other ISPs in the same region. As a result, if you're on the "wrong ISP", the network distance to the local colo may be longer than to some other colo that is physically further away.
* To serve content from servers in China, you must have an ICP license from the Chinese government. If you don't have that, Cloudflare will send Chinese traffic to the closest non-Chinese colo.
* Probably other reasons I'm not thinking of off the top of my head.
Note that all these apply to Cloudflare in general regardless of whether you use Workers. Enabling Workers has no effect on what datacenter you get routed to. Cloudflare's infrastructure team is always working to improve these situations, e.g. adding more servers, negotiating more connectivity, etc.
A nice thing about building on Workers is you don't have to worry about any of this. E.g. you don't need to think about redirecting your traffic when a colo is over capacity... it happens automatically.
Sorry, I'm not on the network team and I don't know whether that specific forum comment is accurate.
All I can say is that if you use Workers, the Worker always runs in the datacenter that receives your request, which is exactly the same datacenter that would have received your request if you weren't using Workers.
True for edge pricing which is why I don't use Vercel paid generally speaking. I run any larger apps on Hetzner in a VPS with NextJS running in a Docker container, which should perform identically last I asked the Vercel team. For your numbers it would likely be much cheaper, with the only downside being that you're in one geographic location, but I guess you could use multiple VPS in various locations.
I think your pricing is actually way off. Take a look at Cloudflare Workers. No egress charges and our operational cost is much smaller. We have two usage models where you don’t use a lot of compute (charge by request) vs when you need more (charge by GB-h). But our pricing is still waaay lower than Amazon’s edge platform.
Way off as in that the Vercel calculation is off? It was a bit hard to find, but I would assume bandwidth in their pricing refers to egress at least.
That said, super interesting to see that Cloudflare dropped the egress costs![0] That probably makes their equation the most attractive, although that will only be the case as long as your Cloudflare Workers aren't talking to e.g. an AWS database, which would just move the egress costs there instead (internal traffic can sometimes be orders of magnitude higher than external).
If you only use it for SSR and keep whatever SSR uses within Cloudflare's offerings, then it looks to be the most competitive option for that so far. Your API can still go other places, so little downside there.
One concern I would have, having been quite deep in AWS Lambda, is the 128MB memory limit on CloudFlare workers (unless I misunderstood and it can go higher?). For JS frameworks, that very likely has performance implications, whereas for Rust frameworks, they will be able to perform well within the lower memory limits. Lambda is a bit different though, since the CPU scales with memory, so would be curious if there are any benchmarks on CF Workers with these frameworks?
1. Last I checked (admittedly not recently), Lambda throttles CPU proportionally to memory size, e.g. as I recall a 128MB instance is throttled to 1/8 CPU core, 256MB is 1/4, etc. Workers never throttles.
2. The same code running on Workers will use far less memory than when running on Lambda. In fact, the average Worker uses around 3MB RAM. The difference here is that Lambda is counting the memory for the whole runtime process, e.g. a Node.js process, whereas with Workers the runtime is shared with other tenants and only the pure JavaScript heap usage counts against the memory limit.
So, 128MB goes further with Workers.
That said, obviously some apps need more. At present we don't have a way to increase this limit, but we're working on it.
I wonder if it's feasible to use even less RAM per worker. The pioneers of time-sharing would have considered 3MB per user extravagant. Of course, it practically doesn't matter now, but still, I wonder if it would be feasible to reduce the heap size per isolate by at least one order of magnitude.
Way off as in the claim that "edge compute is way more expensive" is only true if you only consider Vercel / AWS Lambda as the universe of edge compute.
If you consider Cloudflare Workers pricing, it becomes a lot more competitive. No egress + much lower per request fees + you can choose between per wallclock ms billing and per request billing (different slopes + intersection points).
I'm not claiming we're a good fit for every use-case out there. My root point though was about SSR and I think a platform like Cloudflare Workers is perfect for making SSR extremely simple.
Disclaimer: Work at Cloudflare. My opinions are my own.
Yup (static site generation)! But more complex than a traditional SPA setup.
Traditionally an SPA has been just one single bundle file that is downloaded. With SSG you pre-generate a file per path which gives the user many of the benefits of SSR on that first page load, before the SPA part takes over again.
But...SSG is how pages traditionally worked. You have some files in a folder and use <a> to link between them, no dedicated router required. Routers are needed in SSR though, depending on your use case.
And traditionally it was limited, so expectations were different. If you needed more you'd use PHP or something else. The problem here, as other commenters have mentioned, is how the line between SSR, SSG and others has been blurred.
We're now given something that has routing, i.e. Next.js, but then allowed an SSG option that behaves somewhat differently.
SSG in Next vocabulary (which I employ because Perseus claims to be a Next alternative) is prerendering a page at build-time, not necessarily the whole site. So Next will still allow client-side routing, with its hybrid approach. What you describe is an exported app, sometimes named SPA (the name is wrong but people get the point), MPA or static app.
Yes and no, it's caching at application level instead of network level with cache headers. HTTP caching is slightly more limited because the cache key is harder to tweak, so it's harder to cache paid content this way for instance.
Many answers are in this thread; I should have made my point clearer: from the Perseus doc, it has a limited vision of static rendering that stems from a slightly incorrect mental model. Statically rendering and not needing a server are different subjects, and static rendering is not limited to generic content. This can be easily proven if you think of statically rendering a page as precomputing the render at build-time. I've written multiple times about this subject, and Next 13 App Router design choices are consistent with this theory. This makes Next much more powerful than many competitors, as it covers all use cases of static rendering.
See, that's the problem: you said SSG can only take you as far as "something generic for everyone".
This is not true, otherwise you could not implement static i18n.
Also I don't want to squash them either but dynamic vs static is for instance much more appropriate, it's a shortcut for request-time vs build-time server rendering.
What is generally meant by SSG is that at runtime you send fully usable static pages and don't need to run any app logic. This radically expands your options for hosting (S3 + CDN, done). You can't have personalization unless you run app servers, you can provide some alternatives at build time (localization, AB tests) but nothing actually personalized. Whereas with SSR you run an app aware server, which means you can store client state and do everything with it sending to the browser final HTML, but it is more costly.
So to me and many others whether a framework can do SSR or SSG is a very important distinction, critical for infrastructure planning. I often see it being downplayed together with a promotion of various proprietary platforms offering edge computing but this stuff is just not necessary for many cases.
Edit: No idea what terminology is used at NextJS, above is what I think is generally understood by SSG (static site) and SSR (server-rendered site).
That's not Next wording; what you are describing seems to be an exported app (reintroduced in Next 13.3 via the export output option, and available in Next 12 via next export)
SSG in Next is just pre-rendering some pages server-side at build-time, it's not so involved
Static i18n is pretty easy to implement though? In Next.js I’ve previously solved it by making the chosen language part of the route. Sure, it generates more files, but that doesn’t really matter at all (still cheap to host, still cheap to serve).
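You don't even need a framework for this. A sketch of the build step (the translation table and file layout are made up):

```typescript
// Static i18n as a plain build step: render one HTML file per
// (page, locale) pair. At request-time the server or CDN only has to
// map a URL to a file; no app logic runs.

const translations: Record<string, Record<string, string>> = {
  en: { about: "About us" },
  fr: { about: "À propos" },
};

// Render a single page for a single locale at build-time.
function renderPage(page: string, locale: string): string {
  return `<h1>${translations[locale][page]}</h1>`;
}

// Build-time loop: emits /en/about.html, /fr/about.html, ...
// In a real build these entries would be written to disk and uploaded
// to a CDN; here they go into a Map for illustration.
function buildSite(pages: string[]): Map<string, string> {
  const out = new Map<string, string>();
  for (const locale of Object.keys(translations)) {
    for (const page of pages) {
      out.set(`/${locale}/${page}.html`, renderPage(page, locale));
    }
  }
  return out;
}
```

More locales just means more files, which as noted above is cheap to host and cheap to serve.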
I could get behind static vs dynamic, but “server-side” has no meaning then to me, since there is no server/backend meant to be involved for the static part then (at least in my mental model of it).
Your static files still need a webserver. Maybe you do not need to write it, or configure it; but there is still one there. You probably just pay S3 or a CDN to do your web serving for you.
That is, practically speaking, an enormous difference though!
Any CDN (CloudFront, CloudFlare, etc) coupled with any blob storage (S3 and others) vs the following:
- Servers to run your node process
- Load balancing to handle distributing traffic between several servers for your SSR’ed app
How do you then run and update your servers? You now need to start thinking about zero-downtime rollouts, and grab something like ECS or the big hammer k8s. You can still handhold your own EC2 instances, but then you’re now home brewing a solution.
The simplification of infrastructure that “static files” bring is not to be underestimated :)
There is always a server; CDNs are servers, so even a totally static website exported as HTML pages can benefit from user-level or segment-level personalization if the host has enough features. A simple URL rewriting layer is the only thing you need. Your mental model is consistent with what I see from Perseus, but I think that Next is already one step ahead, and this step is understanding that there is always a server and a user request around, even when you render statically. This was my initial point, probably poorly phrased.
Sure, but in common parlance that’s not really a useful point. There are also servers involved in serverless, but we’ve largely accepted that here it is meant to indicate (on a gradient) who manages those servers :)
When you set up your infrastructure, it will matter quite a lot whether you’re able to utilize a CDN as the only thing you need to deploy to, or if you need to run your own code at runtime (and hence need a something to run it on).
It’s worth noting: You can absolutely go for all of these solutions! I do have a bit of a penchant for solutioning things that I know will scale up massively, but that is not always a priority early on, so I don’t want anyone to be discouraged by these approaches and the other benefits they can provide :)
> there is always a server and a user request around even when you render statically
I’d love to dive into that part a bit more, genuinely curious! Is the point that there is always dynamic information to act on to improve the user experience? Or what would be the goal or vision once you know that?
I am mostly thinking about personalization (A/B tests, i18n, marketing segments; multi-tenancy belongs to this family of use cases). It's often assumed that you can't achieve personalization with static rendering, and that's just not true: a simple (read: very fast, very cheap, optimized for this use case, like CDNs are supposed to be) URL rewriting server can point to the right statically rendered page at request time. Another approach is modifying the page at the edge; Eleventy does that. All this involves being aware that your static content will be accessed using an HTTP request; taking that into account may lead to a more unified (not squashed, just unified) SSR/SSG architecture similar to Next 13's. You can look for Plasmic's take on A/B tests with Next, or my own work on Segmented Rendering or the "megaparam" pattern.
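A minimal sketch of that rewrite layer, with a made-up segmenting rule (in practice this would be a CDN rule, edge middleware, or an nginx rewrite, and the internal paths are hypothetical):

```typescript
// Segmented rendering: every variant of a page is pre-rendered at
// build-time under an internal path, and a tiny rewrite layer maps
// each incoming request to the right file at request-time.

type Req = { path: string; cookies: Record<string, string> };

// Decide which pre-rendered variant this request should see.
// The rules here are illustrative: a per-user admin variant plus a
// two-bucket A/B test.
function segment(req: Req): string {
  if (req.cookies["role"] === "admin") return "admin";
  return req.cookies["ab-bucket"] === "b" ? "b" : "a";
}

// Rewrite /dashboard to the internal static file for its segment.
// The user never sees the internal path; only the rewrite layer does.
function rewrite(req: Req): string {
  return `${req.path}/__${segment(req)}.html`;
}
```

The rewrite itself is cheap and stateless, so the "server" you need for personalized static content is barely a server at all.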
A URL rewrite doesn't involve making the language part of the route. It's true that you use a route parameter, but the user doesn't have to see it. Look for Segmented Rendering or the Megaparam pattern for my work on this idea.
Looks great. Is it possible to use CSS frameworks with it (let's say Bootstrap etc.)? I'm trying to understand where this fits in the overall stack. Is it possible to use React libraries with it?
I think the key feature of these frontend frameworks is they enable Rust developers to do full-stack web development in the same way Nodejs enabled front end developers to do full-stack.
Congrats on the release. I have to admit looking at the code samples, this looks like some pretty tough to swallow ergonomics for real-world usage. Is this inherent to Rust web libraries?
This is great! I've had a bit of a play around and the code examples are nice and easy to follow. Definitely liking it more than the JSX route that a lot of modern frameworks are taking these days - this way is much cleaner and easier to read.
I’m struggling to see how this is cleaner and easier to read than JSX.
JSX resembles web technologies, it “smells and looks like” HTML and JavaScript. It’s easy to pick up and read. If you know HTML and JavaScript, onboarding to React isn’t bad.
However, this framework looks like an abomination for real world usage. It’s in WASM, written in Rust (!?). It doesn’t resemble any sort of web technology other than the tag names. How do you onboard a frontend team to this?
Perhaps to rust-aceans, this makes sense. Is there something I’m not seeing here?
I think the whole point is to not resemble JavaScript. I kinda like it; in my experience the current state of JS is layers and layers of tools stitched together over a language that was designed for much simpler times. Compared to other stacks I've worked with it's often unstable, slow and poorly documented. I tend to avoid doing frontend if I can.
A fresh and different look at this topic is something I would very much like to see. Of course, people working with front end today may have a different perspective.
It still targets the browser though, so you still get all the complexities anyway, just without well supported syntax. Idk man I like Rust but this doesn't seem like the right way here.
I agree with you. I don't think anyone who has spent enough time with React/Vue.js would want to switch to something like this. I cannot imagine what the code would look like when you have a lot more components and nesting.
I would write frontend code like this if I really want to punish myself.
If I were looking for an alternative to NextJS, something built in Rust would definitely not be at the top of my mind. I would assume this kind of movement from JS -> Rust would be quite rare.
More to the point in terms of wider adoption for any of these JS in X languages: I've never been involved in a company that would want HTML/JS level code being produced at Rust developer salary overhead costs.
They're great weekend project and prototyping things but yeah.
My friend (ex-coworker) was involved in one about 1.5 years ago for a few months.
It was basically a one man startup and the argument to use Rust was because the guy wanted to use WASM on the frontend, because it was a mind mapping software (as far as I remember, and so information heavy, blablabla).
WASM might make sense or not, dunno. Figma somehow manages to be fast, I don't think they use WASM. (They are fast because of canvas, right?)
I think it'd be less about being info-heavy and more about needing to reimplement everything on top of a canvas anyway. Not saying it's impossible to implement a mindmap with html, but it's possible to want mindmap features that would be much easier to implement with canvas.
Rust becomes more viable the simpler the API between it and your JS is, and "takes events and renders pixels" is a very simple API.
> Figma somehow manages to be fast, I don't think they use WASM.
Figma not only uses wasm but pioneered it. They're consulted by standards bodies, they implemented wasm features in browsers, ...
The versatility of Rust is quite impressive. It's looking like, given a few years for these frameworks to mature, Rust could end up being a great fit for just about every task.
Rust is both a low level but also a high level language. There aren't many mainstream languages that can go higher level than Rust with its traits, sum types, generics and macros. Haskell, Scala, maybe C++ to some degree, what else?
I don't think Kotlin is higher-level than Rust. It misses most metaprogramming features, and its generic programming capabilities are weaker (nothing remotely as capable as traits/implicits/type-classes).
I said it misses most metaprogramming features, not all of them. Reflection can't do the same things macros can. Macros can transform arbitrary AST to arbitrary AST; reflection can't.
Similar thing with interfaces: while they serve the same purpose as traits, they are nowhere near as flexible/expressive. E.g. you cannot do blanket interface implementations, i.e. implement an interface for only the classes that implement another interface. You also cannot implement the same interface more than once, differing by generic parameters only. Nor can you define an interface with an associated type member (Scala is another language that can do this).
The discussion was about being higher-level vs lower-level. Rust is a more expressive language than Kotlin. Where languages are used is a different matter, because expressiveness is not the only thing playing a role there. Politics and historical reasons are another dimension.
You seem to be missing my point. The point was not about which languages are mainstream (which is highly subjective, hard to measure and also may be an effect of certain politics), but about their level of abstraction. Rust is often placed in the "low level, close to the hardware" bin because it can go low-level very far if you wish to. But it is often forgotten that it can be also very high in terms of expressiveness / abstraction power and offers productivity features not found in many mainstream languages.
This is an old myth that's no longer fully true. Rust is a language that straddles both high level and low level. You can use Rust as if it were a high level language without knowing anything about low level stuff. The binary you get at the end ends up being low level, in the sense that it's as performant as if it was written in a low level language.
There is definitely a mental overhead with rust but this has nothing to do with low level or high level concepts. The borrow checker is actually a high level feature that exists for safety, not performance.
> You can use rust as if it were a high level language without knowing anything about low level stuff.
How can you write that with a (I suppose) straight face? Rust makes you think about the lifetime of every single variable, as anyone who's written a couple of lines of Rust knows... it's not just the lifetime annotations you will need when you get past the "copy everything" phase and start using or writing data structures, but the borrow checker making even the simplest stuff something you actually have to think through carefully (do you need to pass a reference, make a copy, use a mutable variation of some function, open a new block to limit the scope of the variable, assign it to a local variable to avoid it going out of scope too early?). This is plainly "low level" stuff you need to worry about all the time, which you just don't in any high level language.
These are high-level safety features that replace another sibling high-level construct: the garbage collector.
It's a different style of programming, but still high level. Make no mistake, the abstractions are high level but the cost is zero, hence the term "zero-cost abstraction" and the association with "low level."
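To make the parent comments concrete, here is a minimal sketch (hypothetical function names, nothing beyond the standard library) of the kind of choices the borrow checker surfaces: pass a reference, mutate through a scoped borrow, or rebind, none of which a GC'd language asks you to think about.

```rust
// Returns the longer of two string slices; the lifetime annotation 'a
// tells the compiler the result borrows from one of the inputs.
fn longest<'a>(a: &'a str, b: &'a str) -> &'a str {
    if a.len() >= b.len() { a } else { b }
}

fn main() {
    let owned = String::from("hello, borrow checker");

    // Pass a reference: no copy, but `owned` must outlive the call.
    let r = longest(&owned, "short");
    println!("{r}");

    // Limit a mutable borrow to its own block so `v` is
    // freely usable again afterwards.
    let mut v = vec![1, 2, 3];
    {
        let first = &mut v[0];
        *first += 10;
    } // mutable borrow ends here
    assert_eq!(v, vec![11, 2, 3]);
}
```

Whether this counts as "low level" is exactly the disagreement above: the decisions are about aliasing and lifetimes, but they are enforced by a compile-time type system rather than by manual memory management.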
Here’s an analysis of cold starts on AWS Lambda (updated daily) that backs up your claim that Rust has the fastest cold starts - https://maxday.github.io/lambda-perf/. Although to be fair, Go isn’t that far behind on warm starts.
One caveat - these are hello world programs without I/O. The maintainer plans to add I/O to the benchmarked code.
Isn't "hello world programs without I/O" basically measuring the startup time of the language's runtime? Not that that's unimportant, but given that Rust and Go are the only two compiled languages there, and Go has a slightly heavier runtime, these times are entirely what I'd expect.
When I hear "cold start" I usually think of the amount of time between process exec to when my first line of code starts executing. In this case I would just assume that Rust will be faster since it's pretty much how long it takes to load the binary image into memory and jump to the _start/main entrypoint. Go has a whole runtime to boot up before ever getting to that point. Many, many layers of setup code. So it's not surprising at all that it has a significantly slower "cold boot".
> Go has a whole runtime to boot up before ever getting to that point. Many, many layers of setup code. So it's not surprising at all that it has a significantly slower "cold boot"
That's one misconception you have about Go: when it comes to cold-start times, Go is one of the top garbage-collected languages, competing with the likes of C. The Go runtime is compiled into and runs together with your code, so it's basically like Vlang code with added C code.
Yeah… I have two containers running on google cloud run right now, the nextjs container cold starts an average of 1.5 seconds. The actix web rust container cold starts in 150 ms. The difference is staggering, I can’t even tell when the rust container cold starts
I think that Typescript is a gateway drug for a language like Rust. Developers who like the combination of strictness and ergonomics of strict Typescript could be attracted to taking that “to the next level” with Rust.
This is contrasted with Go, which I think turns away these developers (at least it did for me…) due to its null pointers and non-existent nullability checking.
> This is contrasted with Go, which I think turns away these developers (at least it did for me…) due to its null pointers and non-existent nullability checking.
As opposed to Typescript's null, undefined, or optionally defined types? I don't use Go or Rust, but I do use Typescript, so maybe I'm a bit confused by this statement. Go allowing nullability doesn't seem like a dramatic shift from Typescript's three different ways to represent a potentially ill-defined object, and it wouldn't turn me away from the language.
null/undefined is type checked in Typescript so the compiler will make sure you check whether your value is defined or not and handle it appropriately.
Go having nullability isn’t the problematic part - it’s how nil is not type checked, so a value might secretly be nil and you won’t know until your program crashes at runtime.
Rust avoids this by just not having null, instead using things like Result and Option types, which is pretty neat with pattern matching.
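A minimal sketch of that point (the `find_user` function is hypothetical): absence is encoded in the type as `Option`, so the compiler forces every caller to handle the "missing" case instead of letting a nil slip through to a runtime crash.

```rust
// There is no null: a lookup that can fail returns Option<T>,
// and the missing case is part of the function's signature.
fn find_user(id: u32) -> Option<&'static str> {
    match id {
        1 => Some("alice"),
        2 => Some("bob"),
        _ => None, // no secret nil; absence is explicit
    }
}

fn main() {
    // Pattern matching makes both branches explicit; forgetting
    // the None arm is a compile error, not a runtime panic.
    match find_user(1) {
        Some(name) => println!("found {name}"),
        None => println!("no such user"),
    }

    // Or fall back to a default; `unwrap_or` never panics.
    let name = find_user(99).unwrap_or("anonymous");
    assert_eq!(name, "anonymous");
}
```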
Do you think moving to Rust would help you make enough money to retire in the next 3 years? If yes, do it. It will definitely help you save the rest of your life and spend more time with your loved ones.
If you think Rust doesn't really matter for that, and that you could even do it with PHP because it's the problem and the execution that matter, then focus on building a great product instead of jumping on the hype wagon and missing the bigger picture of what really matters.
Sometimes you need to understand broader surrounding to make sure you're building something great. I think every mature developer should understand tooling in his/her area and possibilities it brings.
Do you want, in the next 20 years, to have a cardiologist who's focused on early retirement, not on the result?
I personally really like using Rust, but I don't think it's necessarily better than go. There are some areas where Rust will definitely be better or more ergonomic, such as bare-metal or low level systems programming tasks, but many other areas where Go's design works better and Rust will only hold you back (looking at you, borrow checker).
That said, I would still recommend learning Rust, because it will teach you about lifetimes and aliasing, even if you don't want to learn about them. When building for the web, I think Go's easier dynamic dispatch and GC will save you some headaches though.
It really depends on what you need to do. I work professionally with go, but do most of my own work in Rust (web, system, tooling, etc.)
Rust is definitely strongest at the system / tooling level; it is what the language is built for and where it gives the least friction. If you need to use a lot of macros in your work, you basically throw away most LSP support (Intellisense), which can be quite bothersome to work with.
Rust is definitely capable, but writing actual web services in Rust takes a lot of work and isn't as frictionless as building tooling / low-level code.
I don't know you so the following could be wrong or condescending, please take it as a general consideration towards people in a similar state of computer science fatigue and it really talks more about me than you: before learning a new language only because of a latent feeling of guilt please consider there is a world of knowledge not directly related to CS and STEM that could literally broaden your senses or at least the way you approach reality.
Think of learning a musical instrument: you will literally perceive new nuances in an aspect of reality, akin to see new colors.
You have only one shot at this, are you sure, in the end, another programming language will make any difference at all?
I use Go for work and Rust for personal projects. I use Rust because it makes programming fun. Programs just work after they compile, and often are fast with minimal effort spent optimising. If that sounds compelling to you, then it might be worth learning and getting over that initial learning curve.
But if it doesn’t, that’s fine too. Go is a great language that works well in many domains. You won’t have trouble finding work in Go for years. It’s unlikely that you’ll have to solve a problem at work that can’t be solved in Go. You’ll be fine. Don’t guilt yourself about it.
I’m not sure you are wasting your time by not learning Rust. I think the value of programming languages depends the most on where you live (and where you want to work) and very little on global trends. In my area of Denmark there are virtually no Rust jobs and there haven’t been for the past few years; there are even fewer Go jobs. Plenty of Java, PHP and C# jobs, and basically everyone uses Typescript. Rust has seen some uptake in a lot of the C++ community, but you’re going to need to be a proficient C++ programmer to get at those jobs, and you likely will for at least a decade.
That being said: we recently built a replacement for Sharepoint because we sort of out-grew it and couldn’t find a decent headless CMS that fit our needs and also complied with the EU legislation for the Energy Sector. Since it had to be actually worth moving away from Sharepoint, we did some thorough proof-of-concept testing before other needs basically dictated it needed to be ODATA, and thus we ended up building it on C#. Anyway, two of the prototypes were Rust and Go, and I would never use Go professionally again if I could help it. Rust on the other hand seems like it’s genuinely a good language that sees actual real-world adoption for major projects. I loved Go, by the way, but we kept running into packages and libraries which can basically best be described like this: “Y needed to do X and built a library for it. Two years later Y got a new job and the library was semi-abandoned because nobody else really picked it up”. I know a lot of people are building a lot of things with Go, but we kept having to either reinvent the wheel with it, or to become guy Y ourselves. Rust didn’t seem to have these issues, likely because it sees more adoption at more places which have similar requirements to ours.
I’m not convinced I’ll ever see Rust blow up in my part of the world though. Out of all the recent hyped languages going back to Ruby on Rails, however, I think Rust may have the biggest shot at it. But I fully expect Typescript, Java, C# and PHP with some Python to remain dominant for the next 10-30 years outside of the areas where they do C++.
> I’m not sure you are wasting your time by not learning Rust. I think the value of programming languages depends the most on where you live (and where you want to work) and very little on global trends. In my area of Denmark there are virtually no Rust jobs and there haven’t been for the past few years; there are even fewer Go jobs. Plenty of Java, PHP and C# jobs, and basically everyone uses Typescript. Rust has seen some uptake in a lot of the C++ community, but you’re going to need to be a proficient C++ programmer to get at those jobs, and you likely will for at least a decade.
It is the same for me too, almost zero Rust jobs where I live. However, I still benefited greatly from learning Rust, making me a better programmer overall. Now I think more about memory allocations, ownership, how I am passing the data around, and mutability.
The thing I want most from Rust in other languages is immutability unless you use the mut keyword. I know a lot of OOP- or dynamic-loving people might scoff at that, but after a couple of decades in SWE I'd really like for everything to be immutable by default. It's just so much easier to maintain the code-bases and systems over those 5-10 year periods, when being smart is disabled by default.
Sure, it shouldn't be that way, but I prefer it when we design our systems for the code we write on a Thursday afternoon. You know, those days where you've been up all week because of the baby crying and you've spent too much of the day in meetings that shouldn't even have been an e-mail. Because that's the sort of code someone is going to hate 3 years down the line, and chances are that someone might even be yourself. Unfortunately things seem to be going in the opposite direction with some languages. I didn't work with C# for a while, and I was surprised to see "var" everywhere when I came back to it.
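For readers who haven't touched Rust, a minimal sketch of what "immutable unless you say mut" looks like in practice (the `normalize` helper is a hypothetical example, standard library only):

```rust
// Rebinding via shadowing: transform a value step by step
// without ever marking anything `mut`.
fn normalize(name: &str) -> String {
    let name = name.trim();        // shadows the parameter, still immutable
    let name = name.to_lowercase(); // shadows again with a new type (String)
    name
}

fn main() {
    let total = 10;
    // total += 1;  // compile error: cannot assign twice to
                    // immutable variable `total`

    let mut counter = 0; // mutation must be opted into explicitly
    counter += 1;
    assert_eq!(counter, 1);

    assert_eq!(normalize("  Thursday Afternoon  "), "thursday afternoon");
}
```

The Thursday-afternoon code the comment describes is exactly what this default protects: accidental mutation has to be written deliberately, so it can't sneak in.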
I'm in the same situation. I really want to learn Rust, but prefer Go for high-concurrency code, and use C++ out of habit for other things. Rust (the syntax) grew too large, so I just feel lost anytime I read it. Cargo seems nice, though.
Started picking up Verilog instead... It doesn't risk overlapping in the Go/Rust/C++/C category.
There are so many things I want to learn and do, and only finite attention and lifespan. I'll have to rely on others doing great things with Rust, and focus on extracting more value out of the languages I already know.
lol I was just thinking this, literally just a few minutes ago. I saw a rough benchmark test between Rust and Node, and the memory consumption over time stayed completely flat with Rust while Node just skyrocketed. Pretty interesting.
Yeah, the minimal memory usage is actually more appealing to me than the speed. The latter is much more important in reality though, as more RAM can always trivially be thrown at a problem.
Rust won't give you a lot that Go doesn't give you.
Certainly not a better salary. Beyond that... Rust forces you to care about allocation, deallocation, borrowing, all the low level stuff, while not necessarily being faster.
To me it doesn't feel like Rust beats Typescript or Go as a higher level language at what Typescript is good at, and I don't see why that would change. Some performance critical things to port to WASM maybe...
I have used 12+ languages professionally, and have personally always picked some language that was not "mainstream". I'm from Colombia, so not even mainstream worldwide, but probably unheard of around here, much less for shipping real code with paying customers.
Plus, I already test a lot of stuff (not just languages, but frameworks, etc.). I've learned from many languages I never used beyond their most basic tutorials (like kdb+), just to see what the deal was.
It's like a hobby to me! But mainly, as a solo dev, I want to know about anything that gives me an extra advantage AND is suitable for the challenges of a solo dev.
So learning doesn't mean "fully committing and betting my work on it". It's pretty certain you will learn very good stuff with Rust that will be valuable to know in general. The same could be true for Zig.
What!? You missed out on the last full decade of FoMO? :D
That said, there is value in learning a strongly typed language. If you like it you might want to introduce concepts in your Go work, if you really like it you will probably want to switch to one, and if not, that's also completely okay, and you'll know what you don't like about it.
For example I like Rust, Scala, TS, but Haskell always makes me cry a bit. Spent many years using Python (py2 to py3 migration was fun), but a few years ago I realized that it's a bad trade off for almost anything I work on, and makes me rage a bit every time I run into super dumb errors, so I avoid that too :)
For me, I see value in learning one systems-level language well to round out my generally higher level language focus.
I never used Go, so choosing Rust for this was easy. Go does not satisfy some tasks in the way that Rust does, but Go likely does just fine for the reasons you originally learned it.
However, I do see Rust as the best way to broaden my horizons and career opportunities. Suddenly, I can more easily play in realms where C and C++ would otherwise be used exclusively. Many times, Go would not qualify for these types of projects because of its garbage collector.
So from language tooling to browsers, learning Rust may help me debug issues down the stack or help me creatively solve performance issues by plugging it in, and that feels like a win to me and wholly worth the price (or is it pain?) of admission. First class WASM support doesn't hurt either.
I had the same feeling as a C# developer. I took some weekends to write an algorithm (Monte Carlo financial simulation) in Rust, Go, JavaScript, and C#. You should do something like that.
What I learned is that Rust has by far the best runtime and memory performance, but .NET Native AOT was second best (even better than Go!) and JavaScript was 30x slower than Rust.
But Rust took me probably over 40 hours (hard to say because I worked on it little by little); the learning curve was steep. Its standard library is sparse, and selecting the "best" community crate leads to decision fatigue. I banged out the same algorithm in my most proficient language (C#) in an afternoon.
In business, speed to deliver almost always trumps speed of code runtime. Some algorithm hot spots deserve the Rust treatment, but in general, just ship the code to your customers.
That's a long way to say, put some energy into learning Rust sure, but don't beat yourself up about it. It's not a general purpose language like Go or C#. It's a dream for performance critical algorithms.
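For a sense of what such a benchmark kernel looks like, here is a hedged sketch (not the commenter's actual code; parameters, function names, and the dependency-free xorshift PRNG are illustrative) of a Monte Carlo estimate of a discounted terminal price under geometric Brownian motion:

```rust
// xorshift64: cheap PRNG returning a uniform f64 in [0, 1).
// Avoids external crates so the sketch stays self-contained.
fn xorshift(state: &mut u64) -> f64 {
    let mut x = *state;
    x ^= x << 13;
    x ^= x >> 7;
    x ^= x << 17;
    *state = x;
    (x >> 11) as f64 / (1u64 << 53) as f64
}

// Monte Carlo estimate of the discounted expected terminal price
// of a GBM process: S_T = S0 * exp((r - σ²/2)t + σ√t · Z).
fn mc_price(spot: f64, rate: f64, vol: f64, years: f64, paths: u32) -> f64 {
    let mut state = 0x9E37_79B9_7F4A_7C15u64;
    let drift = (rate - 0.5 * vol * vol) * years;
    let mut sum = 0.0;
    for _ in 0..paths {
        // Box-Muller: two uniforms -> one standard normal draw.
        let u1 = xorshift(&mut state).max(1e-12); // guard against ln(0)
        let u2 = xorshift(&mut state);
        let z = (-2.0 * u1.ln()).sqrt() * (std::f64::consts::TAU * u2).cos();
        sum += spot * (drift + vol * years.sqrt() * z).exp();
    }
    (sum / paths as f64) * (-rate * years).exp() // discount back to today
}

fn main() {
    // Under the risk-neutral measure the discounted expectation
    // equals the spot price, so the estimate should hover near 100.
    let est = mc_price(100.0, 0.05, 0.2, 1.0, 200_000);
    assert!((est - 100.0).abs() < 1.0);
    println!("estimate: {est:.2}");
}
```

This is the kind of tight, allocation-free numeric loop where Rust's "zero-cost abstraction" claim actually pays off, which is consistent with the runtime results the commenter reports.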
If you feel like that maybe you ought to learn it on your spare time, and then make up your mind whether you should or should not keep investing on it.
I went down this Rust front end route with my trivia web app [0], but ended up abandoning it.
There were just too many points of friction.
* The tooling is not nearly as good as JavaScript for the front end.
* Every time you want to use a front end library, you have to figure it out yourself as there won’t be any documentation for how to use it with your framework.
* Rust compile times really break your flow when working on front end code. At least for me, for front end coding, there is a lot of interactive development where you look at the web app on desktop/mobile and adjust element size/spacing etc to make sure things look nice. Doing this in real-time with JavaScript/html/css is nice. Having a Rust compile between each step is a pain.
* For front-end code, Rust’s borrow checker really gets in the way. The JavaScript garbage collector works really well for front-end code. Rust’s borrow checker is not bringing any performance advantages for this use case and brings a lot of complexity. On the backend, you can justify the complexity of Rust with appeals to efficiency/scalability/server cost. With front-end code, which runs on the client, you lose that justification.
I ended up going with React/TailwindCSS and have been happy so far.
>Doing this in real-time with JavaScript/html/css is nice. Having a Rust compile between each step is a pain.
I'm more of a JS-lite builder. Typically I will only decorate pages with the necessary JS for interactivity. Happier using traditional SSR. Pages load with no spinner or reflow. Sometimes I inline a JSON blob for state.
For the layout tweaks you describe, I typically edit the displayed page in Chrome's developer tools. Once I am satisfied I apply those changes to the template and JS. This feels more ergonomic than refreshing the page.
I imagine the same technique would apply to the unified Rust front-end/back-end experience. Just edit the rendered page in developer tools.
I've never been an early adopter. Let's see how this shapes up for version 1.0, or whether another framework overtakes it.
Hot reloading is still a widely loved feature for server-side rendering. It is part of what made PHP widely popular, and many other frameworks have automatic recompilation and reloading on watched changes. The smoothness of automatic recompilation falls apart the longer it takes to compile, and Rust is not known for fast compiles.
How is this more ergonomic than page auto-refreshes? You lose your favorite editor tooling by moving to the browser, and then you also have to copy & paste everything back.
(Not the creator of Perseus, just a friend and fan of the project.)
I tend to agree. I’ve also partaken in some Rusty frontend experiments that were discontinued. Rust is years behind on the JS ecosystem’s development velocity for frontend. That said, frameworks like NextJS are becoming incredibly complex, and Rust excels at taming complexity.
We’re already seeing Rust making major inroads at the lower levels of the JS stack, e.g. Deno, swc, turbo, parcel, react-relay.., even 3.5% of the next.js repo!
Not a lot of Rust folks agree with me on this because they delight in using Rust-lang as a universally applicable tool for app-making, but I think frameworks such as Perseus will reach their true potential once they incorporate JS/TS as a last-mile scripting language. Basically like a C++ engine with C#/Lua scripting for game logic.
> We’re already seeing Rust making major inroads at the lower levels of the JS stack, e.g. Deno, swc, turbo, parcel, react-relay.., even 3.5% of the next.js repo!
There's a big difference between using it to write tooling, runtimes, etc. and using it to write application code (what GP was talking about). In the above examples, end users don't have to use a Rust workflow at all
I wouldn't think of Rust as a language for Front-end development or similar either, I rather think of it as a systems programming language. Yes, it may be able to imitate tools like React, but not replace them. Not even in the slightest.
If it’s a NextJS alternative then it supports client-side components too? Is client-side code written in Rust? Is it then compiled to JS or to WASM? Couldn’t find those answers from quick look at the website and GitHub page.
Great project! Tangential, but I've been thinking about this a lot lately and have an idea of what I'd like to see attempted next in this area.
Like many have said, the JS framework ecosystem (ok, while kind of a meme much of the time) is a total juggernaut.
I wouldn't mind seeing some kind of.. unholy combination of SSR for JS frameworks with a Rust (or even other language) backend if it was feasible.
JS is still a lot more mature and ergonomic for front-end. Rust is kind of a chore for this kind of work currently. It works, but wasn't designed for it and it feels a bit clumsy. On the flip side, Rust is a safe and super fast server language (Go a great choice as well, or others too).
I suppose I'd like to see some kind of framework that makes it easier to pair the two, instead of insisting that a single language isomorphic approach is best.
This might not be practical - I feel like routing might end up being a nightmare. I'm more just saying it seems harder to pair peanut butter with jelly than it should be and someone smarter than me could probably figure it out :). I'm naively guessing that as long as you had a JS interpreter on the server side, it could be made to work (no problem in Rust for either Node or Deno since embedding those is possible).
We've got things like Next, SvelteKit, and Fresh. We've also got things like LiveView; there may be a further middle ground between the two.
What problem are you solving with this “unholy combination”? Typescript gives you the relative safety. If you need faster compilation than tsc, you can use swc.
I've been working on LiveSvelte[0] which might answer this in the LiveView example. It integrates LiveView with Svelte and has SSR support by calling Node from within Elixir, I wouldn't call it unholy, it's quite nice :)
Routing is fully handled by Phoenix, and you can get quite fast page transitions with Live Navigation Events. It's just that whenever you need complex frontend state you can offload it to Svelte, while still maintaining that backend interoperability, in this case with E2E reactivity.
What's also really nice is that LiveView and Svelte are both very declarative in the way they handle the 'view' layer, and so they map really well onto each other.
When you are the only person leveraging stuff like this, it’s all good. The moment you try to bring in other people who don’t have much experience with Rust, you are done for. No matter what you think, people who are developing websites and people who prefer writing everything in Rust just don’t have much overlap.
I agree, I don't see much overlap either, and when you run into problems, these Rust frontend frameworks will certainly have lower userbases to ask for help from than for, say, a React app.
I'm building a web app with React (Next.js) on the frontend and Rust on the backend, no way I'd use a full Rust or full TypeScript stack.
I'm looking at the examples but it just looks like a lot more boilerplate than what I have when building a normal page in Nextjs.
But I also think their comparison table isn't very fair. You can easily have perfect lighthouse scores in Nextjs, and Nextjs supports both SSG and SSR in the same file... not sure why it says otherwise. https://nextjs.org/docs/basic-features/data-fetching/increme...
Yeah, if they cannot get even the "Hello World" example to be simple and straightforward, I don't know why anyone would switch from the status quo. It's a ton of added complexity.
I truly do not understand this Rust-for-all-problems mentality.
I was under the impression that Rust was essentially the next generation of C, meant for things like compilers, embedded systems, and other places where memory truly matters.
Why are we shoehorning it into things like REST APIs and frontend frameworks? One look at that syntax and I'd think anyone could see it's not the right tool for the job, even if there's room for disruption in those spaces.
+1. I’m all for using the right tool for the job. It seems like all these technologies cater to a specific niche of engineers, and that’s the exact reason why they don’t have longevity.
Can someone please explain how Rust and frontend converge so often?
In my mind Rust is similar to C++, including the very ugly syntax and fixation on memory management, while frontend languages are, well, exactly not that. Why?
I couldn't find it in the docs, but does this also support working in an htmx-like mode, where the server renders pages and does partial updates of parts? Or what some frameworks now call server components?
Or is everything more like an initial render on the server and pure wasm from that point on?
v0.4.0 went stable today after a year of development: https://github.com/framesurge/perseus/discussions/270