I am mostly thinking about personalization (A/B tests, i18n, marketing segments; multi-tenancy belongs to the same family of use cases). It's often assumed that you can't achieve personalization with static rendering, and that's just not true: a simple (read: very fast, very cheap, optimized for this use case, as CDNs are supposed to be) URL rewriting server can point to the right statically rendered page at request time. Another approach is modifying the page at the edge; Eleventy does that. All of this involves being aware that your static content will be accessed via an HTTP request, and taking that into account may lead to a more unified (not squashed, just unified) SSR/SSG architecture similar to that of Next 13. You can look at Plasmic's take on A/B tests with Next, or my own work on Segmented Rendering and the "megaparam" pattern.
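To make the "megaparam" idea concrete, here is a minimal sketch of the rewriting step, assuming every personalization axis is folded into the pre-rendered page's path at build time. All names (`Segment`, `rewriteToVariant`, the path layout) are invented for illustration, not from any framework:

```typescript
// Hypothetical "megaparam": fold every personalization axis into one
// key that selects a page variant pre-rendered at build time.
type Segment = { locale: "en" | "fr"; abBucket: "a" | "b"; plan: "free" | "pro" };

// Map an incoming request's segment to the static file that was
// generated for it. A CDN or edge worker applies this at request time;
// the pages themselves never run server code.
function rewriteToVariant(pathname: string, seg: Segment): string {
  return `/${seg.locale}/${seg.abBucket}/${seg.plan}${pathname}/index.html`;
}

console.log(rewriteToVariant("/pricing", { locale: "en", abBucket: "b", plan: "free" }));
// → /en/b/free/pricing/index.html
```

The segment itself would typically come from a cookie or header, so the rewrite stays a pure lookup and the CDN can keep serving cached files.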
There is always a server: CDNs are servers, so even a totally static website exported as HTML pages can benefit from user-level or segment-level personalization if the host has enough features. A simple URL rewriting layer is the only thing you need. Your mental model is consistent with what I see from Perseus, but I think that Next is already one step ahead, and that step is understanding that there is always a server and a user request around, even when you render statically. This was my initial point, probably poorly phrased.
Why is it a mistake to separate them? If I can generate my site as a bunch of static content, upload it to a server, and have it served by nginx or similar, then I don't need to think about the Next.js App Router, React, or indeed any custom server-side code. That's what static site generation is for, and it seems rather distinct from what you are talking about.
Yes, it is limiting, and it won't be appropriate for many use cases. But if you can do it, it makes things simpler, cheaper, and more efficient.
That's not to say you can't combine SSG tooling with dynamic content too if you need it. But static-only is a perfectly valid choice if you can get away with it.
I call this a hybrid static website. It allows you to statically generate pages even for people who are logged in: the generation step is driven by interaction with the static site. I use a special api.subdomain.tld proxy_pass to a server to trigger the site generation.
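As a rough sketch of that split (hostnames, ports, and paths here are all made up, not the actual setup): the main domain serves pre-generated files straight from disk, and only the API subdomain ever reaches an application server.

```nginx
# Static pages served straight from disk; no app server in the path.
server {
    server_name example.tld;
    root /var/www/static;        # pre-generated HTML
}

# Only the API subdomain proxies to the app that regenerates pages.
server {
    server_name api.example.tld;
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```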
I wrote a forum that regenerated the static files whenever a post was added, including the user's own account pages; those were statically generated too.
In theory a change to a user's account should only change that user's account pages.
I wrote a Medium article about it. I was trying to get as many requests per second as possible while logged in. It's an extension of the cache invalidation problem.
I got to 3,288 requests per second on a virtual machine. A static site should get at least 12,000 requests per second. For reference, Visa and Mastercard process about 40,000 requests in total per second for financial transactions. I use an nginx authenticator and a Rust authentication program that talks to Redis to check that the session token in the URL matches the user's IP address. You could probably improve the performance further by embedding a Lua script in nginx and using that to authenticate the IP address, rather than making an HTTP request for every incoming request the way the nginx authenticator does.
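For illustration, the in-process variant could look something like this with OpenResty (nginx + Lua). This is only a sketch of the idea; the Redis key scheme (`session:<token>` mapping to an IP) and the query parameter name are assumptions, not the setup described above:

```nginx
location / {
    access_by_lua_block {
        -- Check the session token against Redis in-process,
        -- instead of an auth_request subrequest per hit.
        local redis = require "resty.redis"
        local red = redis:new()
        red:set_timeout(100)  -- ms
        local ok = red:connect("127.0.0.1", 6379)
        if not ok then
            return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
        end
        local token = ngx.var.arg_token or ""
        local ip = red:get("session:" .. token)
        if ip ~= ngx.var.remote_addr then
            return ngx.exit(ngx.HTTP_FORBIDDEN)
        end
        red:set_keepalive(10000, 100)  -- return connection to the pool
    }
    root /var/www/static;
}
```

The win is avoiding one internal HTTP round trip per request; the connection pool (`set_keepalive`) keeps the Redis cost to roughly one command per hit.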
You can see the static site as a cache that needs to be updated based on new inputs.
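One way to model that cache view is explicit dependency tracking: each page declares which inputs it was rendered from, and a changed input yields exactly the pages to rebuild. A minimal sketch (all names and the flat dependency map are illustrative assumptions, not a real implementation):

```typescript
// Treat the static site as a cache keyed by its input data.
type PageId = string;
type InputId = string;

// input -> pages rendered from that input
const deps = new Map<InputId, Set<PageId>>();

function declareDep(page: PageId, input: InputId): void {
  if (!deps.has(input)) deps.set(input, new Set());
  deps.get(input)!.add(page);
}

// Invalidation: a changed input returns exactly the pages to re-render.
function pagesToRebuild(changed: InputId): PageId[] {
  return [...(deps.get(changed) ?? [])];
}

// Recorded while rendering, e.g. by the template engine:
declareDep("/users/alice.html", "user:alice");
declareDep("/thread/42.html", "post:7");
declareDep("/index.html", "post:7");

// A new post only touches its thread and the index,
// and a profile edit only touches that user's page.
console.log(pagesToRebuild("post:7"));
console.log(pagesToRebuild("user:alice"));
```

This is the same shape as the account-pages example above: an edit to one user's account invalidates only that user's pages.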
No, I was saying the opposite: a static site is necessarily not dynamic.
I would not want to pre-generate those pages. Your static files on an S3 bucket are still served by a webserver.
Anyway, can you honestly say, hand on heart, that server-side-rendered JSX or React is a robust, simple solution comparable to SSI or, I don't know, a bit of copy-pasting? Of course not. Server-side rendering is a complex toolchain that will rapidly and constantly deprecate. My blog has been sitting on the same "stack" for 13 years; it takes me 5 seconds to set up a new post.
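For readers who haven't seen SSI: it really is this small. A page with a shared header and footer looks like the following (partial filenames invented; in nginx you enable it with `ssi on;` in the relevant `location` block):

```html
<!--#include virtual="/partials/header.html" -->
<main>
  Post content lives here, written by hand or by a tiny generator.
</main>
<!--#include virtual="/partials/footer.html" -->
```

The server splices the partials in as it serves the file; there is no build step, no dependency tree, and nothing to deprecate.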
I have worked on countless JS projects; sometimes they don't even build just 6 months later without faffing about with packages and versions.
If you're running a mostly-public content site that depends largely on search engine traffic, static is probably not the way to go (yet).
If your application lives mostly inside of a login, there's little reason to force yourself to render HTML from the server rather than building reusable APIs that can be shared across web, mobile, etc.
Do people usually visit multiple pages? I think this is a 90-10 situation. For 90% of the sites I visit, I reach the page through another website, either an aggregator or a search engine, and I almost never navigate further within that site. Those could be purely static for all I care.
The remaining 10%, where I stay 90% of the time, could benefit from a mostly static interface with a tiny amount of server-side rendering.
Hi, I'm working on a static renderer for the c2 source json, AMA.
I'll be hosting a mirror (I think the license allows for that but I'll have to check) and then integrating it with the one hosted on the original domain.
I also kind of want to set up some kind of edge-cached static rendering with loose constraints on how up-to-date pages need to be (because the point is providing a read-only Lynx/curl experience), but I think this might end up being too much work, technically or politically.
I do use a similar approach with HTML/CSS/JavaScript + JSON in a recent app, though the static content and the JSON data are served from the same server. With aggressive caching, serving the static content is not a problem.
Latency issues for the round trip are a different topic. Static means unchanging. In this context, pre-rendered pages stored in HTML files. CDNs may inject dynamic content into the page or filter the request, but this is a digression.
I quite like the idea of keeping the website as static as possible - it reduces the attack surface and plays nice with CDNs and you only do dynamic stuff ad-hoc...
We are B2B at the scale of 1-10k users. There is no "heavy lifting" to do in our contexts of use. Serving static assets along with the SSR content is totally acceptable. They aren't even served as separate files. We inline everything into 1 final HTML payload always.
Our largest assets are SVG logos that our clients bring to the party. We do work with dynamic PDF documents, but these are uncachable for a number of reasons.
Hypothetically, if we had millions of users and static asset IO was actually causing trouble, we'd consider using a CDN or other relevant technology.
I didn't consider small static sites. I agree with that and have done some landing pages that way that stand in front of SPAs. I guess I was commenting on OP wanting to use Rails or Spring MVC for views instead of only using them for APIs.
I'm not sure what the term for it is, monolith maybe.