For dynamic content it may make sense to offload computation to the client (although in the linked article the opposite was found to hold), but it really irks me when otherwise static pages are rendered again and again on the device of every visitor. Such a waste.
Early attempts were pretty ham-fisted and only really useful for backoffice web apps with captive users and LAN connections. The past few years have seen a lot of advancement in tools like webpack to streamline how much code gets served for a given page.
Well, what was likely happening is that the entire application was bundled into a giant JS file that was served up on every page regardless of how much of it you needed.
CSR bugs me most on mobile. What should be simple content, like a reddit page or a tweet, leaves me staring at a large pulsing logo for a while as the whole app is loaded and the content is rendered.
I get offloading some rendering (that server time adds up), but meet me halfway. At least show me some content while the rest loads.
Hm, any thought on the performance/throughput consideration?
I'm not experienced with Node.js, but something hogging the main event loop of an asynchronous server with a long computation sounds horrible. It'll mangle your throughput, and it'll create some really strange latency spikes for unrelated requests, because the slow computation blocks the event loop from delivering the response of another request. Is this not as much of a problem in practice, or would you isolate the SSR code from the REST code in separate application instances?
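Roughly what I'm imagining, as a toy sketch (plain Node.js, hypothetical /slow and /fast routes standing in for an SSR endpoint and an unrelated API call):

    // While /slow burns CPU, the event loop is held and /fast cannot be answered.
    const http = require('http');

    // Stand-in for a synchronous render such as ReactDOMServer.renderToString.
    function busyRender(ms) {
      const end = Date.now() + ms;
      while (Date.now() < end) { /* hold the event loop */ }
      return '<html>rendered</html>';
    }

    http.createServer((req, res) => {
      if (req.url === '/slow') {
        res.end(busyRender(500)); // ~500 ms of blocking work
      } else {
        res.end('fast');          // normally sub-millisecond
      }
    }).listen(3000);

    // curl localhost:3000/slow & curl localhost:3000/fast
    // The /fast response is delayed until the slow render has finished.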
In a threaded server like java application servers or go applications, that should be a non-issue since requests are handled in mostly independent threads. It might increase the necessary compute resources some, but that's expected when moving work to the server side.
Keep in mind that only the rendering blocks, not the entire request. Your database calls and other I/O are still non-blocking. The actual rendering time is measured in tens of ms in my experience, but then I'm not Walmart. Then again, I haven't seen any pages on Walmart that take half a second to render either.
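For a concrete picture, a typical SSR handler looks roughly like this (a sketch using Express and a hypothetical async db client; only the renderToString line holds the event loop):

    const express = require('express');
    const React = require('react');
    const { renderToString } = require('react-dom/server');
    const App = require('./App'); // assumed to export a React component
    const db = require('./db');   // assumed async database client

    const app = express();

    app.get('/product/:id', async (req, res) => {
      // Non-blocking: the event loop is free to serve other requests here.
      const product = await db.getProduct(req.params.id);

      // Blocking: this call holds the event loop, typically for tens of ms.
      const html = renderToString(React.createElement(App, { product }));

      res.send(`<!doctype html><div id="root">${html}</div>`);
    });

    app.listen(3000);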
I think you are doing something wrong if your render time is more than a few ms.
At what point do developers question whether they chose the right framework, when their solution requires a blocking call lasting many milliseconds in a single-threaded server:
> SSR throughput of your server is significantly less than CSR throughput. For react in particular, the throughput impact is extremely large. ReactDOMServer.renderToString is a synchronous CPU bound call, which holds the event loop, which means the server will not be able to process any other request till ReactDOMServer.renderToString completes. Let’s say that it takes you 500ms to SSR your page, that means you can do at most 2 requests per second. BIG CONSIDERATION
That is a problem of Node.js, not a problem of React. You can run multiple instances of Node.js. And if it takes 500ms to render on a server with a powerful CPU, it will take even longer on the client; furthermore, every page update can take that much time too.
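For what it's worth, running multiple instances doesn't even need an external process manager; a rough sketch using Node's built-in cluster module (one worker per core, error handling elided):

    const cluster = require('cluster');
    const os = require('os');
    const http = require('http');

    if (cluster.isMaster) {
      // One worker per core; a slow render only blocks that worker's event loop.
      os.cpus().forEach(() => cluster.fork());
      cluster.on('exit', () => cluster.fork()); // replace crashed workers
    } else {
      http.createServer((req, res) => {
        // SSR work would happen here; other workers keep serving meanwhile.
        res.end(`rendered by worker ${process.pid}`);
      }).listen(3000); // workers share the port via the cluster module
    }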
I've always found that nearly all of the time spent when requesting a page of SSR'd content (whether that be a full page or a rendered component returned to an AJAX request) was spent in the processing before the render happens. Given that, the added complexity of CSR has never been worth it for me.
This is situational, I think. My work mostly involves applications that do significant back-end processing for most requests, and my SSR is always done using a framework that has pre-compiled code doing the rendering rather than an interpreted language. (Perl and C#.) This combination adds a lot of pre-render computing and optimized rendering, which adds up to SSR being a good choice.
I'm not sure what that says about when CSR would be a good choice. If your requests don't do much back-end processing but still have a long (500ms?) response time, that seems like you're doing something wrong rather than an opportunity to use CSR. Maybe you've chosen a poorly performing rendering framework. Maybe you're trying to render too large a page (which would be even more of a problem client-side).
That doesn't make sense, unless you're talking about a specific framework that has bad performance characteristics.
Let's say you've got a request that does SSR and takes 500ms, with 450ms of that spent processing the request and 50ms spent rendering the response. If you switch to CSR, you still have to wait 450ms to process the request, and you've got to serialize the response data (e.g. render it to a more concise format than HTML), which is going to take some of that 50ms you're trying to save. So, where is the blocking you're talking about? How does CSR make it go away?
What you wrote sounds like you're describing a singleton that handles all rendering for all requests, and can only handle one request at a time. If that's the case, your framework is a toy and you need to ditch it for something that can handle multiple concurrent requests independently of each other.
I don't really see the benefits of using React in an internet shop. SPAs are good for interactive, complicated interfaces, but an internet shop is mostly a set of static pages: product lists, search, and product pages. I hope they don't use React for the static parts of the page, and only use it for rendering the cart and other interactive parts.
I tried, I really did try, to use server-side rendering for every project I've worked on. I find it really unpleasant when I hit a site, like a blog or something, that has almost entirely static content and yet still renders it locally. The experience is objectively worse, and I had no interest in making that the case for my users.
But there's a middle ground in practice. Public-facing website that users might hit from a search engine? Obviously render it on the server, and if required progressively enhance it. Something more akin to a web application? Server-side rendering makes everything slower and more complex without benefiting any users. If you aren't using Node on the server, it's even more complex.
I really hate that this is the case. I had this firm idea in my head – "every URL is an HTML page, and users should be able to take that URL, request it with curl or whatever, and see the content of the page". In practice, we were developing a highly interactive, domain-specific application that relied heavily on client-side scripting to be realistically useful. There were no users without JS that I was able to find. We were doing a huge amount of work to follow a rule of progressive enhancement that, in practice, slowed down the experience for users, benefited nobody, and made development roughly an order of magnitude harder.
Can you elaborate? My understanding is that only the initial version of your components is server-side rendered, not everything. Then JS takes over once it's loaded. It shouldn't change your existing code much.
Yeah, in theory it's pretty simple. If you're using Javascript on both the server and client, it's definitely easier. But I guess I'm conflating server-side rendering with "pages that don't require Javascript to work", which is definitely more work.
I'm sorry if I wasn't clear, but I don't think it's an argument without merit.
I mean that if you want to build a UI in the manner described in the article—where you essentially use a JS UI framework to render the application server-side, and subsequently make it interactive on the client side—then doing so without the server being Javascript is more complex.
I have direct experience of doing this – I built a complex web application that used this approach from within Rails. Essentially this replaced Rails built-in template renderer with a JS environment running in V8. It was pretty cool, in that it meant writing templates once and having them server-side rendered but also interactive on the client side – but it added a huge amount of complexity.
Is there something I was babbling about that I missed?
It allows you to use the same tools to render the user interface on both the server and the client. For example: write a UI using React components, render a static version on the server, and use those same React components running in the browser to render an SPA using the statically-rendered template as a seed.
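In code terms, a minimal sketch (assuming a shared App component and Express on the server; file names are illustrative):

    // server.js - render the component tree to static markup as the seed.
    const express = require('express');
    const React = require('react');
    const { renderToString } = require('react-dom/server');
    const App = require('./App'); // the same component used on the client

    express().get('*', (req, res) => {
      const html = renderToString(React.createElement(App));
      res.send(`<!doctype html>
        <div id="root">${html}</div>
        <script src="/bundle.js"></script>`);
    }).listen(3000);

    // client.js (bundled into bundle.js) - hydrate attaches event handlers to
    // the server-rendered markup instead of re-rendering it from scratch.
    const React = require('react');
    const ReactDOM = require('react-dom');
    const App = require('./App');

    ReactDOM.hydrate(React.createElement(App), document.getElementById('root'));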
The options if you are not using JavaScript on the server are less appealing: calling out to a JS interpreter to do your templating, building duplicate templates for server and client side, or trying to sprinkle interactive client magic on the page after it’s been loaded.
They’re all possible, but they are more complex than “one set of templates rendered everywhere” - what people were calling “isomorphic” I suppose, though it’s a silly term.
Have I clarified? You seem unusually upset about what I thought was a relatively minor aspect of my initial post.
unusually upset? I've written like, two sentences in this thread. But apparently it was enough to get downvoted & flagged. I'm allergic to bullshit, but not to discourse.
Your initial post sounded like nonsense to me, like you were arbitrarily elevating JavaScript for no particular reason. But now that you've explained you're talking in the context of SSR, I understand. Maybe that was implied and I just missed it, even if that's what this whole HN thread is about. I know I was thinking in terms of JSON responses and couldn't understand why one would be so much better than the other.
So, you're right: you do have to do additional work when JS isn't your server language, your pages are rendered with JS, and you want to pre-render pages.
But I wonder: is it worth moving your entire backend into JS just to get SSR? Wouldn't calling out to a JS interpreter be the more... SOA approach, or even the more pragmatic one? Or is that exponentially more difficult than moving everything into Node/JS?
I wonder why the article doesn't mention renderToNodeStream which is new to React 16? That allows streaming the rendering to the browser. Still blocking for that thread, but probably a better UX than renderToString.
Well, it of course depends on what your content is, but you can still put a caching service in front.
I set up the infrastructure for a client's e-commerce site. They use Magento. I put Varnish in front of it and it's much faster than without. Most pages are in fact static (home, categories, products); the dynamic parts (like prices or stock) are AJAX-loaded after the content renders, so it's great for UX.
Of course it makes no (well, less) sense for an SPA. But for content websites, I think it's more logical to do SSR than to force loading all the JS just to see what's on the page.
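Even without Varnish, the same idea works at the application level. A toy sketch (Express plus a plain in-memory Map; a real setup would want TTLs and eviction, or Varnish/a CDN in front as described above):

    const express = require('express');
    const React = require('react');
    const { renderToString } = require('react-dom/server');
    const App = require('./App'); // assumed component

    const cache = new Map();      // toy cache: no TTL, no size limit
    const app = express();

    app.get('*', (req, res) => {
      let html = cache.get(req.url);
      if (!html) {
        html = `<!doctype html><div id="root">${
          renderToString(React.createElement(App, { url: req.url }))}</div>`;
        cache.set(req.url, html); // dynamic bits (prices, stock) load via AJAX
      }
      res.send(html);
    });

    app.listen(3000);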
> SSR throughput of your server is significantly less than CSR throughput. For react in particular, the throughput impact is extremely large. ReactDOMServer.renderToString is a synchronous CPU bound call, which holds the event loop, which means the server will not be able to process any other request till ReactDOMServer.renderToString completes. Let’s say that it takes you 500ms to SSR your page, that means you can do at most 2 requests per second. BIG CONSIDERATION
If your `renderToString` is taking 500ms you should really consider using `renderToNodeStream` [1]. It will significantly reduce TTFB and runs asynchronously, letting your Node.js server handle more incoming requests. This [2] blog post goes into more detail.
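Roughly, switching a string-based handler over to the streaming API looks like this (a sketch, assuming an App component and Express):

    const express = require('express');
    const React = require('react');
    const { renderToNodeStream } = require('react-dom/server');
    const App = require('./App'); // assumed component

    express().get('*', (req, res) => {
      res.write('<!doctype html><div id="root">');
      const stream = renderToNodeStream(React.createElement(App));
      stream.pipe(res, { end: false });          // flush markup as it is produced
      stream.on('end', () => res.end('</div>')); // close the document afterwards
    }).listen(3000);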
Also, if you don't want to use streams, Rapscallion [3] provides an asynchronous alternative to `renderToString`.
What baffles me is how long that is. In that time, a CPU executes about a billion instructions. If you include GPUs, they can render multiple frames of complex scenes to million-plus-pixel buffers in that time. And people are having trouble rendering a string?
Hm, I'd guess it's latency to backend servers. Cores are plenty fast for just about every language.
If you have a good network between systems, each request to a backend service adds at least 0.5 ms of response time due to datacenter RTT, so once you hit 250 sequential queries you've already spent 125 ms on round trips alone. And 250 queries is just one N+1 problem somewhere.
Not sure if the benchmarks still hold now that React 16 is out, but there's definitely overhead in the general approach.
> HTTP can only send strings/bytes, so it makes sense to start there and skip the overhead of building up a virtual DOM representation first. There’s a reason string based templating languages are so much faster than our modern frameworks using a virtual DOM implementation.
Ah, I was wondering about that. Running multiple Node.js instances won't eliminate the impact of long-running synchronous methods on other requests processed in the same instance. It just reduces it, because a smaller share of requests ends up held hostage by a bad one.
Unless you can use your load balancing to steer requests that risk long-running synchronous work away from the instances doing fast work. We're doing something like this: we have some really ugly legacy code around, which tends to abuse and overuse the application server's JDBC pool no matter how we tune it. That causes every other request on the same instance to crash and burn, and it gets even more fun when a customer retries the same request 20x and the load balancer does round-robin. So we route the bad requests to a single martyr instance and everything works fine again, except for that broken old code.