Hear, hear! I don't think the assertion that SPAs are faster to build / more scalable holds up to much scrutiny.
1. SPAs still require some back end to render the initial page / payload.
2. SPAs cannot be trusted, so you need to duplicate things like validations.
2a. "But you can just do an ajax call to validate records on the back end, and get a json response!" — how's that any more performant than just doing a traditional page load? With pjax or turbolinks, the performance difference between these options is even closer.
I absolutely agree with this. With Turbolinks I get the fast-page-load feel of an SPA with much less code and a far superior, more maintainable architecture.
The idea of an SPA is to avoid sending 'useless' data on every page change (like the HTML layout) and to send only the strictly necessary data as a JSON payload, especially when coupled with good caching (a sketch below).
It also offloads computation from the server to clients (templating).
So yes, SPAs are generally heavier, but they are more powerful. It's a matter of tradeoffs.
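As a minimal sketch of that idea (the /api/article endpoint and Article shape are hypothetical): the server ships only the data, the client does the templating, and the browser cache can absorb repeat requests.

```typescript
// Hypothetical endpoint: /api/article/:id returns only the data that
// changed, not the surrounding HTML layout.
interface Article {
  title: string
  body: string
}

async function showArticle(id: number): Promise<void> {
  // force-cache lets the browser reuse a previously fetched payload,
  // which is where the "good caching" part pays off.
  const res = await fetch(`/api/article/${id}`, { cache: "force-cache" })
  const article: Article = await res.json()
  // Templating happens on the client; real code would escape these
  // values before injecting them.
  document.querySelector("#content")!.innerHTML =
    `<h1>${article.title}</h1><p>${article.body}</p>`
}
```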
The primary advantage of SPAs is the increased interactivity of the UI. Instant navigation is nice, but that's an implementation detail: an SPA could just as easily lazily load that data, and if it doesn't, it must chew up memory on the client and then deal with the complexity of synchronizing with the back end.
It's a trade off, of course. SPAs are a reversion to the classic client-server setup, and there are advantages and disadvantages to both that and the newer web model.
> One is that you still need to hydrate and format that HTML somehow
htmx expects HTML back from the server; there isn't any hydration or client-side templating (unless you want that).
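For illustration, a minimal htmx sketch (the /messages endpoint and ids are made up): the server answers with a ready-made HTML fragment, and htmx swaps it into the page.

```html
<!-- GET /messages returns an HTML fragment; htmx swaps it into
     #message-list. No hydration, no client-side templates. -->
<button hx-get="/messages" hx-target="#message-list" hx-swap="innerHTML">
  Refresh messages
</button>
<div id="message-list"></div>
```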
> Second, you have to take on all the rendering load instead of just passing through some JSON-formatted DB results...
Formatting database data into a JSON string is not significantly less CPU-intensive than formatting it into an HTML string, and both are a rounding error compared with the expense of connecting to the data store.
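To illustrate the claim, both serializations below are a single linear pass over the same rows (the row shape is hypothetical):

```typescript
interface Row { id: number; name: string }

// Serializing to JSON walks the rows once...
function toJson(rows: Row[]): string {
  return JSON.stringify(rows)
}

// ...and rendering to HTML walks them once too. The CPU difference is
// marginal next to querying the data store in the first place.
function toHtml(rows: Row[]): string {
  const items = rows.map((r) => `<li data-id="${r.id}">${r.name}</li>`)
  return `<ul>${items.join("")}</ul>`
}
```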
> Final gripe...
Yes, an irony of my life is that I had to learn a lot of JavaScript in order to avoid writing much JavaScript.
It's all about the use case. If you're on an ecommerce website and click between category and PDP (product detail) pages 5-10 times while browsing, an SPA is going to make that whole interaction an order of magnitude faster. Since you're only loading the smallest possible payload (JSON) containing only the things that have changed, it is optimized in a way that is never going to be achievable with server-side rendering.
> Practical SPAs have many more network roundtrips than the equivalent server-rendered web interface. Every AJAX request is an extra roundtrip, unless it can be handled in parallel with others, in which case you're still dependent on the slowest request to complete.
This is largely solved by innovations like GraphQL (which you don't need an SPA to use). Pages that require multiple API calls can show their UIs progressively with appropriate loading indicators. For SPAs with long sessions, it's arguably a good thing to have multiple API calls, because each can be cached individually: the fastest API call is the one whose response you've already cached. This is stuff we were doing at Mozilla in 2012; it's nothing new.
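For example, a single GraphQL request can gather what would otherwise be several REST roundtrips; the query, schema, and /graphql endpoint here are illustrative only:

```typescript
// One POST replaces several REST calls; each field selection below
// would otherwise be its own request. Hypothetical schema.
async function loadDashboard(userId: string) {
  const query = `
    query Dashboard($id: ID!) {
      user(id: $id) { name avatarUrl }
      notifications(userId: $id, first: 10) { id text }
    }`
  const res = await fetch("/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables: { id: userId } }),
  })
  return (await res.json()).data
}
```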
There's also nothing stopping you from making purpose-built endpoints that return all the information you need, too. Your proposed solution (SSR) is literally just that, but it returns HTML instead of structured data.
Surely, if you can't produce an SPA with equivalent or better performance than a more traditional architecture, then don't build an SPA.
Or, even better, use a simpler solution that gives me 80% of the benefits of an SPA: Turbolinks, PJAX, intercooler.js, or even a light sprinkling of good old AJAX.
Even the full page loads aren't necessary with Turbolinks or PJAX. If I had a choice, that's definitely what I'd be using for what I'm working on now at an ecommerce company. But the boss insists on microservices, and once the advantages that come with a monolith are gone, the SPA is just a better approach, imo.
Turbolinks and PJAX are really what a lot of folks wanted before the web development community at large was introduced to client-side routing through SPAs in the post-Knockout.js days.
Agreed. SPAs only make sense for a few websites like Trello; for most other websites, plain dynamic HTML with a dash of AJAX here and there is much better.
In the recent discussions about Discourse, it was said that on slower connections, it feels way snappier than a server-side solution would be.
Frankly, I'm a server-side guy, so I prefer that kind of development, but there are advantages to SPAs, and like any good engineer, you should use the right tool for the job. Rails is modular enough that you can do whatever you want, Turbolinks is just a gem, so if you don't want to use it, you can just pull it out of the Gemfile.
I'd be willing to bet that the less dynamic interaction per page, the less an SPA makes sense. ;)
That said, I can also see how something like Ember gives you a nice way to do API-first development, so even if your site is more static, it could make sense.
As always: It Depends. Don't believe anyone who tells you they have a silver bullet.
The key here is this: given equivalently good and robust implementations, equally capable teams, the same UX, etc., a traditional full-stack MVC application with a modern AJAX tool such as Livewire or Hotwire takes a fraction of the time and cost to build compared with an SPA, and the result is far less complex and easier to maintain.
I've worked in both kinds of environments, and unless you're building an offline-first app or Google Maps, or you're driven by dogma... SPAs make absolutely no sense from an engineering point of view.
SPAs obviously have some positive sides, e.g. state management, but those come at the price of increased complexity, especially if server-side rendering is required.
In the majority of cases, especially when building tools like back offices, that's not what you want; you just want to be able to render forms and tables and save them in a convenient way.
I've found the turbolinks + stimulus.js combo to work surprisingly well. Actually, I use only one stimulus.js controller in the majority of cases: one that makes an AJAX request and reloads the page (with the help of turbolinks) on success (a sketch below). If the page load is fast enough, the user experience is the same as if you had changed the DOM with JS manually. Of course, the requirement is that the reloaded page reflects the changes.
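Roughly this, as a sketch (the controller name and wiring are mine, and error handling is elided):

```typescript
// The one controller: submit via AJAX, then let Turbolinks re-render.
import { Controller } from "stimulus"
import Turbolinks from "turbolinks"

export default class extends Controller {
  async save(event: Event) {
    event.preventDefault()
    const form = this.element as HTMLFormElement
    const response = await fetch(form.action, {
      method: "POST",
      body: new FormData(form),
    })
    if (response.ok) {
      // Reload the current page via Turbolinks; the re-rendered page
      // is expected to reflect the change, so no manual DOM patching.
      Turbolinks.visit(window.location.href)
    }
  }
}
```

Attached in markup with something like data-controller="remote-form" and data-action="submit->remote-form#save" on the form (names mine).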
Other stimulus.js controllers are there for the cases that don't fit the aforementioned pattern. That sounds primitive, but it can take you a really long way without turning your JS into a monster.
In addition, you don't need to care about routing, HTML validation just works, you can wrap existing HTML components, and you can even inject React apps here and there if you really need to.
[EDIT] Forgot to say: this approach doesn't force any language or architecture on the server side.
I've seen this complaint about turbolinks before, and it is a pain. But if that's what I'm giving up to avoid duplicating my state in an SPA, writing a ton more code to create an API and then more to hook it up to the backend, double error checking, losing access to my whole database when creating a page, having to create a ton of endpoints to get the data I need, etc., then I'll take that trade.
I do think SPAs still have their use case for complex screens, but I would like to see such a screen carry only the incremental complexity it needs. Svelte would be my go-to for that page at the moment.
It's a bit like what HEY did with their mobile app: most of the app is SSR HTML, but the main inbox screen calls out to the JSON API from a native screen.
I say all this, but I do love the component nature of frontend frameworks; it is nice to put things in boxes like that and have pulled-in data change the view reactively. But this new Hotwire turbo frame idea sounds much simpler again. Say we have a chart we want to update over time: in SPA world, we would talk to some API, change some context/store/component prop, etc., and let the re-render happen. With a turbo frame, we just make a call out to an endpoint to get the new, updated HTML (sketch below).
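Something like this sketch (the endpoint and id are hypothetical): the frame just re-fetches rendered HTML from the server instead of patching client-side state.

```html
<!-- The frame lazily loads its contents from /charts/revenue; updating
     the chart is just fetching that URL again and letting Turbo swap in
     the freshly rendered HTML. No store, no re-render pipeline. -->
<turbo-frame id="revenue-chart" src="/charts/revenue">
  Loading chart...
</turbo-frame>
```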
I would love some examples of people comparing and contrasting the exact same web pages and functionality built with SSR + Hotwire vs. an SPA. Take this page for example: https://nomadlist.com/. It's just PHP SSR, but on first viewing I would have thought it warranted an SPA.
About SPAs, the article says "(I don't buy sending tons of JavaScript to a browser will ever be as fast as just some damn HTML. Also it's not as easy as putting some HTML files on the Internets)" -- I don't think the author really understands what SPAs are and what their advantages are. An SPA is definitely faster in the sense that the application may work offline and you don't have multiple requests asking for data: it's faster to retrieve it locally than remotely.
No offense, but I think it's disingenuous to compare the SPA movement with a guy with vague graphs that seem to suggest going back to what people used to do 5-10 years ago.
The SPA movement "threw out" the notion of ajaxing HTML snippets in favor of ajaxing structured data for many reasons: better separation of concerns, better asset cacheability, better defaults against XSS, better infrastructure for multi-client architectures, the list goes on. I'd argue that security w/ data endpoints is far easier to audit and reason about than the old school RPC-style send-me-html-when-I-do-X server interfaces.
I agree, and I'm more critical of the heavy front-end JavaScript frameworks that are part and parcel of SPAs. Things like the Turbolinks hack you mention provide, IMO, the best user experience, especially fast initial page load. But that can be messy to develop, so we end up with Angular or React, which smooth over the development issues but make pages fat and slow for users.
I have begrudgingly been pulled into SPAs as a developer, and at first I was very skeptical.
Now that I see what things like Angular can do, with lazy-loaded components, downloading only JSON data, and letting the client render the DOM, my pages are actually much faster and the UX is vastly better.
Yes, you need to download the Angular libs, but so many pages use them now that they are likely cached, and they are negligible in size on a fast connection.
After that, it's client-side routing and mostly just downloading JSON from a REST API. You don't need the server to push a 5,000-row table with all the markup; you just grab the data and have the browser construct your table (sketch below).
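Roughly like this (the /api/rows endpoint and row shape are hypothetical):

```typescript
// Fetch the rows as JSON and let the browser build the table, instead
// of shipping 5,000 rows of server-rendered markup.
interface TableRow { id: number; name: string; total: number }

async function renderTable(tbody: HTMLTableSectionElement): Promise<void> {
  const rows: TableRow[] = await (await fetch("/api/rows")).json()
  for (const row of rows) {
    const tr = tbody.insertRow()
    tr.insertCell().textContent = String(row.id)
    tr.insertCell().textContent = row.name
    tr.insertCell().textContent = row.total.toFixed(2)
  }
}
```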
And yes, you can still copy/paste the URL. And save the page as HTML. And everything else you can do with a "non-SPA" page.
It’s not that simple: you need to factor in size and latency, too. If my SPA loads 2 MB of JavaScript and then makes 50 API calls, it’s going to be a lot slower than the server sending 20 KB of HTML in a single response.
JSON may or may not be smaller or faster: if you have to load data you don’t need or, worse, chase links, it’ll be worse. GraphQL may help, but that’s bringing it closer to server-side performance, not exceeding it.
Things which aren’t possible otherwise are the best argument for SPAs, but another approach is progressive enhancement: you can load quickly and then load features as needed rather than locking in a big up-front cost if all you need are real-time updates or push notifications. There’s a spectrum of capabilities here and there won’t be a single right answer for every project.