I won't say this is a step backwards in terms of progress but it is a step back to the way things used to be done in the early days of Ajax - albeit with a slightly more modern approach.
It is appropriate for some applications and not for others. That's just my view.
AjaxContentPanels or something to that effect. Those things were a nightmare. At the time, asp.net pretended to be stateful by bundling up the entire state of the page into "ViewState" and passing it back and forth client to server. Getting that to work with those panels was more work than just ajax-ing the content and injecting it with jquery.
In the Microsoft-verse, this might also draw some comparisons to the more modern server-side blazor.
Oh yeah. I remember that ViewState could reach 100s of KBs on a page if you weren't careful. It was a huge juggling act between keeping state in your input fields vs ViewState.
My thought exactly, though I fully support that. I often rant about how the modern web is billions of layers of duck tape over duck tape and has become an unmanageable mess of libraries, frameworks, and resources, all while javascript remains the most outrageous and absurd language ever created. I'm by no means a fan of rails, or ruby for that matter, but I think things like these are a considerably better alternative than all the ridiculous libraries and frameworks everyone uses, which result in megabytes of javascript and require corporate-grade bandwidth, at least an 8th-gen i7, and at least 8 GB of memory to open. And all that to open a website which has 3 images and a contact form. I mean, someone should create a package that analyzes websites and creates a minimum requirements manifest. It's good to see that there are people who are trying to bring some sanity.
Preach! Websites don’t seem all that much better to me than they did 10 years ago [^fn], so what are we gaining with all these much more complex and fragile tools?
[^fn]: Arguably, the web is worse with chat bots, sticky headers, and modals constantly vying for your attention.
> Arguably, the web is worse with chat bots, sticky headers, and modals constantly vying for your attention.
We can blame this on the MBA types. I've literally never heard a software engineer say "hey, let's make this pop-up after they've already been looking at the page for a minute!" or anything like it.
Unfortunately I have to disagree - if there weren't any engineers around to implement the dark patterns they wouldn't be as prevalent. Maybe this calls for an equivalent of the Hippocratic Oath but in the tech world?
But there are surprisingly few layers on layers. Part of what has been amazing about the web is that the target remains the same. There is the DOM. Everyone is trying different ways to build & update the DOM.
Agreed that there are better alternatives than a lot of what is out there. We seem to be in a mass consolidation, focusing around a couple very popular systems. I am glad to see folks like Github presenting some of the better alternatives, such as their Catalyst tools[1], which speed things up both developer-wise & page-wise (via "Actions") & give some patterns for building WebComponents.
The web has been an unimaginably stable platform for building things, and has retained its spirit while allowing hundreds of different architectures for how things get built. Yes, we can make a mess of our architectures. Yes, humanity can over-consume resources. But we can also, often, do it right, and we can learn & evolve, as we have done, over the past 30 years we've had with the web.
If by surprisingly little, you mean 4 pages and 500MB of requirements for a "hello world" project with the "modern" web, then yes. The DOM has always been a mess, much like javascript. And the fact that no one has tried to do something about it contributes to the mountains of duck tape. It was bad enough when angular showed up, but when all the other mumbo jumbo arrived, like react, vue, webpack and whatnot, is when it all went south. I refuse to offend compilers and call this "compiling", but the fact that npm takes the same amount of time to "compile" its gibberish as rustc takes to compile a large project (with the painfully slow compilation that comes with rust by design) is a clear indication that something is utterly wrong.
> If by surprisingly little, you mean 4 pages and 500mb of requirements for a "hello world" project with the "modern" web, then yes.
React is well under 20k.
FWIW when optimizing my SPA, my largest "oops" in regards to size were an unoptimized header image, and improperly specified web fonts.
There are some bloated Javascript libraries out there, yes. But if you dig into them you will often find that they are bloated because someone pulled in a bunch of binary (or SVG...) assets.
Ah react, the biggest crap of them all. A 12 year old with basic programming skills is definitely capable of designing a better "framework". Yes, everything frontend is quoted because it's nothing more than a joke at this point. Back to react, and ignoring all the underlying problems coming from the pile of crap that is js, let's kick things off with jsx. The fact that someone developed their own syntax (I'm not sure if I should call it syntax or markup or something else) makes it idiotic: it's another step added to the gibberish generation. It's full of esoteric patterns and life cycles which don't exist anywhere in the real cs world. React alone provides little to nothing, so again you need to add another bucket of 1000 packages to make it work. Compare it to a solid backend framework that isn't js: all the ones I've ever used come with batteries included. The concept of components and the idiotic life cycles turn your codebase into an unmanageable pile of callbacks, and sooner rather than later you have no clue what's coming from where. Going into the size, the simple counter example on the react page is 400kb. Do I need to explain how much stuff can be packed in 400kb? For comparison, I had tetris on my i486 in the early 90's which was less than 100kb, chess was a little over 150kb. Christ, there was a post here on HN about a guy who packed a fully executable snake game into a qr code.
> Ah react, the biggest crap of them all. A 12 year old with basic programming skills is definitely capable of designing a better "framework".
You're literally a stereotypical Hacker News commenter.
I also find the modern frontend a bit too complicated but this is just an unreasonable statement.
Of all the problems I have with React, and I do have a few, JSX is not one of them.
If you are going to be using a language to generate HTML, you are either going with a component approach that wraps HTML in some object library that then spits out HTML, or you are stuck with a templating language of some sort. (Or string concatenation, but I refuse to consider that a valid choice for non-trivial use cases.)
JSX is a minimal templating language on top of HTML. Do I think effects are weird and am I very annoyed at how they are declaration order dependent? Yup. But the lifecycle stuff is not that weird, or at least the latest revision of it isn't (earlier editions... eh...). The idea of triggering an action when a page is done loading has been around for a very long time, and that maps rather well to React's lifecycle events.
> React alone provides little to nothing
Throw in a routing library, and you are pretty much done.
Now another issue I do have is that people think React optimizes things that it in fact does not, so components end up being re-rendered again and again. Throw Redux in there and it is easy to have 100ms latency per key press. Super easy to do, and avoiding that pitfall involves understanding quite a few topics, which is unfortunate. The default path shouldn't lead to bad performance.
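The usual fix is to memoize the expensive subtrees so a keystroke only re-renders what actually changed. A rough sketch of the idea (ExpensiveList, its props and the page are made up, not from any real app):

```jsx
import React, { memo, useState } from "react";

// Without memo, this child re-renders on every keystroke in the parent,
// even though its props never change.
const ExpensiveList = memo(function ExpensiveList({ items }) {
  return (
    <ul>
      {items.map((item) => (
        <li key={item.id}>{item.label}</li>
      ))}
    </ul>
  );
});

function SearchPage({ items }) {
  const [query, setQuery] = useState("");
  return (
    <div>
      {/* Typing only updates local state; the memoized list render is skipped */}
      <input value={query} onChange={(e) => setQuery(e.target.value)} />
      <ExpensiveList items={items} />
    </div>
  );
}
```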
> The concept of components and the idiotic life cycles
Page loads, network request is made. Before React people had listeners on DOM and Window events instead, no different.
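Roughly the old pattern, for comparison (the endpoint and element id here are made up):

```js
// Pre-React version of "page loads, network request is made":
// wait for the DOM, fetch the data, patch it into the page.
document.addEventListener("DOMContentLoaded", () => {
  fetch("/api/items") // hypothetical endpoint
    .then((response) => response.json())
    .then((items) => {
      const list = document.getElementById("items"); // hypothetical element
      list.innerHTML = items.map((item) => `<li>${item.label}</li>`).join("");
    });
});
```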
Components are nice if kept short and sweet. "This bit of HTML shows an image and its description" is useful.
> Do I need to explain how much stuff can be packed in 400kb?
No, I've worked on embedded systems, I realize how much of a gigantic waste everything web is. But making tight and small React apps is perfectly possible.
And yes, if you pull in a giant UI component library things will balloon in size. It is a common beginner mistake, I made it myself when I first started out. Then I realized it is easier for me to just write whatever small set of components I need myself, and I dropped 60% of my bundle app size.
In comparison, doing shit on the backend involves:
1. Writing logic in one language that will generate HTML and Javascript
2. Debugging the HTML and Javascript generated in #1.
And then someone goes "hey you know what's a great idea? Let's put state on the back end again! And we'll wrap it up behind a bunch of abstractions so engineers can pretend it actually isn't on the back end!"
History repeats itself and all that.
SPAs exist for a reason. They are easier to develop and easier to think about. And like it or not, even trivial client side functionality, such as a date picker, requires Javascript (see: https://caniuse.com/input-datetime).
SPAs, once loaded, can be very fast and scaling the backend for an SPA is a much easier engineering task (not trivial, but easier than per user state).
Is all of web dev a dumpster fire? Of course it is. A 16 year old with VB6 back in 1999 was 10x more productive than the world's most amazing web front end developer is nowadays. Give said 16yr old a copy of Access and they could replace 90% of modern day internally developed CRUD apps at a fraction of the cost. (Except mobile support and all that...)
But React isn't the source of the problem, or even a particularly bad bit of code.
jsx is a retarded idea because it adds an abstraction over something brutally simple (html). Abstractions are good when you are trying to make something complex user-friendly and simple.
> Throw in a routing library, and you are pretty much done.
Ok routing library, now make an http request please without involving more dependencies....
> Throw Redux in
See, exactly what I said: we are getting to the endless pages of dependencies.
> 100ms latency per key press
100ms latency??!?!?!? In my world 100ms are centuries.
> 1. Writing logic in one language that will generate HTML and Javascript 2. Debugging the HTML and Javascript generated in #1.
I don't have a problem with that. At the end of the day you know exactly what you want to achieve and what the output should be, whereas with react it's a guessing game each time. We are at a point where web "developers" wouldn't be able to tell you what html is. With server-side rendering, from a maintenance perspective you have the luxury of using grep and not relying on aftermarket add-ons, plugins and IDEs in order to find and change the class of a span.
The term SPA first came to my attention when I was in university over 10 years ago. My immediate thought was "this is retarded". Over a decade later, my opinion hasn't changed.
> 100ms latency??!?!?!? In my world 100ms are centuries
Yup, that's crappy. As for the ease of it happening: the Work At A Startup page used to have this issue (may still, haven't looked lately), which shows that it isn't hard to make happen accidentally.
As I said, it is a weakness of the system.
> jsx is a retarded idea because it adds an abstraction over something brutally simple (html)
Have you seen how minimal of an abstraction jsx is? It is a simple rewrite to a JS function that spits out HTML, but JSX is super nice to write and more grep-able than the majority of other templating systems.
I have a predisposition to not liking templating systems, but JSX is the best part of React.
Notably it doesn't invent its own control flow language, unlike most competitors in this space.
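For a sense of how thin the abstraction is: JSX compiles down to plain function calls, and the control flow is just JavaScript. Roughly (with the classic createElement transform; the newer automatic runtime emits slightly different calls):

```jsx
import React from "react";

// JSX source:
function Greeting({ names }) {
  return (
    <ul className="greetings">
      {names.map((name) => (
        <li key={name}>Hello, {name}!</li>
      ))}
    </ul>
  );
}

// ...compiles to roughly this:
function GreetingCompiled({ names }) {
  return React.createElement(
    "ul",
    { className: "greetings" },
    names.map((name) =>
      React.createElement("li", { key: name }, "Hello, ", name, "!")
    )
  );
}
```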
> My immediate thought was "this is retarded".
Well the most famous SPA is gmail and it's rather popular, you may have heard of it. It is bloated now, but when it first debuted it was really good. Webmail sucked, then suddenly it didn't.
Google maps. Outlook web client. Pandora. Online chat rooms, In browser video chat, (now with cool positional sound!)
SPA just means you are fetching the minimum needed data from the server to fulfill the user's request, instead of refetching the entire DOM.
They are inherently an optimization.
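i.e. instead of re-requesting and re-parsing a whole document, you ask for just the data and patch the one region that changed. A rough sketch (endpoint and markup are made up):

```js
// Fetch only the data that changed and update one element,
// rather than reloading the entire page.
async function refreshUnreadCount() {
  const response = await fetch("/api/unread-count"); // hypothetical endpoint
  const { count } = await response.json();
  document.querySelector("#unread-badge").textContent = count; // hypothetical element
}
```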
Non-SPAs can be slow bloated messes as well, e.g. the Expedia site.
appreciated your previous tweet but I am falling asleep here & none of your rebuttals feel like they address the topics raised in a genuine/direct fashion. so many stronger points to make against this
your posting is tribalistic & cruel & demeaning, it attacks and attacks and attacks. this is so hard to grapple with, so so aggressive & merciless & disrespectful. I beg you to reassess yourself. don't make people wade through such mudslinging. please. there's so few better ideas anywhere, & so much heaped up, so much muck you rake us through. please don't keep doing this horrible negative thing. it's so unjust & so brutally harsh.
> jsx is a retarded idea because it adds an abstraction over something brutally simple(html).
what programming languages do it better?
one of react's greatest boons, its greatest innovations, in my mind, is that it gave up the cargo cult special purpose templating languages that we had for almost two decades assumed we needed. it brought the key sensibility of php to javascript: that there was no need, no gain, in treating html as something special. it should be dealt with in the language, in the code.
if you have other places that have done a good job of letting you build html directly, without intermediation, as you seem to be a proponent of, let me/us know. jsx seems to me closer to what you purport to ask for than almost any other approach that has come before! your words are a vexing contradiction.
> Ok routing library, now make an http request please without involving more dependencies....
please stop being TERRIFIED of code. many routing libraries and their dependencies are tiny. stop panicking that there is code. react router v6 for example is 2.9kb. why so afraid bro?
this is actually why the web is good. because there are many many many problems, but they are decoupled, and a 2kB library builds a wonderful magical consistent & complete happy environment that proposes a good way of tackling the issues. you have to bring some architecture in, but anyone can invent that architecture, the web platform is unopinionated ("principle of least power" x10,000,000), and the solutions tend towards tiny.
The only thing I'm openly disrespecting, beyond the unholy mess that is the web of the 21st century, is react and potentially its designers (I was raised not to be offended, but if they are, good riddance). The web is in the worst shape it's ever been. I'm not terrified of code, I've been writing code for over 20 years. I hate antipatterns and spaghetti code, which is what the modern web is, top to bottom, frameworks and libraries included. The main idea behind javascript was to give small and basic interactivity which is entirely self-contained. Is it now? The fact that the modern web relies on tons of known and unknown projects, complex ci/cd pipelines and gigabytes of my hard drive is a clear indication that the js community messed it up. Very similar situation as php, which was intended to be a templating engine (and for that purpose it is brilliant) but, just like js, was turned into Frankenstein's monster (still to a lesser degree). And don't get me started on the endless security issues npm poses. I'm blown away by the fact that those are exploited so little - 15 year old me would have been in heaven if given those opportunities, along with half of my classmates. I seriously wonder what teenagers do these days.
Surely your point could be made better without the hyperbole?
"Most outrageous and absurd language ever," "Megabytes of javascript", "corporate-grade bandwidth", "8th-gen i7 and 8GB of memory" to open "3 images and a contact form."
I'm sure you can find one or two poorly-optimized sites that have 2MB of javascript to download, but it's by no means the necessary outcome of using "ridiculous libraries and frameworks," and not even a particularly common one.
> it's by no means the necessary outcome of using "ridiculous libraries and frameworks," and not even a particularly common one
The real world disagrees with you; go check out any major website and observe as your laptop's fans spin up.
However I think the main problem here isn't the symptom (websites are bloated) but the root cause of the problem. I'm not sure if it's resume-driven-development by front-end developers or that they genuinely lost the skill of pure CSS & HTML but everyone seems to be pushing for React or some kind of SPA framework even when the entire website only needs a handful of pages with no dynamic content.
Interestingly, waterproof fabric-based tape was originally called "duck tape" (for its waterproof quality). The same kind of tape was later also called duct tape, but it's actually pretty terrible for ducts. You want to use the all-aluminum tape for ducts. https://www.mentalfloss.com/article/52151/it-duck-tape-or-du...
"Duck tape" originally referred to tape made from duck cloth. They started using it for duct work, and began to call it "duct tape", to the point where "duck" fell out of common use and was able to be trademarked. They've also stopped using it for ducts.
So call it either one and people will know what you're talking about.
Check out Gaff Tape as a replacement for the duct-tape at home use-case.
And for actual ducts you'll want to use foil-tape because temperature changes wreck the adhesion of duct-tape, then the moisture leaks into the walls/ceiling which is $$$$ bad.
> And for actual ducts you'll want to use foil-tape because temperature changes wreck the adhesion of duct-tape, then the moisture leaks into the walls/ceiling which is $$$$ bad.
This strongly depends on the type of duct. Flex ducts that are a plastic skin over a wire coil don't work so well with aluminum tape.
it's funny how culture shapes our designations of the same thing. In (especially US) english it's duct tape, as the product is primarily known for installation work, and it's often referred to as "duck tape" after its well-known brand, whereas in (german speaking) europe we also employ an English word for it, but call it "gaffer tape", for its usage by light operators in the event business (so-called gaffers).
Gaffers tape is a very different kind of tape similar only in appearance. It’s got a soft cotton content and is used to keep wires taped to things like rugs seamlessly so that people walking around your studio/tv-station/theatre etc don’t accidentally trip on it, potentially damaging the extremely expensive attached equipment in the process. If you’ve ever tried using “duct tape” as gaffers tape, you’d have a bad time, as it wouldn’t be that great at keeping wires down AND it’s likely to leave a residual adhesive on the floor when you take it off.
Duck tape is the stuff developed for the US army to waterproof things closed. Post-war, they made it silver instead of green and marketed it for use with ducts (since being waterproof made it also SEEM like a good candidate for the job in heating systems), but it's pretty terrible for this purpose since temperature changes degrade the adhesive rapidly.
The tape you actually want to use for ducts is foil-backed tape.
In short, it was and still is a great marketing gimmick, but Duck Tape was only ever “ok” at keeping things water proof and only looks like gaffers tape or the tape you want to use on ducts.
No, I'm talking about js as a whole. The standard library is crap and inconsistent; even the most basic of naming conventions are not followed anywhere. The fact that the standard library jumps between camel case, pascal case, snake case and unicase at random is a perfect example. The list of absurdities is beyond ridiculous[1].
Who on this thread was saying what's old is new? 100% of what you said has also been lobbed at PHP recently, and I've heard similar complaints about Perl and MS-SQL (that I remember well), and I'm sure others (one of those Delphi products too).
What's been strange to me though is I've heard JS advocates lobbing those criticisms at PHP, making the case for, say, why 'Node is awesome, PHP sucks'. Conflating a framework vs a language, then pointing out PHP 'issues' that also exist in JS... there's generally little point in trying to engage/correct at that point (context: primarily conference hallway conversations and meetup groups back when those actually happened).
JS is the new PHP. Part of the problem with massive popularity is it attracts also the lower ability devs and the ecosystem slowly degrades due to this. This cascades.
Glad I'm not the only one seeing that parallel. I'd be hesitant to use this for that reason, but maybe that's bias on my part? Just seems like you'd get stuck in a similar mess of "special" updatepanels aka hotwire frames that are trying to "save you from having to write javascript". Except it still uses javascript under the covers, so you still have whatever issues that may entail, only now it's more removed from the developer to be able to solve.
Interesting bit of history (and yes, I see the /s): XMLHttpRequest was actually invented by Microsoft in Internet Explorer because the Outlook team needed better responsiveness for the web email client.
Is this that web framework from Microsoft that hid the transaction-orientedness of HTTP from you by letting you set server-side click listeners on buttons and generated all the code needed to glue it all together? At the time, I didn't feel good about it because it abstracted away too much, and required Windows on the server. Little did I know about all the ways people would start abusing JS in 10 years.
Turbo uses Ajax for most of its functionality too. The WebSockets bit is an optional add-on if you want to push updates from the server e.g. for notifications.
I get the feeling it will draw a bit of criticism from this crowd. IMO it’s fine to experiment with tech like this, but I have to wonder where sending HTML fragments over Websockets instead of HTTP falls apart.
Curious to hear about the success/horror stories in a few months/years from any adopters :)
That's one of the reasons SPAs aren't going anywhere: trying to handle all the user microinteractions on the server is great when you live next to it, but it falls apart when your users are spread across the world, especially if they're not in the western-ish world with fast internet connections.
JS interacting locally can make a web page be perceived as faster, even if there's no meaningful content yet. Humans enjoy working with systems that do _something_ as soon as they interact with them, and a placeholder/spinner/animation is that exact something. Of course, you don't want any placeholders or animations to go on for too long, but it's good to know that the button you just pressed actually did something and the screen hasn't hung.
Minimal JS pages often have very little direct interaction built in, so they feel blazing fast when the connection is good and the payload is small (HN is a great example) but terribly slow if your internet connection is bad and it's not masking the network delays (again, HN is a great example).
The 2-5s mentioned on Twitter are something else though, perhaps the Hey app is being transferred through a saturated connection in the US or something.
Well, a spinner can be added even into non-SPA applications. So I don’t see how that is an argument for SPA.
I presume you can lie to user by pretending that their change was instantly submitted, while syncing in background. In that case, yes, SPA all the way.
Optimistic updates are great for inconsequential stuff like an HN upvote. The user likely doesn't even want to see an error message if their upvote timed out.
But it definitely is "lying to the user", or better phrased "breaking user expectations". It's just that the user isn't likely to care except for important actions.
For example, imagine applying optimistic update to sending an email. The user would expect to be able to close their laptop after sending an email and seeing the UI transition. It would be catastrophic if the email was never actually sent.
Optimistic updates are best relegated to micro-polish imo. Frankly, I think it's overused especially in combination with autosave UI.
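For something inconsequential like an upvote, the pattern is roughly this (names and endpoint are made up): flip the UI immediately and quietly roll back only if the request fails.

```js
// Optimistic update: assume success, reconcile with the server afterwards.
async function upvote(button, postId) {
  const count = button.querySelector(".count");
  count.textContent = Number(count.textContent) + 1; // optimistic bump
  try {
    const res = await fetch(`/posts/${postId}/upvote`, { method: "POST" });
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
  } catch (err) {
    count.textContent = Number(count.textContent) - 1; // roll back on failure
  }
}
```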
Gmail does this all the time, and it makes me mad. I just sort out some mails to have my empty inbox, and then close the browser tab. And it says that the changes haven't been saved, so I need to deal with the popup, and then close the browser tab again...
An app with longer usage leans towards being an SPA. Quick or sporadic stuff shouldn't be an SPA.
The problem is always going all in: SPA all the things, or do everything in the backend like it's 2005.
It’s not the entire HTML document, it’s piecemeal. As for sizing equivalent data in HTML vs js, I remember reading an investigation that XML and JSON are generally closer than you would expect and that for some data structures, XML is terser.
It's not XML though, it's HTML. HTML with additional nodes for layout, class names, etc.
If you're not invalidating the cached code multiple times per day (which, in fairness, a lot of the stuff from Google, Facebook, etc probably are), an SPA should use less data over time than a comparable server rendered app. The initial outlay is bigger, but once it's cached you only need the raw data as you navigate around.
The Hey approach needs servers in at least 2 regions to cover 90% of worldwide usage. I guess they only have servers in the U.S., so some users may find it lacking. I'd say having SFO3, FRA and NRT is pretty much enough for a global customer base.
I've used Hey daily since it launched on desktop and mobile and I can't see it being slow in any way. I guess you could count that it shows a loading symbol for a file to appear (on the first load only).. but that would be excessive nitpicking.
It never performed slowly for me. It ended up having none of the features I wanted, but that’s a different concern altogether. Definitely rode the hype train and feel dumb about it now haha
It is.
But he also wrote this in a subsequent tweet:
> For a typically long running app like an email client, the obsession with shipping as little JavaScript as possible is actually detrimental to the end UX.
Hey probably is the worst example for this approach, even though it works and it's impressive what they were able to achieve. It should be much better for most websites out there that aren't trying to be a low-latency app you run 24 hours a day.
Nope. And I will tell you why. To read my email I need the app to load. Gmail did not load for me in several countries and takes forever on train wifi, for example.
I never leave an email tab open, but even if I did, I first need it to load...
It's great when people do use an automatic fallback, and we've had reliable methods for that since the 1990s. I've seen WebSockets webapps where they don't bother with a fallback, though.
I'll just take a break for 3 years until SPAs become cool again. But they won't be called SPAs anymore... they'll be called "microapps". Elixir and Rust will be "just not fast enough" and there will be a new language VM that will "promise Hyperspeed™ on Google Crystalline™ chips". JSON will be considered harmful, and XML will be in favor again because it is flexible and there's plenty of bandwidth for all the extra bytes.
This isn't new at all. This is version 7 of Turbolinks, renamed to Turbo, with some additions. It is what Ruby on Rails and its creator have promoted for years and years.
The new things about it are the name, the bundling with Stimulus, and some added features, not the general concept.
Oh wow that is not clear at all. The old website of StimulusJS doesn’t redirect to its new home, and the old GitHub repo for turbolinks (not turbolinks-classic) still exists.
I had to look for it too. It isn’t clear and should’ve been. I’d like to know it is an enhanced renamed version of something I know rather than dismissing it because I can’t find time to learn something new.
I wonder how recruiting companies will continue selling frontend react rockstars and backend nodejs warriors who can write endpoints. And yes, that's how they differentiate between frontend and backend, because knowing how to format your json urls (which is not real REST) is backend work and react is frontend.
> How does Hotwire compare to Phoenix LiveView? It seems the same to me.
It's much different based on a preliminary reading of Hotwire's docs.
Live View uses websockets for everything. If you want to update a tiny text label in some HTML, it uses websockets to push the diff of the content that changed. However you could use LV in a way that replaces Hotwire Turbo Drive, which is aimed at page transitions, such as going from a blog index page to contact form. This way you get the benefits of not having to re-parse the <head> along with all of your CSS / JS. However LV will send those massive diffs over websockets.
Hotwire Turbo Drive replaces Turbolinks 5, and it uses HTTP to transfer the content. It also has new functionality (Hotwire Turbo Frames) to do partial page updates too, instead of swapping the whole body like Turbolinks 5 used to do. Websockets are only used when you want to broadcast the changes to everyone connected, and that's where Hotwire Turbo Streams comes in.
IMO that is a much better approach than Live View, because now only websockets get used for broadcast-like actions instead of using it to render your entire page of content if you're using LV to handle page transitions. IMO the trade off of throwing away everything we know and can leverage from HTTP to "websocket all the things" isn't one worth making. Websockets should be used when they need to, which is exactly what Hotwire does.
I could be wrong of course but after reading the docs I'm about 95% sure that is an accurate assessment. If I'm wrong please correct me!
> Fwiw, you can use long polling for LiveView if you wanted.
How does that work for page transitions? The docs don't mention anything about this or how to configure it.
With Turbolinks or Hotwire Turbo Drive, the user clicks the link to initiate a page transition and then the body of the page is swapped with the new content being served over HTTP. With Turbo Frames the same thing happens except it's only a designated area of the page. In both cases there's no need to poll because the user controls the manual trigger of that event.
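To be clear about the general shape of that pattern (this is a sketch of the idea, not Turbo's actual code): intercept the click, fetch the target page over plain HTTP, and swap the body without touching the already-parsed `<head>`.

```js
// Sketch of link interception with a body swap; "data-enhance" is a
// hypothetical opt-in attribute, not something Turbo defines.
document.addEventListener("click", async (event) => {
  if (!(event.target instanceof Element)) return;
  const link = event.target.closest("a[data-enhance]");
  if (!link) return;
  event.preventDefault();
  const html = await fetch(link.href).then((r) => r.text());
  const doc = new DOMParser().parseFromString(html, "text/html");
  document.body.replaceWith(doc.body); // keep <head>, CSS and JS alive
  history.pushState({}, "", link.href);
});
```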
How would LV do the same thing over HTTP? Everything about LV in the docs mentions it's pretty much all-in with websockets.
Then there's progressive enhancement too as another difference. Turbo is set up out of the box to use controllers which means you really only need to add a tiny amount of code (2-3 lines) to handle the enhanced experience alongside your non-enhanced experience. For example you could have a destroy action remove an item from the dom when enhanced using Turbo Stream or just redirect back to the index page (or whatever) for the non-enhanced version.
But with LV wouldn't you need to create both a LV and a regular controller? That's a huge amount of code duplication.
Although to be fair I would imagine most apps would require JavaScript to function so that one is kind of a non-issue for most apps, but it's still more true to the web to support progressive enhancement and the easier you can do this the better.
It uses long polling over http. To be clear it's not restful http, but it's not websockets. I believe that Chris doesn't believe it's important for most people so there are no directions right now. Could be wrong there, I'm not Chris.
Page changes are still initiated by the client in LiveView (although can be server initiated)
LiveView is just channels under the hood. Once you consider that, long polling may seem more obvious
Since LiveView is built on phoenix channels, it's the same story. Simply pass the `transport: LongPoll` option to the LiveSocket constructor and you're now using long polling with LV :)
> LiveView is just channels under the hood. Once you consider that, long polling may seem more obvious
It's not obvious to me for user invoked page transitions because when I think of long polling, I think of an automated time based mechanism that's responsible for making the request, not the user. But a page transition is invoked by the user at an undetermined amount of time / interval (it might happen 2 seconds after they load the page or 10 minutes).
Your idea of "long polling" sounds more like periodic polling (repeated requests within a frequency), though that's not what long polling is or how it works.
> Your idea of "long polling" sounds more like periodic polling (repeated requests within a frequency), though that's not what long polling is or how it works.
Right, isn't long polling keeping the connection open for every client on the server, with the server doing interval-based loops in the background until it gets a request from the client or times out?
It wouldn't be doing a separate HTTP request every N seconds with setInterval like the "other" type of polling, but it's still doing quite a bit of work on the server.
In either case, LV's long polling is much different than keeping no connection open, no state on the server and only intercepting link clicks as they happen for Turbo Drive and Turbo Frames.
I don't think that's necessarily a big deal with Elixir (I wouldn't not pick it due to this alone, etc.), but this thread is more about the differences between Hotwire and LV, and no state being saved on the server for each connection is a pretty big difference for the Drive and Frames aspects of Turbo.
> But with LV wouldn't you need to create both a LV and a regular controller? That's a huge amount of code duplication.
You just do LiveView instead of a regular controller. No duplication.
When you request a page, it is rendered on the server and all of the HTML is returned over HTTP as usual.
After the client has received the HTML updates, live updates can go over a websocket. For instance you start typing in a search field, this is sent to the server over websockets. Then the server might have a template for that page that adds search suggestions in a list under that search field. The server basically automatically figures out how the page should be rendered on the server side with the suggestions showing. By "re-rendering" the template with changed data used with the server side template. Then it sends back a diff to the client over websockets. The diff adds/changes the search suggestions to the page. The diff is very small and it's all very fast.
> You just do LiveView instead of a regular controller. No duplication.
Yes but this is only in the happy case when the client is fully enhanced no?
What happens if you hook up a phx click event to increment a like counter.
After the page is loaded, if you click the + to increment it while having JavaScript disabled it's not going to do anything right?
But with Hotwire Turbo, if you have a degraded client with no JS, clicking the + would result in a full page reload and the increment would still happen. That's progressive enhancement. It works because it's a regular link and if no JS intercepts it, it continues through as a normal HTTP request.
I've found that the vast majority of clicky stuff I do leads to a URL change anyways, and these are just proper links to the new URL that LV then intercepts.
In your Counter example, it's true that for the 'degraded' version to work, the link would have to be a proper link and not a phx-click. But in the (IMO very unlikely) case where this fallback is necessary, solving it with a proper link/route does not require duplication, just a different approach.
What you would do is create a LiveView that handles both the initial page and the 'increment' route. If LV is 'on', it intercepts the click and patches the page. if LV is 'off', your browser would request the 'increment' route, and the same LV would handle all this server-side and still display the new, incremented counter.
The LV is both the server-side controller /and/ the client-side logic. That's part of what makes it so appealing, but, admittedly, also something that can take a while to wrap your head around.
I've more than once reflexively gone for phx-click solutions where the LV would receive the event and 'do' something, only to later realize that it would be much better to use a proper url/routing solution (where LV is still the 'controller'). In hindsight it's often a case of treating LiveView too much like just 'React on the server', basically.
Yes, the `phx-click` doesn't automatically get translated to a link or form submission. You can still design the page to work without javascript. For instance by having a "+" button be a normal form or link and then have phx-click intercept it when javascript is enabled. This can be done with one LiveView module without having to also have a separate regular controller.
One way to do it would be to in the `mount` function handle normal non-javascript params being sent and a `handle_event` function handle `phx-click`.
I don't know if there is already a way to have `phx-click` with fallback to HTTP in a less "manual" way. It should be possible to make.
I've just been getting into it, and am completely loving it, especially the elixir part. It feels like the whole OTP/erlang part of it (basically single codebase microservices and patterns that come with it) has proper engineering and principles behind it, and it's something I've been missing for a long time in our profession
Off topic from the main thread, but Travel map looks really good. Reminds me of all the travel blogs I used to read when I was in high school / college dreaming about getting out and exploring the world. Great work!
This kind-of tangential reply is in bad taste, but DHH is currently being an asshole about this very topic on Twitter, so:
This seems cool, but the progress that HEY has made since launch isn't very impressive if that's the flagship example.
HEY's search UX, which uses these capabilities, has been abysmal since the day it launched. It's a much better experience if you disable JavaScript/Hotwire and fall back to the server-side-rendering instead of the hybrid mess they push down. I'm very disappointed that this hasn't been improved. (I sent feedback about this while they were still in their invite-only phase.)
We were also promised custom domain support by the end of the year, and we don't have a timeline or even pricing yet. (That's obviously backend and unrelated to this announcement.) I don't want to go back to Google, but the lack of improvement isn't giving me a lot of confidence as a HEY customer, so I'm considering alternatives.
I'd be curious to hear about your general experience w/ hey - from the outside it looks like it has some cool new(er) approaches, but I've seen someone's actual full email account in a screencast once and it looked like it'd get messy with actual heavy usage really quickly and the UX might deteriorate. Any takes?
Not OP, but I have a Hey account and I really enjoy it. The screener feature is worth the price alone IMO. The biggest challenge for me was having to re-conceptualize email after switching over since Hey doesn't use traditional email terminology (i.e. "paper trail" vs "archive").
The amount of inbound email I get has dropped immensely due to the screener and my inbox ("imbox") is so much easier to manage with Hey, IMO.
I find HEY's general workflow very good for my medium-volume personal email. It encouraged me to filter a lot more stuff out of my inbox than I did with Gmail, while spending less time configuring filters. The ability to leave internal comments on external email threads (business accounts only) is very useful as a first-class feature, instead of relying on fiddly BCC. The apps are quite nice to use. Overall, there's less clutter and it is a more pleasant experience than other email clients I've used... until I try to search in the Hotwire-powered web view.
This is the Ruby on Rails version of what Elixir Phoenix Live View and .NET Blazor do.
For those not familiar, rather than using a standard web framework where a lot of processing is done client side, these frameworks allow html buttons etc to call native Ruby, C#, or Elixir functions on the server rather than using some sort of post/get request to do that. Every UI interaction goes over the wire, which is where performance can be hurt.
Having used Blazor in production for nearly a year, it speeds up development, but your server is doing a lot more processing and will not be able to handle an equivalent number of users that a normal web app would, because it's keeping a copy of the client's DOM in memory. It updates the DOM on the server, then streams that update through websockets to the client's DOM, creating your magic Single Page App experience.
I think it’s cool tech, but not necessarily scalable due to the increased server load. I’d be interested to hear from people using it to serve large orders of magnitude of clients at once.
Agreed. My comment about scaling is simply that a lot of other companies use rails at scale, and much like hey.com (which I presume is heavily used) it is probably built to... scale
Well, Rails was never not scalable in the literal sense. It was just that the cost of scaling was comparatively expensive, which is what people mean when they say Rails can't scale. And for most if not all SaaS that should never be much of a problem, because you are getting revenue per user, and generally speaking the cost of user / app / server resources is such a small percentage that it is a rounding error in the grand scheme of things.
What doesn't work quite as well is when the app is operating a freemium / ad based model, where a large volume of traffic is required before you start generating revenue.
It depends, right? If your app is a dashboard this could be less work when using smart caching. You could cache entire html fragments on the server for specific user-defined filter values. If you are serving hundreds of users you'd have a dashboard faster than most BI tools out there, including PowerBI.
I get the rush to provide technical comparisons to something that was just revealed five minutes ago, but none of what you just said is actually how Hotwire or Turbo works. There's no client DOM in memory on the server, there's no html buttons that call native code.
There are forms being submitted, there are normal requests happening, there are templates being rendered on a per-request basis (just like a full page load).
This is like a normal web application that renders HTML, from the perspective of how the server works. Just as scalable as every other Rails application that renders HTML. Be that GitHub or Shopify or Zendesk or Basecamp or HEY or any of the many, many other apps that have long ago definitively proven that Rails Scales.
Although I do find it telling that the current #1 comment on the HN thread is literally a "bUt DoES iT SCaLE??" take, based on a misunderstanding of how this works. All is indeed as it's always been.
Loving the tech and the website (and Rails, and Basecamp, etc). However it looks very similar to StimulusReflex (https://stimulusreflex.com), which has already attracted a few users.
Can you highlight the biggest differences between the two?
My first thought was that some dude was ripping off Livewire and Turbolinks, because of the strange centered text (no professional web designer would do that), until I saw that these are renamed evolutions of those technologies from Basecamp. Cool!
Gotta give @dhh credit - he is one of the world's great marketers/promoters. He's been building hype for this as the "next Rails" for months and the man knows how to pique an audience's interest.
I've never seen one of these "logic in html-attributes" systems take error checking seriously. In stimulus they start to mention it in "Designing For Resilience" (though only for feature-checking), but in "Working With External Resources", where it makes network/IO-bound calls, they never mention how to handle errors or whether the framework just leaves it up to you. Stimulus is also where you need to write your own js code, so I guess you could handle it yourself, but when I skimmed the turbo handbook I found no mention of what errors to handle or how (or even what happens when turbo gets one), and when loading stuff over the network that is pretty much crucial.
From the turbo handbook: "An application visit always issues a network request. When the response arrives, Turbo Drive renders its HTML and completes the visit." Using the phrase "When the response arrives" raises the question of what happens if it doesn't arrive, or if it takes a minute to arrive, or if it arrives with a faulty status code.
A very good point! Presumably the appeal of a system like this is the potential for graceful degradation where if sockets aren’t working or some requests are failing then the default html behavior should still work: links will just take you to the original destination, but there’s no indication that this is actually what happens.
This is an isomorphic fetch. The original href already is the visited URL, so I'm not sure that trying that again is wise, or appropriate, unless the user chooses to reload.
The entire design philosophy here is to mimic apparent browser behaviour, or to delegate to it. Hence, to GP's question; you should expect the appearance of browser-like behaviour in any circumstance, modulo anything Turbo is specifically trying to do different. Deviation from baseline browser semantics was certainly a basis for filing bugs in its predecessor (Turbolinks).
As for what Turbo actually does, I checked the source. Good news, even for a first beta, they're not the cowboy nitwits alleged; it gracefully handles & distinguishes between broken visits and error-coded but otherwise normal content responses, and the state machine has a full set of hooks, incl. back to other JS/workers, busy-state element selectors, and the handy CSS progress bar carries over from Turbolinks.
I've used intercooler with browser-side routing, and the strategy for error recovery that makes sense in that context is "if something goes wrong, reload the page": the server is designed to be able to render the whole page or arbitrary subsets, so reloading should usually be safe.
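In code that strategy is tiny; something along these lines (a sketch, not intercooler's actual implementation):

```js
// Try to load an HTML fragment into part of the page; on any failure,
// fall back to a normal full-page navigation and let the server re-render everything.
async function loadFragment(url, target) {
  try {
    const res = await fetch(url, { headers: { Accept: "text/html" } });
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    target.innerHTML = await res.text();
  } catch (err) {
    window.location.assign(url); // "if something goes wrong, reload the page"
  }
}
```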
...also, what about when response/request "items" are not handled in order due to load? I once wrote a real-time application with that pattern (HTML over AJAX). It worked, but it was not enjoyable at all. Also, literally every larger feature change would break the code because of all these weird corner cases.
Counterpoint: is there any error handling in the majority of SPAs today? From my experience, SPAs can crap out in all kinds of interesting ways when the underlying network connection is flaky and I often end up stuck on some kind of spinner that will never complete (nor give me a way to abort & retry the operation when I already know it won't complete and don't want to wait for the ~30-second timeout, if there is a timeout even).
Not saying this is better from an error handling perspective, but at least the whole idea of Hotwire and its peers (Turbolinks, etc) is that there is no state and it should thus be safer and quicker to reload the page should things go wrong.
Also the app looking fine immediately after refresh (when it's been server-side rendered), then crashes a second later when the JS framework hydrates the HTML and hits a client-side bug.
I agree that most SPA apps do it badly too, but hiding the opportunity to do it well certainly does not help.
> there is no state and it should thus be safer and quicker to reload the page should things go wrong.
That's not exactly true, since there are non-idempotent HTTP methods. While the browser will prompt you to confirm resending a non-idempotent HTTP request when refreshing after a normal form POST, I don't think that turbo/turbolinks/similar will prompt or resend for you.
On refresh should turbo retry a POST? The "right way" is to keep the state of the last POST and prompt the user for confirmation, but it seems like it is undocumented as to what it does. I'm guessing it either does not retry or it retries and hopes effect will be idempotent.
No one (SPAs, traditional webpages and "spiced" webpages like this included) is doing everything right, but my objection to this framework is that it seems to try to say things are simple or easy when they clearly aren't.
I believe the only place you'd use a POST with Turbolinks is in response to an explicit user action like pressing a button. In this case, if it fails, you'd refresh the root page (which embeds the button) at which point the state of that page would reflect whatever the server has, so it would display the new data or may not even have the button anymore if the initial POST actually did make it to the server.
You're correct in that the only standards-based way to retain a POST in the session history is to not disturb an existing entry. However:
> it seems to try to say things are simple or easy
That's an unfair mis-characterisation. The developers are not pitching a universal panacea that solves all your problems and handles every edge case. They are offering an architecture that simplifies many common scenarios, and one that is thoroughly developer-friendly when it comes to supplying observability and integration hooks for edge cases.
For this latter purpose it merely remains to bother with reading the (clean & elegant) source code to enlighten oneself.
> it seems like it is undocumented
On the contrary, the behavior w.r.t full-page replacement on non-idempotent verbs is extensively discussed in the Turbolinks repo.
The "Turbo Drive" component appears to me as essentially unchanged behaviour in Turbo 7.0.0beta1 from Turbolinks versions 5.x. Turbolinks was introduced in 2013, has many years of pedigree and online discussion, and is well understood by a large developer community. Turbolinks was always maintained, even being ported to TypeScript (from the now venerable CoffeeScript) ca. two years ago with no change in behaviour. Turbo Drive is, practically, just a slightly refactored rebrand of the TypeScript port.
The stuff everyone is so excited about are Turbo Frames and Turbo Streams. These are new, and may be used without adopting Turbo Drive: as with practically everything from Basecamp, the toolkit is omakase with substitutions. They are, nevertheless, complementary, so you get all three steak knives in one box.
Which I find frustrating because literally the only reason I find compelling for making an SPA in the first place is to deal with flaky networking situations.
If I know the network is always there, why bother.
Facebook does this constantly for me. It's a crapshoot whether I'll be able to open notifications or messages without a couple of refreshes, or if I'll just get the fake empty circle loading UI indefinitely until I hit F5.
I was trying to quit Reddit for years and their intentional breaking of mobile web (as well as making email address required even for already existing accounts) is what finally enabled me to.
Of course now I just go on Hacker News and Twitter instead.
in general, the right approach in HTML-oriented, declarative libraries appears to be triggering error events and allowing the client to handle them, since it is too hard to generalize what they would want
I thought it was a joke and completely ignored it when I first saw the headline. Only later, when someone else referred to DHH's Hotwire, did it click for me that this is the New Magic thing.
I am still wondering if there is any benchmark presenting some objective facts.
Is this the "new magic" that DHH teased on Twitter, or are we still waiting to see what that is? Also, should we expect this to be included in a future version of Rails?
My first impression is that the API seems a little convoluted, but that might just be me. In an ideal world, I'd love to just build components (similar to React components) on the server side, and let the framework intelligently handle synchronizing state over the network (Phoenix LiveView seems closer to that), but this feels like it involves a lot more framework-specific logic (and markup) with special cases - stuff that's likely to change as the framework evolves, so I'm not sure how I feel about it.
I guess it makes sense in terms of making this easier to add to an existing rails app incrementally (since it's more-or-less "opt-in" for each model, view, and controller), but if I'm building a new project with this and want it on everything, it feels like it will necessitate a lot of code duplication. Either that, or I use it judiciously only when absolutely needed (and use traditional rails behavior for everything else), but that feels like a mess of two very different approaches bundled together in one codebase.
I use Turbolinks for some Django projects, so took a look at how much work it would be to port these over to Hotwire/Turbo. Very little in the way of docs to go on right now and what little there is in the way of examples is buried in Rails gems. Should be mostly straightforward, with a thin Django middleware to check headers and the like. I guess The Turbo Stream stuff could be adapted to Django channels quite easily. I'll wait and see how and when they improve the documentation, especially on the Rails-agnostic front.
Assuming that the counter state is on the client. A button with an event listener that runs x.innerHTML = state.counter++ makes a lot more sense than a round trip to the server.
Just looked, and Turbo is 135kb. I rewrote this website (http://40.115.126.159/) with my own JS framework and it comes in at 15kb; that's 9x smaller than Turbo...
In that case you could avoid making the round trip completely by just adding some native JS. There's nothing in Stimulus or Hotwire that prevents that.
I think they actually mention adding Stimulus controllers in the demo video for those use cases.
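e.g. a purely client-side counter is a handful of lines of plain JS, no framework and no round trip (element ids are made up):

```js
// Client-only counter: the state lives in the page, nothing goes to the server.
const state = { counter: 0 };
const output = document.querySelector("#counter");
document.querySelector("#increment").addEventListener("click", () => {
  state.counter++;
  output.textContent = String(state.counter);
});
```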
A while ago, way before SPAs were cool and even before AngularJS was around, I had done something sort of like this, based on Ajax calls though, so the underlying tech was definitely different.
While this is truly clever and being based on sockets does seem to simplify communications between client and server, I'm not fully convinced.
Currently I see client side rendering as a way to delegate some computing to the clients. Which is a nice way to lower operating costs.
I can see the benefit of this and the reasoning behind it. SPAs still need a server (serverless still implies server-side coding), so as long as you are executing code on the client you need two codebases: one for the client, one for the server.
With this solution it would seem that it's possible to serve a web app with only one codebase, so that's definitely a plus. But then performance and costs do preoccupy me a little.
Perhaps the ideal scenario for this tech would be for (enterprise) apps to be deployed on proprietary servers, where scale is a minor concern.
The one performance concern on distributed systems is on data coherence. You optimize it, and treat anything else as relatively free.
Rendering (or anything else that you can choose to do on the server or the client) has no impact on data coherence. Thus, as optimization goes, it's something you care about less than nearly anything else.
By the way, I have seen my share of really horrendous code through my life. Only twice have I seen something other than data coherence be the bottleneck. Both times were really stupid bugs.
If you're used to SPA-style frontend development with React, Vue, or any similar framework, then yes, this will probably be a big mental/development shift.
Turbolinks (the project at Basecamp that Hotwire grew out of, now it seems it's called Turbo Drive) was a way to bring single-page style load times to traditional, server-rendered apps. Hotwire is the evolution of that: Turbo Frames let you dynamically replace certain parts of the page on the frontend, rather than having to throw the baby out with the bathwater on every page action. If you're used to developing server-rendered applications without much JavaScript using Ruby on Rails, Django, or a similar "batteries included" backend framework, then you'll be able to add a more dynamic feel to your web app without much of a mental shift: certain template partials (basically the components of your frontend) will be wrapped in this `turbo-frame` HTML tag, which will be slotted in to your page dynamically by Turbo.
I agree. This seems complex, but I am NOT a Rails developer. The least complex stack I can find is Svelte (with Sapper) on the frontend calling out to an API made in Spring (Java).
That lets me have quick development on the frontend, server-side rendering for a fast first page load, and use of a typed language for the backend. The client also isn't overburdened with a large JS framework.
More complicated than having to deal with auth, routing, validation, xhr, application state, business logic in an SPA? And write your endpoints again with another round of routing, auth, validation, business logic. And then integrating everything.
I've used Turbolinks/Stimulus with Django. I can't speak to the new Hotwire stuff, other than the relatively minor Stimulus 2 changes.
Honestly it's pretty straightforward and the API is tiny. It's great for that grey area between serving server-loaded static pages and a full-blown SPA - for example CRUD apps (and lots of things are pretty much CRUD apps). Turbolinks gives you a "poor man's SPA" for little effort (albeit with some gotchas) and Stimulus is fine, although it gets tedious when you have to work with a lot of DOM elements, e.g. "when I click the Subscribe button, change it to an Unsubscribe button". Maybe a less imperative framework like AlpineJS might be simpler.
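For the curious, that Subscribe/Unsubscribe example is roughly this much Stimulus. A minimal sketch using Stimulus 2 conventions; the `/subscriptions` endpoint and the data attributes are illustrative:

```js
// subscribe_controller.js
import { Controller } from "stimulus"

export default class extends Controller {
  static targets = ["button"]

  async toggle() {
    const subscribed = this.buttonTarget.dataset.subscribed === "true"
    // Hypothetical endpoint; the server persists the change.
    await fetch("/subscriptions", { method: subscribed ? "DELETE" : "POST" })
    this.buttonTarget.dataset.subscribed = String(!subscribed)
    this.buttonTarget.textContent = subscribed ? "Subscribe" : "Unsubscribe"
  }
}

// Markup (illustrative):
// <div data-controller="subscribe">
//   <button data-subscribe-target="button" data-action="subscribe#toggle"
//           data-subscribed="false">Subscribe</button>
// </div>
```

It works fine, but as the parent comment says, you end up writing a fair amount of this kind of imperative DOM glue.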
Why wouldn't it? It's got like 2 pictures and isn't long. I also wouldn't call it wicked fast - this is normal speed, which everyone seems to have forgotten
Sorry, this is right. You have all forgotten about how fast the normal web is or are too young to have experienced it. Sub 100ms is normal. 1000ms is not normal.
49kB, and for the most part it's not executing, so what's your point? There is no reason that shouldn't be fast, regardless
Not trying to stray too far from my original point: loading a small page with not a lot going on should be fast. I'm not knocking Hey.com, I am just not going to pass out accolades for making a properly usable site!
Well you've still not explained how it's somehow faster or smaller using this technology. Simply saying look at size is meaningless, how much is actually being shaved off by using hotwire?
Spoiler: it's just Ajax, but it pushes the data through your templates before sending it to the client. We were doing this literally over a decade ago in the early days of XHR.
Look at any of Wicket 1.4's Ajax examples (bonus if you use a better JVM language to reduce the boilerplate - I was using Scala). It's a great technique, it works great, I'm just slightly salty that the industry felt the need to switch to JS SPAs for, as far as I could see, very little tangible advantage.
I have a small website and I have one page with an order form and I show a modal with the result of the order. This is how I do it. The Ajax calls gets back HTML that it shoves into the modal. It was just easier than writing JS with templates and parsing JSON. It always felt icky to me because that's not the way you're "supposed" to do it, but it works quite well.
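The pattern the parent describes is roughly this; a minimal sketch where the form, the modal selectors, and the `/orders` endpoint are all illustrative:

```js
// Submit the form via fetch; the server renders the result through its own
// templates and returns an HTML fragment, so the client just injects it.
document.querySelector("#order-form").addEventListener("submit", async (event) => {
  event.preventDefault();
  const response = await fetch("/orders", {
    method: "POST",
    body: new FormData(event.target),
  });
  document.querySelector("#order-modal .content").innerHTML = await response.text();
  document.querySelector("#order-modal").classList.add("open"); // show the modal
});
```

No JSON parsing, no client-side templates -- just HTML over the wire, exactly as described above.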
A client of mine has an entire internal Line of Business application built on that mechanic: backend returns javascript to shove HTML somewhere. The trick is that the js is built automatically based on components. Devs rarely have to touch js.
It's lightweight on the frontend and a pleasure to develop for since forms are entirely coded on the backend with components. On rare exceptions more complex pages have some sprinkled lodash.js.
They have around 400 CRUD pages with complex business validations. Up to 1.5k concurrent users. At that point MySQL starts to sweat a bit and p95 increases above 200ms. All running on 2 machines: a nginx+phpfpm and a MariaDB.
The instant no-compilation feedback loop of edit-save-refresh is orgasmic. That system opened my eyes to a lot of preconceptions and buzzwords I once held sacred.
'HTML over the wire' is an unfortunate tagline, but it does actually seem interesting, I'd suggest looking at the 'Turbo' docs before reacting.
I have thought in the past I wish I could have a frameworky component-style frontend, but where the component is HTML rendered by my rust (or whatever) backend. This would seem to get me a lot closer to that.
Getting turbo is trivial on any backend. Just include the js.
Stimulus is also not any different from any other javascript you'll write. There's no need for any kind of scaffolding from the backend stack except for some features - and those aren't particularly hard to implement either.
Source: Using Turbolinks & Stimulus js with Django.
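To illustrate how little "just include the js" means in practice, a minimal sketch assuming the `@hotwired/turbo` npm package is bundled or served as an ES module:

```js
// Importing the library activates Turbo Drive, which intercepts same-origin
// link clicks and form submissions; <turbo-frame>/<turbo-stream> elements in
// the server's HTML are picked up automatically, no backend scaffolding needed.
import "@hotwired/turbo"
```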
Thanks, I need to read up more on it. When I look at the rails code, I see a lot of nice convenience capabilities in the template definitions and rendering.
The downside of this is very high coupling between the frontend and the backend. However, many companies have been successful with strong coupling to Rails so maybe it's not a problem.
Between what? Whatever serves the html/css/js is your frontend. You can have another background service behind it, with as many layers as you want. This separation doesn't have to be at the HTTP level.
Let's say you take another approach. Build a backend that serves REST, GraphQL or whatever protocol. This can be a Rails backend, could be Python, JS, whatever. Now you or the frontend team are free to build the frontend with whatever technology.
You can update it more freely to shining new JS frameworks because it is not entirely tied to your backend.
Also, if your client/boss asks you to build a native iOS frontend, you already have built an API, useful no?
> You can update it more freely to shining new JS frameworks because it is not entirely tied to your backend.
With proper architecture, you would have stores & services as the main boundary of your app. Then you can easily consume them in your server-side rendered pages or JSON API endpoints, which are then just thin wrappers.
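A minimal sketch of that "thin wrappers" idea, using Express purely for illustration (the service, routes, and template helper are hypothetical):

```js
const express = require("express");
const app = express();

// Shared boundary: business logic lives in the service, not in the endpoints.
const orderService = {
  async list() { return [{ id: 1, total: 42 }]; },
};

// Trivial HTML "template" for the server-rendered page.
const renderOrdersPage = (orders) =>
  `<ul>${orders.map((o) => `<li>Order #${o.id}: $${o.total}</li>`).join("")}</ul>`;

// Both the server-rendered page and the JSON API are thin wrappers over the same service.
app.get("/orders", async (req, res) => res.send(renderOrdersPage(await orderService.list())));
app.get("/api/orders", async (req, res) => res.json(await orderService.list()));

app.listen(3000);
```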
>Also, if your client/boss asks you to build a native iOS frontend, you already have built an API, useful no?
My client wanted email forms to improve engagement. And it was not a theoretical request.
In many one-off applications there's no real need to create a separate API, tight coupling lets you focus on an MVP without having to focus on abstracting out an intermediate model -- you could go from the SQL database right to HTML rendering.
If you have a frontend that is entirely dependent on data from the backend to display something to the user, then you've already got high coupling between the two.
You probably wouldn't use this for a drawing app. But if you've got something where you're showing/updating database data, this model is likely fine.
I love that these new techniques come out to make web dev simple again. I however think Phoenix Liveview is still the best from all the alternatives I have seen so far and that is mainly because Elixir as a language seems to be very stable and suited for having a lot of websocket connections open at the same time.
It seems easier to accomplish this with Liveview and perhaps Alpine.js for displaying stuff like modals etc. It's a really good way to build applications I think for places that need a server roundtrip anyway which is most of the stuff in a modern app.
I write an SPA app for work in JavaScript, and about 90% of the issues that occur are state issues, where the state differs or ends up in a bad state for some reason. State management is very hard to do right and I think solutions like these simplify it a lot!
I urge people to check out Elixir and Phoenix LiveView, which I think is the "original gangsta" when it comes to sending small updates over websockets, and if you try it I guarantee you will be blown away by the speed at which you can develop web apps as a solo dev.
I've been using Phoenix LiveView and I love it. I'm much more comfortable with Erlang syntax, though, and wish there was still a viable Erlang web framework.
tldr content marketing for Hey (email web app), carries DHH signature + cargo cult + magic, many social media bots/DHH fanboys upvoting comments + a second submission + silencing other opinions
Edit: As I predicted, I've got my downvotes by DHH's social media team haha.
amazing. This used to be called /cgi-bin/ with templates and XMLHttpRequest (a Microsoft invention initially supported only in IE then adopted by everyone) and now it's suddenly a new technology/approach but with a new name so you're not ridiculed for using it?
I gotta give them credit for revisiting server-side page assembling though; moving application logic and templating mostly to the front end was a HUGE mistake. JS is very fast now, but it's still absolutely glacial in its speed compared to server-side code written in compiled languages.
In old school chat rooms, this was usually done by polling for new messages every X seconds, or doing a refresh / reload of an IFrame containing the chat.
Long Polling is how we used to do it. The web page opens a request to the server but the server does not reply. Once the event happens on the server, the server replies to the old request and the client opens a new request.
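A minimal sketch of that long-polling loop; `/events` is a hypothetical endpoint that holds the request open until something happens:

```js
async function poll(handleEvent) {
  for (;;) {
    try {
      const response = await fetch("/events");       // server replies only when an event occurs
      handleEvent(await response.json());
    } catch (err) {
      await new Promise((resolve) => setTimeout(resolve, 1000)); // brief back-off on errors
    }
    // ...then immediately re-open the request, as described above.
  }
}
```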
I think in Netscape 4 you could also make a request in an iframe which the server responded to with Content-Type: application/javascript and Transfer-Encoding: chunked. Then every time the server got an event it would write one chunk of javascript, which the client would execute immediately. This script could then call a callback in the parent frame.
Yes, but I was replying to someone who said that we got all of this with cgi-bin. That’s like saying, “I can code a modern app in assembly, therefore there’s been no progress here”.
I don't think I've ever seen DHH or the Basecamp crew advertise any of this type of stuff as some groundbreaking new approach that nobody has ever tried before. On the contrary, I usually see them talking about how those old ways of doing web development were actually better in many ways, which is why they're pushing to move away from the current "everything is an SPA" approach.
And all this new tooling makes the experience much better than it ever was in the past.
>I don't think I've ever seen DHH or the Basecamp crew advertise any of this type of stuff as some groundbreaking new approach that nobody has ever tried before
Not disagreeing with the rest of the stuff. But they literally call it "NEW MAGIC" per DHH's tweet.
This is so exciting to see, especially for older folk like me.
Almost 20 years ago, one of my professors told us before graduation that hot tech is mostly about the idea pendulum swinging back and forth. I immediately chalked it up to 65+ white wise men snobbery.
However, this is exactly that. We started with static pages, then came Ajax and ASP.NET and the open source variants, then we went full SPA, and now we are moving back to server side because things are too complicated.
Obviously tech is different, better, more efficient but the overall idea seems to be the same.
It's specifically built for Rails, so yeah, it definitely assumes full-stack control.
And there are definitely applications I would prefer to write as an SPA over the Hotwire approach. But given that the vast majority of websites are just a series of simple forms, I prefer this approach over the costs you incur from building an entire complex SPA.
It's not just that things are too complicated... the JS being sent to browsers is large and a lot of work. That requires more bandwidth, processing, and power usage on client devices. This eats phone, tablet, and laptop batteries.
> But... that's one of the pros of not having to do the rendering cycle on the server. Also caching of framework libraries off CDNs and such.
This doesn't save battery life on a device. If someone downloads a few meg of JS their browser has to parse and execute that JS locally. This use of processing uses power. If that same person had half as much JS to parse and execute it would use less power.
A CDN does not save from this happening.
When power use happens on a server it's more on the server but less on devices with batteries. Batteries aren't used up as quickly (both between recharges and in their overall life).
A server side setup can cache and even use a CDN to only need to render parts that change.
My points are that it's not all cut and dry along with considering batteries.
Oh, and older systems (like 5 year old ones)... surfing the web on an older system can be a pain now because of JS proliferation.
> Oh, and older systems (like 5 year old ones)... surfing the web on an older system can be a pain now because of JS proliferation.
This matters because the poor, the elderly (on a fixed income), and those who aren't in first world countries don't have easy access to money to keep getting newer computers.
Then there is the environmental impact of tossing all those old computers.
So, there is both a people and environment impact.
I think there's some kind of weird mentality among web devs that client-side computations are free, but server-side ones cost resources because you do more of them the more users you have.
That’s not so weird. It’s like IKEA shipping you disassembled furniture: they don’t have to pay for assembly (nor for shipping as much air). The client bears the cost of assembly, so if you don’t pay the client’s costs, it’s free.
The two things that use the most battery in a phone are the radio and the screen.
If you can do most of the work client side, the phone can turn off the radio and save battery. The amount of battery savings of course depends greatly on what the application is actually doing.
An interesting development is that the argument "common libraries will be cached in the browser" is no longer true. Chrome and other browsers are starting to scope their caches by domain, to mitigate tracking techniques that used 304 request timing to identify if the client had visited arbitrary URLs.
Yes, I'm aware that "it will be cached" lost most of its glory when bundling became mainstream, but I still hear it as an argument when pulling things from common CDNs.
In a few tests I ran, I found rendering to be fast and lightweight. If you already have prepared the associative array of values, then the final stage of combining it with a template and producing HTML doesn't strain the server, and so it doesn't help your server much to move that part to the client.
The server's hardest work is usually in the database: scanning through thousands of rows to find the few that you need, joining them with rows from other tables, perhaps some calculations to aggregate some values (sum, average, count, etc.). The database is often the bottleneck. That isn't to say I advocate NoSQL or some exotic architecture. For many apps, the solution is spending more time on your database (indexes, trying different ways to join things, making sure you're filtering things thoroughly with where-clauses, mundane stuff like that). A lot of seasoned programmers are still noobs with SQL.
Anyway, if rendering is lightweight, then why does it bog down web browsers when you move it there? I don't think it does. If all you did was ship the JSON and render it with something like Handlebars, I think the browser would be fine, and it would be hard to tell the difference between it and server-side rendering.
I think what causes apps to get slow is when you not only render on the client but implement a single-page application. (It's possible to have client-side rendering in a multipage application, where each new page requires a server roundtrip. I just don't hear about it very much.) Even client-side routing need not bog down the browser. I've tested it with native JavaScript, using the History API, and it is still snappy.
I guess what it is, is that the developers keep wanting to bring in more bells and whistles (which is understandable) especially when they find some spiffy library that makes it easier (which is also understandable). But after you have included a few libraries, things start to get heavy. Things also start to interact in complex ways, causing flakiness. If done well, client-side code can be snappy. But a highly interactive application gets complicated quickly, faster than I think most programmers anticipate. Through careful thought and lots of revision, the chaos can be tamed. But often programmers don't spend the time needed, either because they find it tedious or because their bosses don't allot the time --- instead always prodding them on to the next feature.
Probably because they are relating an anecdote from their past, and self-deprecatingly pointing out how naïve and overly-judgemental they were _back then_.
Youth are always writing off the oldies - I did it, and now that I am old, I see it happening to me - and that is ok - we need that passion to shake things up, even if they end-up eerily similar to the way things were done before...
I would point out that those who are older also tend to write off the younger. I think it's just a perspective mismatch: if I can emulate another person's perspective in my head, I can anticipate their decisions (and reasoning), so I can decide if they are being reasonable.
However if I can't understand their perspective, I have a very hard time in understanding and judging their reasonableness (because I'm basing my judgement solely off of my own experiences and memories that are similar to their circumstances).
This lack of understanding translates to seeing a lack of credibility in them. "Maybe if they were more like me, they'd make more sense, be more reasonable". This type of thinking is common in most types of prejudice.
It's why young people write off older people: "They're too old to remember what it's like being my age, or to understand how things are now".
And why the opposite occurs: "They're still too young to understand how life works yet".
Why people of very different cultures tend to be prejudiced: "Their kind are ignorant of how the world works", and the opposite: "They've never been through what I've been through, they don't understand me or mine".
All of these statements evaluate down to: "If they were more like me, they would be reasonable". Which is of course true: if "they" were more like "you", their systems of reasoning and value would be more similar to yours, and vice versa.
In computer science, it's particularly tempting for the yutes to write off the oldies, because technologies change so rapidly — I sometimes frighten the kids by mentioning that I got my first degree before the WWW was invented, and I'm far from retirement age.
Lol so did I. Ageism is a thing, and it's everywhere. At least when you're young, you don't have the excuse of already having been in the other age class. That being said, several of my older professors were entirely full of snobby shit. The older I get, the more I see how they were not trying to impart knowledge, but to gain some kind of status as "hard-ass" old men with the younger generation.
I’m glad this technique is making a comeback. The last 10 years of JavaScript on the client have been an utter shit show that left me wondering wtf people were thinking.
You have the benefit of hindsight at this time. You can draw parallel to history of flight and all the crazy contraptions that people attempted. Great technology can emerge from the combination of numerous shit shows. The whole is greater than the sum of the parts.
I'm not a front end engineer, but it always seemed crazy to me. I remember testing out the Google Web Toolkit when it came out more than a decade ago, and the craziest thing about it to me wasn't the Java --> JavaScript compilation, it was that the server just dumped an empty page and filled everything in with JavaScript on the client.
Then, remember the awful awful #! URLs? Atrocious, and seemed like obviously a terrible idea from the start, yet they spread, and have mostly died, thankfully. But even with the lessons from these bad tech designs, new frameworks come out that repeat mistakes, yet get incredible hype.
AWS used to rely on GWT for their consoles (they haven't for a few years now; most folks migrated away some 5 years ago).
It's why they used to be horrendously bloated with large javascript bundles that took so long to process on the client side.
Roughly speaking the idea was "We don't have any JavaScript developers, but we do have Java developers. GWT allows us to bridge that divide". Neat in theory, and an understandable decision, but diabolical in practice!
Hashbangs preclude even the option of the server seeing the state that the client wants from the first request, necessitating several hops. PushState, though it allows URL transitions without a full reload, is an entirely better and different idea.
Around the time that GWT came out, offshoring was a big thing. And most of the contractors only knew Java. Also Java was the trusted language and javascript was not.
The only big GWT project I've ever been on was a governmental project that I won't go into (because it's Danish and I would have to describe a bunch of stuff that everyone in Denmark knows and nobody outside would probably care about), but the company providing it was porting their Java version to JavaScript and had a significantly large codebase to leverage.
People have been pointing out it's a shit show with no end in sight for the entire duration of the phase. Pointing out the performance impact and cost to end users, how diabolical it is for those on higher latency or poorer network connectivity (i.e. most of the world), and so on.
Same thing as always happens with these pendulum swings, newer engineers come in convinced everyone before them is an idiot, are capable of building their new thing and hyping it up such that other newer engineers are sold on it while the "old guard" effectively says "please listen to me, there's good reasons why we don't do it this way" and get ignored. Worse, they'll get told they're wrong, only to be proven right all along.
I'm not denying there are obstructionist greybeard types that just refuse to acknowledge merits in new approaches, but any and all critique is written off as being cut from the same cloth.
It's perfectly possible to iterate on new ideas and approaches while not throwing away what we've spent decades learning ('Those who do not learn history are doomed to repeat it'), but tech just seems especially determined not to grow up.
I guess I've become a grey beard. I've done the whole journey from CGI everything to a bit of js to SPA. As much as I'd really like to be nostalgic about the good old days, there are reasons everything got pushed into the client. One of those reasons is maintaining state.
"HTML over the wire" isn't really a return to the good ol' days. It's still the client maintaining state and using tons of js to move data back and forth without page reloads. It just changes the nature of 1/2 the data and moves the burden of templating back to the server.
It is amusing that they make a claim that reads a lot like "eliminate 80% of your Javascript and replace it with Stimulus". What is Stimulus? A Javascript framework.
Very true. I think that mostly, web dev mainstream has taken a rational path. It’s with the benefit of hindsight as you say, or the yoke of unusual requirements, that people now say ”we did it all wrong”.
The node runtime is actually pretty quick (It's roughly as fast as an optimizing compiler from 10-15 years ago, give or take, which is fairly impressive), but even TS is still makeup on a wart - you can't escape JS's more bizarre semantics that easily.
I think it's less about loving JS so much, and more about not having any options on the client. If there were any better client options they might have won!
Yeah people shouldn't make applications in programming languages. If your application can't be made with html/css then it's bloatware. All this java, .net and C are totally unnecessary.
They are, look at the context of what they're saying. JS is simply a programming language in a VM like plenty of others; there's nothing inherently bad about it. But in every thread there's the uneducated hate for it, completely misunderstanding that html/css static pages don't solve the problems a programming language does.
I'd like to see these people make applications in pure XML. No programming.
No, the argument has never been "replace all JS with static HTML/CSS". The argument is "JavaScript frontends are becoming unnecessarily bloated, slow, and complicated, and we can do better". Solutions like the one Basecamp is proposing with Hotwire include pushing as much rendering logic as possible to the server, where you're using a language like Ruby for logic. Nobody thinks you can just remove all logic from a web application unless it's literally just static content.
And even with Hotwire, you're not getting rid of JavaScript entirely. You can write it with Stimulus. The idea is just that frontend web development has become a mess, and it's possible to simplify things.
“Bloated.” This is actually the opposite. Go into a language like python and install numpy (30mb) and then come back to complain to me about a 2mb js bundle.
This argument is so bogus if you look at alternative language dependency sizes.
While there are issues on the Python side, I think it is quite unfair to use numpy as an example.
My reasoning is that the numpy project is meant for scientific and prototyping purposes, but many times people use it as a shortcut and include the whole thing in their project.
That being said, the quality in these packages does vary depending on who developed them. But I think this is a problem that exists with all languages where publishing packages is relatively straightforward.
The bad experiences stick out to people, whereas all the well behaved JS-heavy apps out there likely don't even register as such to most people.
Even with SPAs, it's very possible (and really not that hard) to make them behave well. Logically, even a large SPA should use less overall data than a comparable server-rendered app over time. A JS bundle is a bigger initial hit but will be cached, leaving only the data and any chunks you don't have cached yet to come over the wire as you navigate around. A server-rendered app needs to transmit the entire HTML document on every single page load.
Of course, when you see things like the newer React-based reddit, which chugs on any hardware I have to throw at it, I can sort of see where people's complaints come from.
I don’t think the problem is with programming languages, but with JavaScript specifically, since it was never designed to be stretched this far. TypeScript is an improvement, but if you could write C or whatever on the client side and run it as easily as JS I think more people would go that route.
C/C++ and Rust actually run very fine client side nowadays. However, it's just more cumbersome to make UI in those languages, so it's mostly used to port existing libs or for performance.
Indeed, this is exactly how web chats worked in late 1990s, except for the use of WebSocket (they used infinite load instead). They even seem to revive frames, another staple of 1990s design!
The greatest trick the devil ever pulled is convincing people that shared hosting is preferable to dedicated, and then charging them way more money for it.
Is it really a pendulum, or is it more that this was always an idea with merit that's now finally seeing wider adoption because it's become more widely available? (In part, I understand, due to some IBM patents that expired 10 or so years ago.)
I got a chuckle out of Apple M1 chip touting having shared video memory as a big step forward. (Which it is, but is still amusing to me how it might have sounded like a groundbreaking innovation to a layperson.)
In the PC industry, that's called UMA and has been around for a few decades, synonymous with ultra-low-cost (and performance) integrated graphics. To hype it so much as a good thing, Apple marketing is really genius.
Yes. I have thought about this a lot. There are cycles...
Like thin client (VT100), to thick (client/server desktop app), to thin (browser), etc.
Similarly, console apps (respond to a single request in a loop), to event-driven GUI apps, to HTTP apps that just respond to a simple request, back to event-driven JS apps.
It depends on how you define the boundaries, but history rhymes.
So when and how does the p2p / distributed pendulum swing back? When do we stop using AWS mainframes for everything?
I sense that you're right about swings requiring change to older techniques. But I think there's also a component of being fed up with the direction things are currently facing.
Unfortunately p2p computing is hindered badly by the copyright industry. The research is still active and we have a lot of ideas for distributed computing and p2p beyond file exchange. A lot of it is used today to distribute mainframe infrastructure instead of creating truly distributed networks.
Conventional wisdom is discrimination against privileged groups such as white men is less offensive because they’ve endured so much less of it.
On one hand, it’s true. It’s part of white privilege which is tangible.
On the other hand, however less often people in a privileged class are realistically impacted by discrimination, it’s still > 0.0%. Since it usually costs nothing more to include everyone it seems useful.
But I think the biggest reason it’s important to care about discrimination wherever it shows up and not let people off the hook is that it’s unifying.
There’s a story out of Buddhism that suggests it’s important to think equally kindly about rich people, kind of similar in that they’re a privileged class.
I know it’s a hard sell. I don’t do it justice here. However a powerful argument can be made that not disparaging privileged classes, actually helps us all in the long run/big picture.
If I get down voted I understand, that’s ok. If it makes a difference I don’t mean to minimize the 10,000 year history of pain suffered by any humans due to discrimination.
Racist people like you make peaceful protests and working for change against racism so much harder. You're just out for revenge and your rhetoric shows it.
Edit: I've had just about enough of people using "white privilege" to justify violence and blatant discrimination because "they haven't been exposed to enough". It's just another way to justify racism. Plenty of white people live in poverty. It's not ok in either direction.
> However a powerful argument can be made that not disparaging privileged classes, actually helps us all in the long run/big picture.
The powerful argument is that you should treat everyone well, period, and not do some kind of calculation to decide how cruel you're allowed to be to them.
White privilege doesn't exist and is not "tangible". It is just some academic theory which was accepted with no evidence and with no common sense. "Privilege" is a combination of many factors, so isolating it to skin colour as if it sums up everything about a person is just plain stupid. Not to mention that it all comes out of some American internal issue which doesn't apply to many other "whites" around the world. Even the concept of white is different in different parts of the world. Can Americans leave us alone with this idiocy and keep it to themselves?
So at first you're sympathising with discriminating against those that you see as suffering less and then your big idea is treating people equally and that's a 'hard sell'. That is like being a basic good fucking human, it's not a novel idea.
Sorry if it was unclear that’s not what I meant to imply.
First, acknowledging the thinking behind a common opinion is not the same as sympathizing with it. It’s only stating a concept I disagree with.
Secondly, it’d be nice to take credit for this, big fucking idea, but unfortunately it’d be thousands of years too late. I explicitly mentioned the source.
Finally, I don’t see how it’s not a novel idea. If you started asking people to think kindly about rich Wall Street bankers or cable company executives, would everyone be instantly on board?
I know those are extreme examples but that was the point of the story. What’s indeed not novel is to say, think well of all people.
The hard part is when you try to actually apply it equally, including to less popular but highly privileged classes of people.
I don’t claim that I can do it all the time, I’m sure I don’t in fact. However for any ideal shouldn't it be ok to try and work towards it over time?
I completely agree with this. For reference, I'm a relatively new developer - 3.5+ years of experience in my first developer position.
At the beginning of college everyone was SUPER into NoSQL. All my friends were using it, SQL was slow, etc.
Nearing the end of college and the beginning of my job I began seeing articles saying why NoSQL wasn't the best, why SQL is good for some things over NoSQL, etc.
Technology is cyclical. 10 years from now I expect to read about something "new" only to realize that it was something old.
The NoSQL trend was so terrible. Anyone starting out right in that time frame where mongo and other NoSQL DBs were getting popular was really done a disservice.
I sit in design meetings all the time where people with <5 years experience go out of their way to avoid using a relational database for relational data because "SQL is slow". They will fight tooth and nail, shoe-horning features into the application which are trivial to do with a single SQL command.
I helped out on one project led by a few younger devs who chose FireStore over CloudSQL for "performance reasons" (for an in-house tool). They had to do a pretty major rewrite after only a few weeks once they got around to deleting, because one of their design requirements was to be able to delete thousands of records; a trivial operation in SQL, but with FireStore, deleting records requires:
> To delete an entire collection or subcollection in Cloud Firestore, retrieve all the documents within the collection or subcollection and delete them. If you have larger collections, you may want to delete the documents in smaller batches to avoid out-of-memory errors. Repeat the process until you've deleted the entire collection or subcollection.
> Deleting a collection requires coordinating an unbounded number of individual delete requests.
Turns out, once they started needing to regularly delete thousands-millions of records, the process could run all night. Luckily, moving over to CloudSQL didn't take very long...
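For anyone who hasn't had the pleasure, a rough sketch of that batched-delete dance in the style of the Firestore Node.js Admin SDK (the collection name, batch size, and the SQL contrast below are illustrative):

```js
async function deleteCollection(db, path, batchSize = 500) {
  for (;;) {
    const snapshot = await db.collection(path).limit(batchSize).get();
    if (snapshot.empty) return;                       // nothing left to delete
    const batch = db.batch();
    snapshot.docs.forEach((doc) => batch.delete(doc.ref));
    await batch.commit();                             // repeat until the collection is empty
  }
}

// The relational equivalent is a single statement, e.g.
//   DELETE FROM records WHERE project_id = 42;
```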
> I sit in design meetings all the time where people with <5 years experience go out of their way to avoid using a relational database for relational data because "SQL is slow"
I mean, this is just dumb. I have less than 5 years experience and I understand that SQL isn't "slow", there are just different tradeoffs between SQL and NoSQL databases and you have to pick the right tool for the job.
I just have no idea why this trend got so popular because in my undergrad CS program, we all had to take a database fundamentals class. And once you actually understand how databases implement transactions, rollbacks, atomicity, etc. and when you use said SQL databases and see how fast the queries actually are, how in the world could anyone convince you that a non-ACID database with no defined schema is better?
NoSQL is very much like the databases that were around in the 1960s ("navigational" databases, nested sets of key-value pairs). E. F. Codd proposed a database of tables (which he, a mathematician, called "relations") to solve a number of problems that these primitive databases were having, one of which was speed.
The funniest part is seeing companies jump onto the distributed NoSQL bandwagon with their fundamentally relational and transactional data structures and then reinvent the transactional relational database.
> I immediately chalked it up to 65+ white wise men snobbery.
Perhaps this can be the opportunity for you to look through your past and consider and reevaluate other ideas you discarded because of your own bigotry.
The Rails people never went for SPAs though. Releasing another server-rendering AJAX thing for rails (previous was TurboLinks) no more represents "the pendulum swinging back" than a new version of COBOL that runs on mainframes represents the pendulum swinging back to mainframes. If this approach gains market share against React etc., then that will be meaningful - but don't hold your breath, there are legitimate reasons for the move to SPAs and also an enormous amount of institutional inertia behind it.
Lots of Rails back-end applications power SPAs on the front end. Sometimes for good reasons; often enough just because it was more "modern" - but much less efficient in terms of programming effort.
I don't think that's entirely accurate. Lots of Rails users went the SPA route the second stuff like Backbone came out. Wycats was big in the Rails community at this time and he spearheaded Emberjs. The Shopify guys were (and are still) big in the Rails community and they created their own Batman.js. It's just that the Rails core devs made a decision to not go that route. They were even working on their own front end framework at one point and after some time they decided to kill it in favor of just using pjax/turbolinks. You can get your 80% case accomplished with these technologies with substantially less effort. There are definitely reasons to go SPA, but the dev community at large has jumped on the hype train here without really identifying that using these technologies are a good idea for their use case. I mean, there's a lot of people doing CRUD with React. That's crazy.
The interesting thing is that if you don't think of the browser as just another runtime, nothing more than The VM That Lived (where applets and flash died), but actually think of your applications as Web Applications, then you get the ideas behind this faster.
JSON is just a media type that a resource can be rendered as. HTML is another media type for the same resource. Which is better? Neither, necessarily, it depends on the client application. But if you are primarily using JSON to drive updates to custom client code to push to HTML, well, that should give you something to think about.
Rails never did, and even actual SPA frameworks (e.g., React) have had SSR support/versions for quite a while. Basecamp introducing yet another iteration of front-end-JS dependent mostly-SSR for Rails isn't a pendulum swinging anywhere.
This doesn't apply to all tech. Web just doesn't have a good solution because it's so complex. You'll always be making compromises. Some compromises are trendier than others at any given time. IMO, simple tech that you don't have to think about doesn't have these "pendulum" effects in its usage. You forget it's there.
I also prefer boring and love how HN works. That said though, modern consumers expect much more. We tech people like command lines, the ultimate in simple and boring. Modern consumers often want animation and things[1].
Source: UI/UX researchers tell me this when I push back and say "let's keep the tech simple and forgo some of the animations, etc in the name of using simple OOTB stuff without hacking thousands of lines of JS together." Also family members will tell me the same things.
Consumers want electronic starters on their chainsaws – but they're less reliable, prone to error, more expensive, etc... in the end, us engineers gotta make products that people will buy.
I'm a developer and I definitely want that stuff some of the time. On platforms that can handle it, absolutely. I love all the animations in macOS and I always have. GNOME and Windows are nice but they don't feel like home in the same way, they don't have sophisticated and fluid animations.
They need to be performant and they need to improve the usability of the system, not reduce it. That's the criteria. It's often not achievable with the web, though people try.
Great question, largely yes web page / tech but I used to work on a Qt desktop app and it was the same story there. Luckily Qt made animations fairly easy and performant.
Not sure when undiscoverable command line interfaces became simple. Simple to make, yes, but not easy to use compared to the most basic visual solution whose creator put a lot of thought into it.
Thanks for the link. I'm new to JS, but it seems funny to me how the first block of one-liner functions reads like it's just making up for how awful the JS standard library is.
Although it would be great if HN notified you of replies and had a dedicated place to see them besides "threads", which also show comments no one replied to.
LiveView does not need Redis, nor does it depend on any particular background job processing solution. It's plenty fast and minimal. On the other hand, it does not support or plan to support mobile use cases. Others can chip in to expand on this ;).
This is exciting! I am less convinced by the Stimulus part though. I currently prefer the Htmx (ex Intercooler) + AlpineJS combo, or VueJS components sprinkled into views for heavier stuff (setup is a bit tricky but then it’s very easy to enhance views with augmented html). Am I missing something?
I agree with the need for standardization. It's just that I find the above-mentioned libraries' "hyper declarative" approach more in line with an HTML-centric philosophy than Stimulus's logic-in-a-JS-controller approach.
An SPA moves all the routing/templating to the browser side.
Server-side rendering does it all on the server and spits out HTML (legacy CGI style).
Between these two: a static HTML template downloaded from the server, then Ajax to fetch JSON and update the page - the old boring way, but it might still be the best middle ground?
Unless I need to make a complex desktop-style GUI program, where an SPA could be a reasonable choice, I will just do it the old HTML/Ajax way, not fully geared towards server- or client-side rendering; life is much easier.
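For reference, that middle ground is about this much code; a minimal sketch where the `/api/items` endpoint and the list markup are illustrative:

```js
// The static HTML ships the template; Ajax fetches JSON and fills in the blanks.
async function refreshItems() {
  const response = await fetch("/api/items");
  const items = await response.json();
  document.querySelector("#items").innerHTML = items
    .map((item) => `<li>${item.name}</li>`) // escaping omitted for brevity
    .join("");
}

refreshItems();
```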
Most of the comments here are along the lines of: this is old tech!
But this isn’t really about the tech; it’s as much a productivity hack that allows people to be much closer to that old-fashioned concept of a full stack developer
This is what has allowed Basecamp to create Hey with a team just a fraction of the size of their competitors
Somewhere along the way Mythical Man Month was turned from a cautionary tale into a competition, and it only ever seems to happen once VC gets their mitts on a company.
Does it really take a team of 800 engineers to maintain your fancy todo list app, rather than your focussed team of 10 or 20? What are all of those people doing except piling on organisational cruft so the business has no choice but to over-encumber itself with talent? Most of them aren't working on the product, that's for sure.
Yeah somewhere along the line people confused success with the amount of money you’ve raised and the number of people you employ. This just incentivises complexity
At scale, yeah it does take a lot of people to do this. It has nothing to do with the type of app and everything to do with how business works.
At scale you have huge enterprise customers with complex setups and they want to integrate your fancy todo list app into their setup and will pay you tons to do it. Your fancy todo list app suddenly needs to have all sorts of security certifications, needs to guarantee some type of uptime, needs to be able to integrate with all sorts of systems and other products (some of which none of your current engineers will have experience with). A new device will be released (think Apple Watch) and you're leaving money on the table if you don't hire some engineers to put your fancy todo list app on that platform along with the 5 or 6 others it's already maintained on.
At scale, 1 in a million type bugs/problems come up every day and you need more engineers to deal with them.
At scale, you are generating and producing (and thus need to be collecting and analyzing) an amazing amount of data, which should be used to better your product, learn about your customers and try to grow your business into providing other products and services your customers want / need. All that data collection, storage, and analysis requires more and more engineers.
There are only a handful of companies out there who might be doing that, namely Google, Facebook and a few others. No other company is at the scale to meaningfully affect the market.
It's a productivity hack for companies whose backend devs strongly prefer Ruby to JavaScript, like I presume the Basecamp team is... AFAIK this doesn't apply to the average web dev teams nowadays, most of whom have never even used Ruby before...
There are plenty of people using RoR but feeling equally productive in JS, and they'll benefit significantly less than backend developers who were forced into using it but never really liked it.
So, what's your argument exactly? In every language there's a number of back-end devs who simply don't like using JavaScript; they want to do everything in a single language, and this is the tool for them. And that's great. I'm just saying that I don't think there's a significant number of them, that's all. Most young people seem to be enjoying modern JavaScript nowadays, and I see zero benefit for them in any of those tools. And the popularity of tools like Livewire kind of shows it, as I've heard a lot of Laravel people talking about it, but I personally don't know a single one who actually switched to using it for work in the end... maybe it takes more time, we'll see...
Turbolinks, afaik, loads a larger part of the page upon hover and click, replacing all of the old HTML, while this only updates the parts that need updating. This will prevent forms from being cleared or other state in the DOM from being wiped.
Exactly. Targeting specific sections of the page is pretty helpful when dealing with dialog content for example. If a dialog is open and you're managing something like the state of a form within that dialog, you don't want anything else outside of that dialog content being touched during a page update.
How do they intend to keep "Send massive files without using other apps" from being abused? Isn't this one of the key reasons to use a separate system?
I can see this becoming a big problem on their ecosystem if it starts to scale it'll start to be very costly.
From a security engineer's perspective, these things seem great, but they ultimately become really tricky where DOM nodes need to be produced safely. String concatenation doesn't cut it for obvious reasons, and then you have mXSS from mismatches between the backend server's and the browser's interpretation of the HTML.
This is sort of ironic because many years ago what caused me to move functionality to client side was the fact that Ruby on Rails was slower on server side and required horizontal scale and things like memcached to meet my demands.
Laravel's crew had a very similar idea with Livewire (even the name sounds similar, probably not intentionally) and it also created a lot of buzz back when it was released - but I don't think it really got past the "let's play with it a bit" step for most teams. We'll see how well this one does...
What are the pros and cons of this technique? Seems like this pushes a lot of the rendering compute power from the client to the server. I guess that is good for low power devices but now you need more powerful servers?
Am I the only one who was very, very surprised when the screencast immediately jumped into Rails? There's no mention of Rails anywhere on the landing page.
Ya same here. I've been doing web dev for 13 years and was genuinely looking forward to this magical HTML only solution of theirs but left feeling deceived after finding out it's rails.
This is a continuation of existing Rails ideas: "This is a conceptual continuation of what in the Rails world was first called RJS and then called SJR, but realized without any need for JavaScript. The benefits remain the same" [1]. So if this doesn't wow you, that's why; Rails already largely can do this, but you have to write some javascript yourself. This does seem simpler for the toy example. I'm curious as to how it works with nested templates, there are no examples for it that I noticed.
It took a decade for the industry to realize that JS frameworks were overhyped and that the degraded productivity they caused was not worth it. I think they will be replaced by Hotwire-like technologies in the next decade.
The same thing will happen to micro services (especially, distributed monolith) and Kubernetes. They are just overhyped.
I don't think this is going to happen. Hotwire-like technologies have existed for a long time with Turbolinks; adoption is low and it's very impractical to work with compared to a split backend API + frontend (framework or not, as you prefer).
The last decade in tech has shown that productivity does not matter most and I'm not sure the pendulum is swinging back (yet?). The only time productivity matters most is maybe small, bootstrapped businesses where cash isn't infinite and actual value delivered is what matters. In fact, the company behind this is Basecamp which is almost an outlier in tech at this point, as they're one of the few out there that actually makes its $$$ by selling products/services people pay money for, as opposed to endless VC money or acquisitions.
When it comes to VC-funded crap or big legacy enterprise trying to "modernise" itself, over-engineering allows grifters to twist the situation for their own personal gain. The business problem the tech is supposed to solve often warrants a simple solution, but why would someone solve it with a simple solution and 3 people when they can bring Kubernetes, microservices, "service mesh", blockchain and multiple programming languages to the mix and suddenly become an "engineering manager" managing 30 people, put big words on their resumes and speak at conferences about how they solve big (self-inflicted, as a side-effect of the overengineering) problems where more of their peers (either new and genuinely believing this is the proper way to do things - like I once was - or experienced enough to know this is BS but support it as it paves the way for their own career) encourage them?
This happens at multiple levels too, it's not just developers or would-be engineering managers. The funding side of things is also broken in the sense that you'll attract more investors and raise more money (some of which you'll keep in your pocket as salary, even if the company folds in the end) if you throw big words and pitch an over-engineered solutions such as blockchain as opposed to a simple and proven one (even though the latter is more likely to actually pay off).
As others have noted, seems reasonably similar to LiveView, Livewire and Blazor. I’m somewhat bullish on these approaches - server side rendered monoliths (Rails, Django, etc.) are SO productive, at least for the first few years of development, but lack of interactivity is a big issue, and this solves it well.
However, another big issue is the dominance of mobile. More and more, you’ve got 2-3 frontends (web and cross-platform mobile, or explicitly web, iOS and Android), and you want to power them all with the same backend. RESTful APIs serving up JSON works for all 3, as does GraphQL (not a fan, but many are). This however is totally web-specific - you’ll end up building REST APIs and mobile apps anyways, so the productivity gains end up way smaller, possibly even net negative. Mobile is a big part of why SPAs have dominated - you use the same backend and overall approach/architecture for web and mobile.
I’d strongly consider this for a web-only product, but that’s becoming more and more rare.
Everyone who is talking about how the route the industry took with SPAs was just a silly mistake, and that we should go back to the good old days of PHP are forgetting that at the end of the day the most important thing is to choose the best tool for the job at hand.
This, while very interesting and might have a preferable set of constraints for some projects, is simply not a good fit for many others, as you mentioned in your comment. This looks amazing, and I would definitely try it for a project in which it would fit, but I don't really see a reason to disparage the work others have been doing over the past decade. We need those other tools too!
> More and more, you’ve got 2-3 frontends (web and cross-platform mobile, or explicitly web, iOS and Android), and you want to power them all with the same backend.
> RESTful APIs serving up JSON works for all 3, as does GraphQL [...]. This however is totally web-specific - you’ll end up building REST APIs and mobile apps anyways, so the productivity gains end up way smaller, possibly even net negative.
I bet someone will produce a native client library that receives rendered SPA HTML fragments and pretends it's a JSON response. They might even name it something ironic like "Horror" or "Cringe".
That said, an ideal API for desktop web apps looks rather different than one for mobile web or native clients. Basically, for mobile you want to minimize the number of requests because of latency (so larger infodumps rather than many small updates) and minimize the size of responses due to bandwidth limitations and cost (so concise formats like Protocol Buffers rather than JSON).
It is definitely possible to accommodate both sets of requirements at the same API endpoint, but pretending that having a common endpoint implies anything else about the tech stack is rather disingenuous. If you want server-side rendering and an API that delivers HTML fragments instead of PB or JSON, that can be done too.
Isn't GraphQL supposed to solve this problem? You have one GraphQL API and each client requests only the information it needs. Maybe the responses are still JSON but I would think you would come very close to an API that serves all the clients.
Even without GraphQL, you can accommodate both sets of needs. I said as much. I'm also saying that the argument about the user-facing tech stack is bogus.
> RESTful APIs serving up JSON works for all 3, as does GraphQL (not a fan, but many are). This however is totally web-specific
HTML is a machine-readable format, like XML and JSON. Have your back end represent a given resource as microformatted semantic markup, send it gzipped over the wire, and you've got the data exchange you need, even if your mobile app isn't already a dressed-up webview.
Are you still referring to dedicated API routes, or are you talking about annotating your UI to the point where it can serve as the API as well? I remember the latter being the vision behind things like RDFa, but those approaches never took off, for a variety of reasons.
Generally the projects I've felt best about have two features:
1) The API knows how to represent resources across multiple media types, usually including at least markup and JSON.
2) UI is well-annotated enough that developers and machines find it easy to orient themselves and find data.
But you're quite right that this isn't common. I have my own guesses on the reasons why. My observation's been that the workflow and stakeholder decision making process on the UI side places semantic annotation pretty low on the priority side; most places you're lucky if you can get a style guide and visual UI system adopted. And there has to be cooperation and buy-in at that level in order for there to be much incentive to engineer and use a model/API-level way of systematic representing entities as HTML, which often won't happen.
> annotating your UI to the point where it can serve as the API as well
At that point you might as well serve XML and use an XSLT transform (+ CSS) to render the view on the client (yes, this is still possible without JavaScript).
For mobile, they used Turbolinks to release and maintain ios and android basecamp apps based on minimal os-specific chrome code and back-end web-based "pages".
This! I am not a fan of SPA-everything. This looks like a great framework for web only. But as you said, what about mobile development? Sure, they are still going to publish Strada. But will it also work, for example, with Flutter without additional friction? How about services that need to consume output from each other? I believe parsing HTML is more expensive than JSON.
In the case of Rails, if you're happy with a RESTful API it handles serving different kinds of content such as JSON pretty seamlessly via the respond_to method, i.e. if you want JSON ask for JSON, if you want rendered HTML ask for that.
However I think that for iOS they're still offering server side rendering via Turbo-iOS and Turbo-Android so you can build quickly and then replace that later if you need to.
This is one of the primary promises of MVC in the first place: views can be rendered independently of controllers and models. For a given controller method call, a view can be specified as a parameter.
In this case, swap "view" for JSON sent back over the wire...
A few years ago, I had this bright idea of updating a site I maintained to be a single page application (SPA). I was very inspired by this site (https://pocketjavascript.com/blog/2015/11/23/introducing-pok...) and thought it would make the user experience a lot better by making the site easier to navigate.
I preloaded data such that a user could navigate via search or clicking through some drill-down menus without having to wait for a full page refresh. The data would hydrate the views so that the transition time between views was almost always fast, even on slow mobile traffic.
One aspect that was very difficult to figure out was incorporating views that would require more data from the server. Once a user drilled down to such a view, the challenge became how to load the view via AJAX without having to make another AJAX call to load more data.
The solution was to return server side rendered HTML instead of JSON...
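Roughly, the pattern looks like the sketch below: fetch a server-rendered fragment and inject it, rather than fetching JSON and templating on the client. The URL and container id are invented for illustration, not taken from the original site.

```typescript
// Hypothetical sketch of "return server-rendered HTML instead of JSON".
async function loadDetailView(itemId: string): Promise<void> {
  const res = await fetch(`/items/${itemId}/detail`, {
    headers: { Accept: "text/html" },
  });
  const html = await res.text();

  const container = document.getElementById("detail-view");
  if (container) {
    container.innerHTML = html; // one round trip: the markup arrives ready to show
  }
}
```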
Looking at the demo, this seems very similar to some of the things I had to do minus 95% of the hacky Angular/PHP code I had to set up.
This is a killer feature to have in Rails and I am looking forward to learning more about the conventions that will surround the implementation.
I'm a big fan of pure HTML pages because of performance and simplicity, but the techniques presented here are neither of those.
For performance, server-side rendering is just bad. It won't scale to high-volume traffic, costs too much money, and is complicated to maintain. Hosting static HTML files is more performant than that.
For simplicity, the current protocol relies on so much "magic" to give the impression of simplicity. I think that's just an illusion of simplicity.
If you can get away with hosting static html files, there's not much else that's more performant.
The comparison is only between server-rendered HTML and a backend API plus some local frontend rendering, which is not as performant and, more importantly, is a lot more complex to implement technically.
Two points on client-side JS have been a constant for me for many years:
- Excessive JavaScript contributes to poor web performance
- All sorts of things cause JS to break, which in turn breaks other interactivity; often I end up using incognito mode
During lockdown, I struggled ordering groceries because 20 seconds of JavaScript execution on the main thread made my "delivery slot" expire!
Similar to heydonworks.com stating "Please disable JavaScript to view this site", I'm glad Hotwire is encouraging folks to think about things a little differently.
The introductory video is far too long, far too technical (I know a lot about web dev but don't know a lick of Ruby or Rails) and far too fast to follow. I was really interested in Hotwire because of the comments here and now… not so much anymore.
(Which of course doesn't say anything about the quality of the tech. I just thought I'd post my first impression of the website.)
I'm not from Basecamp but it's interesting you mentioned it's too fast. I noticed that too and this is coming from someone who listens and watches everything at 2x speed.
I've seen a lot of DHH's talks and demos and never had a problem listening to them at 2x. This is the only time where it feels like he scripted out the entire video word for word and then talked over a screen recording that was recorded separately from his voice. This video sounds nothing like what he normally sounds like (both tone and speed). I really wonder if he recorded this in a booth.
Pure JS developers trying to jump off the JS train have a few decent choices now:
- Elixir / Phoenix
- .NET / Blazor
- Laravel / Livewire
- Rails / Hotwire
Or hold on for the JS ecosystem to figure out good patterns that balance performance/maintainability (React Server Components look ok, and will probably trickle into Next.js for those who want a framework).
Does one of the above stand out as a good thing to learn from a career perspective, say, for a JS developer growing tired of frontend?
Is the thirst for React developers ever likely to dry up and leave devs who know these frameworks with an advantage?
So far, this looks very much tied to some Rails gems: documentation for installing outside of Rails is pretty much non-existent, which I guess is to be expected for a beta. My issue with Turbolinks/Stimulus is really that it's developed pretty much behind closed doors at Basecamp and we maybe get a shiny new release every few years, instead of being developed as a more open source project with incremental improvements and input from a wider group of contributors with more diverse needs and insights than one small team at one company.
Generally people argue that static HTML is more energy efficient than server side rendering.
I wonder if that is true in the case of static HTML "that loads an SPA", versus server-side rendering with a proper cache in place. Loading the SPA will still consume a good amount of energy on each client computer. Would love to see some numbers.
Lots of hype, lots of projects and approaches, and lots of opinions about what is more convenient, but few actual measurements of resources (space, time, what else?). Maybe it's silly of me, and the difference in energy terms is irrelevant.
My intuition is that network transfer uses almost no energy, on a marginal basis, because the network links are always on anyway. When you pay for bandwidth, you're paying the amortized costs of building and maintaining the infrastructure.
Rendering a SPA on a client device consumes CPU time, which definitely uses energy proportional to the number of client devices.
Assume the server uses fewer clock cycles per render than the client would--e.g., because of optimized software, hot caches, or tuned hardware.
Therefore rendering on the server should always be more energy efficient than on the client.
That is my intuition also. I heard Vercel's founder on some talk claiming the energy-efficiency of their approach, and thought that should be proved with facts (mostly because of the "x clients" factor you commented). Thanks.
Isn't this replacing the complexities of JS libraries like react/redux-saga with the complexities of these new server-side libraries..? Reduced bundle size looks like a big advantage & this will hopefully encourage the adoption of lighter front-end tech like preact/inferno/svelte. But the downside seems to be more resource consumption on the back-end, especially if using websockets, and so higher cost & more vulnerability to a DDOS? Also, what about offline support?
Yeah, I'm interested about offline support. I might have to check out the HEY trial on web and on mobile.
I think this HTML SSR approach has different tradeoffs and uses compared to Single Page Applications.
HTML SSR works best with a stable connection and proximity to the server for good-enough latency. The higher cost on the backend is a tradeoff against how much computation/complexity you want to offload to the client, and also which language/framework you'd rather work with.
Some HN comments from previous related topics suggest these approaches (Hotwire / LiveView / Blazor? / StimulusReflex / Intercooler / HTMX / Unpoly) fare better with apps targeting a specific region so roundtrip latency is lower. One can optimize the reads with DB replicas + app servers closer to any user, but the write DB is the bottleneck (I think). This also ties in with the feedback that US/EU HEY/BC users perceive the app as snappy enough while AU users feel the latency a lot more.
A properly built SPA should be more resilient to network instability and latency and expose a lot more offline capability.
I feel like the ideal app is HTML SSR + an app shell to serve some offline capabilities. Scaling the write DB globally is a hard problem though.
The overview video touches on how some components of the Hey app are aggressively cached, like menu popups and such - so I imagine your "offline" capability comes from the fact that anything you need to run the application offline is just cached from the last online use.
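One way to get that kind of behaviour (purely a sketch; I don't know what Hey actually does under the hood) is a service worker that precaches an app shell and falls back to the cache when the network is unavailable. The cache name and asset list below are made up:

```typescript
// sw.ts - hypothetical app-shell caching; not Hey's actual implementation.
const CACHE = "app-shell-v1";
const PRECACHE = ["/", "/assets/app.css", "/assets/app.js"];

self.addEventListener("install", (event: any) => {
  // Precache the shell so the UI chrome can render with no network at all.
  event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(PRECACHE)));
});

self.addEventListener("fetch", (event: any) => {
  // Prefer the cache (works offline), otherwise go to the network.
  event.respondWith(
    caches.match(event.request).then((hit) => hit ?? fetch(event.request))
  );
});
```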
Okay this is a bit meta, but the whole cluster of "everything old is new again", "the pendulum of fashion has swung", "nothing new under the sun" takes is ignoring what tends to drive this sort of change: relative costs.
The allure of xmlhttprequest was that over connections much slower than today, and with much less powerful desktop computers, a user didn't have to wait for the whole page to redownload and re-render (one can argue that focusing on better HTTP caching on the server and client might have been smarter) after every single user interaction. This was also much of the draw of using frames (which were also attractive for some front-end design use-cases later re-solved with CSS).
As apps got more complex, clients got more compute, bandwidth grew, and web audiences grew, offloading much of the page rendering to the client helped both contain server-side costs and increase or maintain responsiveness to user interactions.
Now, desktop client performance improvement is slowing (this isn't just slower chips; computers are also replaced less frequently), average bandwidth continues to grow, and app complexity and sophistication continue to grow. With server compute cost falling faster than audience size grows, shifting HTML rendering back to the server and sending more verbose pre-rendered HTML fragments over the wire can make sense as a way of giving users a better experience.
> The allure of xmlhttprequest was that over connections much slower than today
As someone who implemented a SPA framework prior to "SPA" being a word much less React or Angular, I have to say for my company, it was all about state management.
Distinguishing between web apps (true applications in the browser), and web pages (NYT, SEO, generally static content), state management was very hellish at the time (~2009).
Before that, pages were entirely server rendered, and JavaScript was so terrible thanks to IE and a single error message (null is null or not an object) that it was deemed insanity to use it for anything more than form validation.
However, with the advent of V8, it became apparent to me as an ASP.NET developer that a bad language executing at JIT speeds in the browser was "good enough" to avoid sending state back and forth through a very complex mesh of cookies, querystring parameters, server-side sessions, and form submissions.
If state could be kept in one place, that more than justified shifting all the logic to the client for complex apps.
I don't know about you, but "cookies, querystring parameters, server-side sessions, and form submissions" to me are an order of magnitude simpler, though dated and not very flexible, than any modern JS client-side state and persistency layer.
Form submissions are brutally bad the moment a back button comes in. I remember so many "The client used the back button in a multi-page form and part of the form disappeared for them" bugs.
Yeah. I remember dozens of edge cases that involved errors and back buttons and flash scope and on at least one occasion, jquery plugins.
Or back buttons and CSRF tokens and flash scope...
Or, let's talk about a common use case. Someone starts filling in a form, and then they need to look at another page to get more information. (This other page may take too long to load and isn't worth putting in the workflow, or placing the information in both spots was cut from scope.) So, they go out to another page, then back, and are flustered because they were partway through the work.
So, if you want this to work, you're going to need state management in the client anyway. (Usually using SessionStorage these days, I'd presume?) So, then, we've already done part of the work for state management. You are then playing the "which is right, the server or the client" game.
You accumulate enough edge cases and UX tweaks, and you're halfway down the SPA requirements anyway.
Now, hopefully Hotwire will solve a large number of these problems. I'm going to play with it, but the SPA approaches have solved so many of the edge cases via code and patterns.
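For what it's worth, the SessionStorage approach mentioned above can stay pretty small; here's a hedged sketch where the form id and storage key are invented for illustration:

```typescript
// Hypothetical sketch: mirror form fields into sessionStorage so a detour to
// another page doesn't lose them. Ids and keys are made up.
document.addEventListener("DOMContentLoaded", () => {
  const form = document.querySelector<HTMLFormElement>("#order-form");
  if (!form) return;

  // Restore anything saved from a previous visit to this page.
  const saved = sessionStorage.getItem("order-form");
  if (saved) {
    for (const [name, value] of Object.entries(JSON.parse(saved) as Record<string, string>)) {
      const field = form.elements.namedItem(name);
      if (field instanceof HTMLInputElement) field.value = value;
    }
  }

  // Save on every edit; the server remains the source of truth on submit.
  form.addEventListener("input", () => {
    const data = Object.fromEntries(new FormData(form)) as Record<string, string>;
    sessionStorage.setItem("order-form", JSON.stringify(data));
  });
});
```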
I don't think app complexity and sophistication grew that much. Most of the problems common apps are solving can be dealt with using standard CRUD-like interfaces and old, boring tech, and it works just fine.
I think what drives this crazy train of overengineered solutions, of SPAs and K8s for hosting a single static page, is the deep separation of engineers from the actual business problems and the people they are trying to help. When all you have are tickets in Jira or Trello, and you don't know why you should do them or whether they actually benefit someone, it's natural to invent non-existent tech problems which are suddenly interesting to solve. That is natural for curious engineers and builders. Then mix in the 1% of big apps and companies which actually do have these tech problems and have to solve them, and everybody just wants to be like them and starts cargo culting.
Agree. The transformation of software work into ever-smaller and more specialised 'ticketing' is as much a result of managerialism encroaching as anything else. Basecamp have this bit in their Shape Up handbook about responsibility and how it affects their approach to software:
----------
Making teams responsible
Third, we give full responsibility to a small integrated team of designers and programmers. They define their own tasks, make adjustments to the scope, and work together to build vertical slices of the product one at a time. This is completely different from other methodologies, where managers chop up the work and programmers act like ticket-takers.
Together, these concepts form a virtuous circle. When teams are more autonomous, senior people can spend less time managing them. With less time spent on management, senior people can shape up better projects. When projects are better shaped, teams have clearer boundaries and so can work more autonomously.
I recently wrote a SPA (in React) that, in my opinion, would have been better suited as a server-side rendered site with a little vanilla js sprinkled on top. In terms of both performance and development effort.
The reason? The other part of the product is an app, which is written in React Native, so this kept a similar tech stack. The server component is node, for the same reason. And the app is React Native in order to be cross-platform. We have ended up sharing very little code between the two, but using the same tech everywhere has been nice, in a small org where everyone does everything.
As a web developer 20+ years, I'm just happy to see how much job security I have since none of you are smart enough to manage a React app and Webpack config.
Products like Slack, Figma and VSCode will not benefit from this kind of technology. Get off of your old school horse. Not all websites are just websites. Performance is not determined solely by load time.
I think the complexity/level is too high. You could easily do the chat from the example video in vanilla JavaScript with a Node.js server and WebSockets; that would also be faster and smaller.
The video was however a great tutorial, although a bit too fast. (I had to pause and replay a few times). But it did not sell me on the concept.
A simpler approach would be to send a (HEAD) request to the server every 5 seconds asking if the source/HTML/state has changed. Then do a sophisticated diff/replace of the DOM (like in React, but without React), making the smallest possible change to the DOM (only change what was updated), e.g. no full page refresh, nor iframes (and also send a diff over the wire, not the full page).
If you made the app fully functional/pure you would then be able to cache many states (input->output), predict them on the client, and prefetch; a link click, for example, would instantly apply the DOM diff and update the page with the new content. For example, in a tic-tac-toe game you could pre-render all states and have the client pre-fetch them.
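A simplified sketch of that polling idea, assuming the server sends an ETag that changes when the page's state changes; the endpoint, the interval, and the whole-body swap are illustrative, and a real version would diff node-by-node as described rather than replacing the container:

```typescript
// Hypothetical polling loop: a HEAD request checks the ETag, and only a change
// triggers a re-fetch. Replacing body.innerHTML is a crude stand-in for the
// minimal DOM diff the comment actually proposes.
let lastEtag: string | null = null;

async function pollForChanges(url: string): Promise<void> {
  const head = await fetch(url, { method: "HEAD" });
  const etag = head.headers.get("ETag");
  if (etag && etag === lastEtag) return; // nothing changed, skip the download

  lastEtag = etag;
  const res = await fetch(url, { headers: { Accept: "text/html" } });
  const doc = new DOMParser().parseFromString(await res.text(), "text/html");
  document.body.innerHTML = doc.body.innerHTML;
}

setInterval(() => void pollForChanges(window.location.pathname), 5000);
```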
Likely an unpopular opinion. But anyway. I think the presented thingy is fundamentally wrong.
It's just another attempt to put the square peg in a round hole.
The point is: "The web" is not, and never was, a framework for application development! It's the wrong tool. No matter how hard you try.
You want to build applications? Use tools that were built from the ground up to support application development.
On mobile people realized mostly by now that "native" is the way to go. Because native apps are superior along any dimension compared to "web-apps". The same is of course true on the desktop.
Mature and full-featured technology for building "native" cross-platform desktop applications has existed forever. Just have a look at great tools like IntelliJ IDEA or Bitwig Studio, for example.
Hotwire on the other hand is just another try to misuse "the web" as an application development platform. It's closer to what the web actually is (namely HTML pages) but it's still wrong on the fundamental level (as it tries to build an app framework on top of something that was meant to be only a hyper-text system).
Let's see how long it'll take until people realize this truth - and we're really full circle.
Best of luck getting paying users to install each individual desktop application, as opposed to installing a browser and then navigating to multiple applications without any additional desktop/native work.
I guess there might be space for something like Steam, but for business apps, but I just don't see it. The argument "This isn't what the web was made for" is kind of a cowpath argument. The web is being used to (among other things) browse applications. Whether or not that's what anyone planned is irrelevant. That's what's happening right now.
I think it's fair to suggest paths forward that both accommodate users and also bring sanity, but tagging every attempt at progressing applications-in-browser as "Square peg, round hole, need native app" just totally dismisses most of the users these applications are trying to capture.
People are installing apps for "everything" on mobile. Even for stuff where a web page exists and is fully sufficient.
The rest is just: "But with a big enough hammer we get the square peg into the round hole, so what's wrong with that? People are doing that the whole time. Seems fine".
Right, and when there is a viable app store that might be possible for desktop environments (the exact reason I brought up steam). Until then, though, desktop apps are just a unanimous negative most of the time (from a "get users to actually use our software" perspective).
Edit: I am speaking from the experience of trying to build a desktop app. I think people suggesting "Just build a desktop app" have no idea the amount of feedback that gets received to the tune of "Why do I have to download an app?". It's staggering.
I can just repeat: That's a problem with the specific manner of software distribution on some operating systems, and not a general argument in favor of web-apps.
Yeah, it doesn't make sense to me. Consider this from the developer's point of view: every single developer will tell you they'd rather write JavaScript and some HTML than a GTK application in C, especially when it'll work on every platform except maybe Plan 9.
For the vast majority of developers, C/C++ native desktop applications aren't a great idea. For the vast majority of businesses, C/C++ native desktop applications aren't a great idea.
If Qt were going to become ___the tool___ for desktop dev, it would've done it by now.
So what? Modern web browsers turned out to be a great application runtime and the Web turned out to be a great delivery platform. Nowhere else in software engineering would you see so much resistance to a perfectly good tool.
Maybe the problem is that it's too good: people and organizations end up producing "web applications" instead of "web sites" just because the technology is so readily available and so widespread.
If there's one thing to complain about, it's that sites don't gracefully degrade when JS is disabled. I don't care that your browser game doesn't work without JS, but I do care quite a bit that I can't read blogs, news websites, etc.
If the status quo would work as great as alleged we wouldn't discuss the article as we just do... ;-)
Modern browsers are completely crazy bloat-ware. Bigger by lines of code than most operating systems! Things like the JVM become small and lightweight compared to a modern browser.
And still, it's not a "great application runtime" - as people still haven't even figured out how to use it "properly" as one (as, again, that's the point of the article we're discussing).
That people everywhere do irrational stuff all the time is also no proof that any of that stuff makes sense at all. Things don't become smart only because there are more people out there doing the same thing. Actually, the masses usually aren't doing the smartest things at all. So that's imho a quite bad benchmark for anything anyway. ;-)
I can't remember the last time I opened a native email or word processing application on desktop. I much prefer the web experience. What makes it the wrong tool? What's the point of these silly rules?
You want to build applications? Use whatever freakin tool or platform your users will adopt. Don't be constrained by pointless prescriptivism.
But what about users who actually prefer some web applications, like the user you responded to? To them, the web application is the hammer. I mean really who gets to decide which is better besides the user?
For the user the underlying tech should be transparent. Users just want applications with a common "look & feel". How that's done doesn't matter to them in the end.
That's the lesson we've learned by now in the mobile space.
But for the developers of those applications the how makes a big difference: building apps with the right "look & feel" is much easier when using a tech stack that was purposely created for exactly that purpose - instead of hacking something into existence by misusing a technology that was created in the first place to serve completely different needs. Because of this fact people have mostly stopped trying to build "HTML apps" on mobile.
The remaining question is why this wisdom isn't taken to its logical conclusion. Which is, of course, that it applies exactly the same to desktop applications!
That's actually the last step on our journey before going full circle. As most other comments point out we're straight on the way to "the past", "what's old is new again". What I'm pointing out is that what we're seeing here is just an intermediate step before the full realization of that fact.
So if you want to build the next big social app, sure you build a native mobile app. But then what, you build a _desktop_ app, because that's better on certain technical dimensions than a webapp? Should facebook.com have simply offered Windows and Mac installers instead?
I hate JavaScript as much as the next person, but when I had to step into .erb templates and regress back into JavaScript-as-<script>-blocks in HTML, holy crap did I realize how far the FE landscape has come. Watching the Turbo demo, I cannot fathom going back to ERB.
It seems like DHH is just trying to make relevant tooling without writing Javascript, which I guess is fair, but given all of the advancements in all of the other tech, this really just does not seem like a good idea to me.
The downside is that this makes it really easy for developers to avoid making an actual API. The turbo frames _are_ the API. Great to bootstrap, but miserable if you ever want to implement a native mobile app (or any other client) on top of the same codebase.
1. There's a library to add this into native apps fast.
2. In the case of Rails (Hotwire is not actually tied to Rails at all) it's really easy to turn any endpoint into a RESTful API: just use respond_to to send JSON or anything else being requested.
I'm currently more interested in processing content as JSON serializable abstract syntax trees, whether it comes from Markdown, HTML, or something else. The popular Markdown renderer Remark (https://remark.js.org/) supports this. It's based on Unified.js which has markdown abstract syntax trees (mdast) and HTML abstract syntax trees (hast). https://unifiedjs.com/
I didn't see HTML, just Ruby bindings. Anyone can make easy bindings on the server, but it costs flexibility on components - imagine deeply nested levels of customized components and keeping them in sync.
As a non web developer, writing less JS appeals to me, and I've found HTMX[1] to be simple to use. I'll add that the HTMX website/docs are much more approachable than this site.
We stopped doing this ten years ago because it tightly coupled the front- and back-ends and made it almost impossible to insert an API for mobile apps after the fact.
This is how I used to build web apps 20 years ago! :)
Render all HTML on the server side using a lightweight templating engine, make page loads fast and make all interactions reload the current page avoiding all client side JavaScript for validation and logic. (except for hotkeys for quick nav)
I guess this is similar to htmx.org (formerly intercooler.js), which I like a lot! Will learn more about it later, and I'm glad a similar idea is getting more popular!
Is the basic idea that you only transmit the new content when they click a link? Rather than a JavaScript application rendering it?
So it's effectively similar to just a normal HTML page, but without things like headers and sidebars etc. being retransmitted?
I guess I am old but I don't understand why this requires a multi-million dollar marketing effort and all of the other stuff they invented on top of it.
15 years ago RoR went up against the bloated, enterprisey Java frameworks, and after being dormant for some years, now it's taking on SPAs.
Time will tell --I'm not so bullish on Hotwire-- but one has to admire figures like DHH who can almost single-handedly change the status quo of the industry.
The crucial missing piece in this and LiveView, Livewire, etc. is optimistic handling of client-side updates. Having to wait a round trip just to visually add a todo-list item on the client is not acceptable as standard behaviour.
Afaik Meteor is the only major full-stack framework handling this. Any other takers?
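For reference, the optimistic pattern being asked for usually looks something like the sketch below (element ids and the endpoint are invented): render the item immediately, then confirm or roll back once the server replies.

```typescript
// Hypothetical optimistic todo add: show it now, reconcile with the server later.
async function addTodoOptimistically(title: string): Promise<void> {
  const list = document.getElementById("todo-list");
  if (!list) return;

  const item = document.createElement("li");
  item.textContent = title;
  item.classList.add("pending"); // visible immediately, styled as unconfirmed
  list.appendChild(item);

  try {
    const res = await fetch("/todos", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ title }),
    });
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    item.classList.remove("pending"); // confirmed by the server
  } catch {
    item.remove(); // roll back on failure
  }
}
```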
This is a key point and I'd love to hear someone's thoughts on how you build an app with this framework that's snappy even on a high-latency connection.
I prefer to send updates to my web application as bitmaps. I do all the rendering of the HTML on the server to an image and just loop over the pixels and send a pixel if something changes from the last update. I call these updates “frames” and the client can even gracefully degrade — it can request a certain frame rate, or I can only send the luminosity value of a pixel, or randomly sample pixels to reduce detail for poor connections.
After playing for some hours with the new Turbo on a side project (with Django as backend), it mostly works fine.
One thing that is kind of broken out of the box is form submissions: a common pattern with server-side rendering is to re-render the form with validation errors on failure, and otherwise redirect to wherever.
The workaround here I think is to use turbo-streams and just re-render the invalid form HTML snippet rather than the whole page. While this is probably ok for most cases, I have some forms (3rd party library stuff) where this is going to be more work than I'd like, and due to a bug in the beta release you can't override the default form submission behavior - Turbo throws an error if a form POST returns anything other than a redirect. So it's probably not quite ready for production yet.