Stop writing stateful HTML (www.colorglare.com)
40 points by enyo | 2014-11-24 | 86 comments




Is it just me, or is having the text "Due to the fact that JavaScript’s browser monopoly is hopefully coming to an end very soon" link to Dart kind of contradictory, since Dart just compiles to JS anyway?

Compiling to JavaScript is the way Dart solves its chicken-or-egg problem.

Chrome ships a Dart VM and hopefully Mozilla, Apple and Microsoft will follow suit one day.


Ahhh that's right, I forgot about the Dart VM. Thanks

Chrome doesn’t ship a Dart VM does it? The closest you can get currently is a Dart-enabled special build of Chromium I believe.

> Chrome ships a Dart VM

Google ships a special Dart-enabled version of Chromium (Dartium) with the Dart SDK. Chrome, however, does not currently ship (with) a Dart VM.


I stand corrected.

Native support was planned for Chrome. The other browsers will never implement it, for strategic reasons. [1] So no, I don't think it's going to happen.

That alone discredited the article a bit. How can one be so confident of something that is highly improbable?

[1]: https://en.wikipedia.org/wiki/Dart_%28programming_language%2...


In all fairness, I stated "hopefully". I agree that it is more wishful thinking than reality. It just seemed strange to me to call "BrowserScript" JavaScript, since it should not be limited to JavaScript in my opinion.

I will probably remove that statement though, since a lot of people seem to have extremely strong feelings about that, and are distracted from what this article is actually about.


Haha I just noticed TFA in my RSS reader 8 days late, and have been thoroughly confused by the majority of comments here.

I agree with the sentiment, but in fact Dart intends to replace JavaScript as a language. Dartium (based on Chromium) can run Dart scripts natively. Why it isn't included in Chromium or Chrome, I don't know, but they claim Dart isn't finished yet, so it would be too early. But if Google doesn't in some form exercise its dominance, then people won't be switching from JavaScript to Dart any time soon.

>Due to the fact that JavaScript’s browser monopoly is hopefully coming to an end very soon..

Aaand that's where I stopped reading. Try not to kick your articles off with dogmatic statements like this and, if you have to, at least make sure they're factual.


You have a problem with my username? ;)

EDIT: OK, I see, he misspelled it...


If I need to explain a joke, it probably wasn't very funny. I'll walk away from this one. This is just to say it was nothing to do with your username.

Just for info: I understood your joke, I just made my own on top of it. :)

You got farther than me. My entire browser window was filled with a headline and a background image, so I just moved on.

I'm glad I wasn't the only one that needed a few seconds to understand what was going on. I didn't bother reading it.

This full-page image + small amount of text is a fad. It will pass, just as scrolling text went away in the 90s and "intro" pages went away in the 00s (god I feel old).

In this case the background image adds absolutely nothing to the user experience, nor does it convey what the content is about. If anything, it's quite an example of bad design and interferes with the user for no good reason! The "web developer" has added it just because he could, not because there is any good, thought-out reason to use this "image page monstrosity".


I'm sorry you feel that way.

This is my personal website. I also use it to showcase my photographs. I agree that they don't add anything to the content.


Then it's OK :)

It's just funny seeing the same mistakes being repeated and various "trends" employed by web developers for no good reason other than "because we can, and it looks cool". But that's OK for personal projects, since you are exploring what can be done.


The people who started this comment thread claim to have skipped your content so they could critique everything else. I wouldn't take it too seriously. I found your content cleanly presented and interesting.

Thank you.

I stopped reading when he started calling JavaScript "BrowserScript". The entire article was in a huge font; on my 1080p screen it was difficult to read.

Technically, he wasn't calling JavaScript "BrowserScript", but instead using "BrowserScript" as overly-clever newly-minted jargon for client-side code in general, independent of language. Still, both that usage and the stated reason for it were pretty poorly conceived and distracting.

"overly-clever"? Not sure about that... BrowserScript seemed to fit. I didn't expect to get whipped that much over it. Maybe I'll just change it back to JavaScript and add a note that this can be applied to any other browser side language. I don't quite see what the big fuss is though, to be honest.

What's the problem? Wouldn't it be better to have other choices? I think a monopoly is never a good thing, especially for software.

It's not a question of whether it'd be "better" or not, it's a question of whether it's actually going to happen.

Do you _honestly_ believe Dart is going to replace or threaten Javascript "very soon"? Enough to be willing to bet on that?


I thought it was more a hope than a speculation.

The problem is confusing wishes with facts.

Over the past 10 years I have seen many statements that "technology X will soon be dead". Most of those Xs are still alive and in good shape.

You are very mistaken if you think that I wish for JavaScript to die. It is neither my wish nor my forecast. I just hope that other languages will make it into the browser at some point.

Other languages have made it into the browser, in much the same way that other languages than raw machine code have made it into computers in general.

JS is the common compilation target of the web, and also happens to be a language that many programmers find tolerable enough to use directly.

As nice as it is to have multiple options of source languages for the web, there's a lot less value in having multiple options for compilation targets, as well as greater costs (particularly given how tied execution engines are to browser engines, so that you really need a separate engine for each compilation target for each browser engine). Having a reliable, good-enough cross-browser target language that is the intersection of the JS supported by each browser engine is easier than doing the same thing for multiple different target languages.

A new target language is going to have to offer really compelling improvements in execution to make sense to compete with JS in that role on the web.


In my opinion, Dart is the first browser-side language that has any possibility of changing this monopoly. I don't think that it's a good thing that JavaScript is the only language you get in the browser, but that has nothing to do with dogma (I have coded a lot in JavaScript, and it has its place).

As stated below, I will probably remove this Introduction, since it seems to distract from what this article is about. I hope you consider reading the article anyway.


When you put "Google" in a sentence with Dart, "Monopoly" starts to sound different. I read your sentence as :

"Google is changing the monopoly of JavaScript by implementing the Dart VM in their own browser."

As far as I know, the community doesn't care much about Dart [1] and is concentrating on further enhancement of ECMAScript.

[1] : http://en.wikipedia.org/wiki/Dart_%28programming_language%29...




Glad it's happening, tbh. I hate JS and so do many others. It's a fact that JS is slowing down the web.

What about screen readers and all those device that support people with certain disabilities?

They may be able to access the static version, but what about their participation? What if they need to log in to get access to certain information?

I just learned that certain screen readers have some support of JS. But does it work in general?


My understanding is also that if you make your site accessible to screen readers, and validate it with accessibility testing, then you will have good search indexing as well.

I agree the static pages would be faster, but I feel like there is dynamic content that should always be rendered. Login pages for example are often linked in a dynamic menu.


Sounds like a solution looking for a problem:

"Create an AJAX request to the location (eg.: /about.html). Change the URL in the browser with the history API (this way, the back and forward buttons still work in the browser). Show a loading animation that the content is now being loaded. When the content is loaded, parse it to extract the contents of #main (you can help yourself there by adding markers in your HTML) and replace the content of your current #main section with the one you just loaded. Make sure that you handle all the links in your new #main content so they will act the same and fire off any BrowserScript required for the page that just loaded."

Aside from usability issues (what if the user has JavaScript disabled?), I don't really get how search engine spiders will be able to index the web site.
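For reference, the quoted steps can be sketched in plain JavaScript. This is a minimal sketch; `extractMain` and `loadPage` are illustrative names, not from the article, and the `<main id="main">` markup is an assumption.

```javascript
// Pull the contents of the #main section out of a full HTML document.
// A real implementation would use DOMParser; a regex keeps the sketch short.
function extractMain(html) {
  const match = html.match(/<main id="main">([\s\S]*?)<\/main>/);
  return match ? match[1] : null;
}

// In a browser: fetch the target URL, swap out #main, and update the
// address bar via the history API so back/forward still work.
function loadPage(url) {
  return fetch(url)
    .then((res) => res.text())
    .then((html) => {
      document.querySelector('#main').innerHTML = extractMain(html);
      history.pushState({}, '', url); // keep back/forward working
    });
}
```

A `popstate` listener would additionally be needed so the back button re-runs `loadPage` for the restored URL.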


I didn't quite get this either. It seems that a developer would also have to do extra parsing on top to determine which links are external and which are internal. The only impression I got is that he just created a more convoluted way of linking between documents.

Users without JavaScript (sorry, BrowserScript!) will load links as normal.

All the endpoints serve static html. I have no idea what the point in ajaxing it is (so you can stick in a pointless animation I guess). However it would degrade nicely, and search engines would be unaffected


Loading the content resources with AJAX allows you to preserve the current UI state. Depending on your website and your needs, this step might be completely unnecessary.

It becomes necessary when you have elements that you want to persist on your page. That might be an audio player or the state of a menu that you want to avoid painting.

It's just about a better UX for your users.


> Due to the fact that JavaScript’s browser monopoly is hopefully coming to an end...

If this is so, the author might want to view his page in a NoScript-enabled browser. Why should the front image show itself through JavaScript if JS is on its way to meet the dodo?

Also, gray text on white background?


I'm struggling to understand why. If you have a complex app, the complexity will be stored somewhere - it might be in JavaScript, CSS, HTML, PHP/Ruby/Python, YAML config files, the database, or some combination of those things. But it will always be somewhere.

Sometimes, depending on the team and the project, it will make sense to store at least some of that complexity in the HTML.


The author specifically says the post is not about complex apps, but "typical websites".

Oh, you're right - sorry I actually didn't read that.

But you still hit on a useful point: that is, many "solutions" intended to improve design simply move the problem around. I first noticed this when Hibernate became all the rage, as well as some webapp frameworks.

They were supposed to be awesome because they didn't require code. Looking closer, they'd just replaced code with declarative config files that spread a given piece of functionality across multiple files/formats that were sometimes harder to debug.

Each layer implemented comes with additional complexity, not less. The real solution always seems to come down to simplicity and minimalism. To my mind, it's always seemed that limiting the number of technologies employed is a useful goal.


But he does talk about apps with authentication, rating systems, shopping carts etc. These in my opinion represent a complex app.

I have yet to see an app where putting all that complexity into each server side request simplifies anything...

Usually, it forces all kinds of artificial constraints on the app, such as atomic rating counts when a looser, more cache-friendly requirement would suffice, etc.


He's right. The silliest thing about how many sites are designed is that the entire HTML is re-rendered differently for each user (so that the username can be displayed somewhere on the page) for identical content.

Doing this is a horrible waste of resources.


Isn't making extra HTTP requests just to get the username even more wasteful?

You can get several things in one request if you want to. But yeah, it's a tradeoff. The main advantage seems to be cacheability, though that can get less important if you cache e.g. template fragments instead of entire pages.

Template caching is still server side which means you are using an application process to do the same exact trivial thing over and over which could have been offloaded to a cache or to the client.

You always need to do the server-side work, no matter where the data is rendered or cached later. Like I said, it's a tradeoff.

Yes, yes it is. For a number of reasons, but the most overlooked one is that server CPU cycles and client CPU cycles are not equally expensive.

No, not with pipelining or SPDY or HTTP2.

Also

No, not with any kind of intermediate caching (assuming not all resources have identical cache lifetimes)


> Doing this is a horrible waste of resources.

It's not really: we don't have a proper caching infrastructure anyway to conserve both CPU cycles and bandwidth by allowing the whole page to be cached and just minor fragments to be loaded dynamically. Back in the day, when most individuals/offices/ISPs would have some sort of cascaded Squid setup, it was more helpful.

As for caching in the client itself: possible and useful (provided the user will load the page many times per day), but not helping with the "re-rendering for each user" part as far as bandwidth is concerned.

The rendering effort itself is largely a non-issue. If your web framework is inefficient and you run out of juice on the server side, you can simply cache the static parts of your page with memcached or even use some setup with ESI on a load balancer.


Rendering is a non-issue? Tell that to RapGenius :)

This is what turbolinks does: https://github.com/rails/turbolinks. Nothing new, they have been doing it for 2+ years now.

Turbolinks is a 10% solution, not a bad one in all cases, but one that lets the user avoid most of the thinking needed to build a performant site.

There's little to no real content here. Progressive enhancement is not dead, and I hope it never will be. It's the Right Way to do things. Yes, even in 2014.

Oh man, don't roll your own linking, caching, and history. Use what you get for free from the browser. You have better things to spend your time with.

The way to achieve this "nirvana" is really quite straightforward: render all items on the server. No, not as a page view, but as the individual items making up your page. Now, when people visit "mysite.com/index.php?search=&page=1" for the first time, the server knows because the request arrives as a GET, so it renders the whole page using the individual items (and the JS on the user's side internalizes this state). When our user now navigates to "mysite.com/index.php?search=&page=2", the JS forwards the request to the server via a POST. The server then responds with ONLY the changed data, which the client side proceeds to insert/update where needed. This is the short version, but this is how I do these things at the moment and it works.
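The full-page-vs-fragment split described above can be sketched as a pure rendering function. This is a sketch under assumptions: `renderItems` and `renderPage` are illustrative names, and the markup is simplified to a list.

```javascript
// Render the individual items that make up the page.
function renderItems(items) {
  return items.map((item) => `<li>${item}</li>`).join('');
}

// First visit (GET): wrap the items in the full page shell.
// Client-side navigation (POST): return only the changed item markup,
// which the client inserts into the existing page.
function renderPage(items, partial) {
  const body = renderItems(items);
  return partial ? body : `<html><body><ul>${body}</ul></body></html>`;
}
```

On the server, the `partial` flag would be derived from the request method or a header, so the same render path serves both cases.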

What data are you POSTing?

Server side rendering has significant upsides but I feel one of the biggest downsides is consistently ignored when discussing it, namely that when you render (parts of) pages you burn expensive server side CPU cycles rather than the free client side cycles which in most cases noticeably affects hosting/scaling cost.

Those free client-side cycles have a hidden cost - slow rendering times, especially on mobile. If you've done your work right, server-side is scalable.

It's not so much a scalability issue as a cost-per-user issue. Obviously client rendering performance (or lack thereof) is a concern and if server side rendering solves that for a manageable increase in cost that's perfectly fine but that cost has to be part of the conversation.

OK, I get that.

I think that Javascript can do some awesome things client-side (and I have used them to do that), but when you leave the server, you lose a lot of predictability and that can ruin the user experience.

I worry that if we push too much onto the client, then we're asking them to do all the work while we sit back and take advantage of them, for marginal benefit to the user.


"free client side cycles"

For certain definitions of free.

I've found that sites which abuse this tend to be much less responsive and can even cause my laptop fan to start spinning like crazy.


I agree. I was referring to the actual cost of the cycles, but there are many situations where you need to find a balance between cost and a fluent user experience. My point was primarily that the consideration that server-side resources are more expensive should be part of the evaluation.

This article is quite confusing, I'm really not sure which point the author tries to make.

Anyway, reading on to the implementation section, this stuff sounds pretty standard, and people have been doing this for many years. Some implementation details changed (pushState, hashchange, etc), but this approach is ancient.

I mean, if you just want to describe a simple technique, there's no need to write a long, vague and controversial introduction.


This full-page-jumbotron-without-any-indication-you-should-scroll-down thing (is there a proper name for that) has to stop.

> without-any-indication-you-should-scroll-down

Doesn't the scroll bar serve as an indication?


OSX doesn't always show it (depends on your settings). That being said, it seems like common sense to try scrolling on a webpage (especially coming from HN, where these sites are a dime a dozen).

It seems like common sense that a blog entry should show me substantive content immediately. A cover page is sensible on a book, because of both the size and medium, but it's ludicrous on a short essay on the web, and what it says to me is that the author doesn't respect the reader.

Cover pages are common in magazine articles too, why would they be disrespectful?

I should have stopped reading when the author linked to dart.

Regarding SEO/performance for serving static HTML, many frontend frameworks provide a mechanism for rendering content on the server. For example, React.renderComponentToStaticMarkup (http://facebook.github.io/react/docs/top-level-api.html#reac...) can be used to render each static page on the server before sending it over the wire.

Also, I'm not sure how you reduce complexity by coupling view logic with backend datastore logic. In my experience it's far more maintainable to separate the backend and frontend, communicating between them via a REST API/web sockets. That way, everything is independently unit-testable and can be integration tested as well.


I really hope this isn't the day a new horrible term is born. "Stateful HTML"? It looks to me as if the writer is trying to talk about the "isomorphic JavaScript" principles but from a different perspective (the HTML). Am I wrong?

tl;dr use pjax [1]

[1] http://pjax.herokuapp.com/


I have never really looked into dart before, but i'm curious. Has anyone here used it, what are people's thoughts about it?

People have been trying to offload dynamic page generation to the client (browser) for as long as JS has existed. It became less attractive when search engines became so dominant and important for websites' success, because they couldn't handle it. Nowadays Google does execute some JS, but in general, it's better not to do this yet.

What the hell is BrowserScript?

Having "stateless" HTML - meaning the base HTML document is the same for all visitors, across hours of time - can have significant advantages for performance and scalability. Your HTML is now cacheable, and you can host it on a dynamic CDN like Cloudflare, and your TTFB can be really fast. And it will also remain fast if your site gets hugged to death by a spike in traffic.

The downside of course is that rendering different HTML based on your user, their location, cookies, user-agent, etc. is really freakin' convenient. And if you commit to static HTML, you have to do all of your customization on the client, and load real-time data with an AJAX call.

But if you want a highly-performant site, the tradeoff is worth it.
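The personalization-on-the-client approach described above can be sketched like this. It's a minimal sketch: the `/api/me` endpoint, the `#username` element, and the helper names are assumptions for illustration, not details from the comment.

```javascript
// Pure helper: what to show in the header for a given user (or null).
function greeting(user) {
  return user ? `Hi, ${user.name}` : 'Sign in';
}

// In the browser: after the cached, user-agnostic page loads, fetch the
// current user with their session cookie and fill in the personalized bits.
async function personalize() {
  const res = await fetch('/api/me', { credentials: 'include' });
  const user = res.ok ? await res.json() : null; // non-2xx: treat as anonymous
  document.querySelector('#username').textContent = greeting(user);
}
```

Because the base HTML never varies per user, the CDN can serve it with a long cache lifetime, and only the small `/api/me` response is per-user.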

