Replaced JS with HTTP request (github.com)
66 points by jimaek | 2014-02-07 | 29 comments




This defeats the purpose of defer and async [1]

[1] https://developer.mozilla.org/en-US/docs/Web/HTML/Element/sc...


How so? It's the same as the old HTML, but with fewer round-trips.

defer and async are about eliminating round-trip latency by letting the browser open multiple simultaneous HTTP connections. Multiple TCP streams are sometimes faster than one. (I say sometimes because negotiating a new connection has overhead.)
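To make that concrete, here's a rough sketch of the two approaches; the file names and the combined URL are made up for illustration:

  <!-- Several deferred scripts: fetched in parallel without blocking
       HTML parsing, at the cost of one request per file. -->
  <script defer src="/js/jquery.min.js"></script>
  <script defer src="/js/angular.min.js"></script>
  <script defer src="/js/app.min.js"></script>

  <!-- One combined file: a single round trip, but the libraries can no
       longer be fetched or cached independently. -->
  <script src="/js/combined.min.js"></script>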

But then they are not loaded in sync. That's not the purpose here; it works for some situations.

Doesn't this defeat one of the purposes of JavaScript CDNs, namely that the user may already have the exact URL cached on their machine?

I'd say it's more like a tradeoff than defeating the purpose.

And not as much of a tradeoff if a lot of nearby visitors have also visited the same site, or users visit it frequently.


I think that depends on the user. Some use CDNs to improve performance via already-cached files, others to offload traffic and benefit from the CDN's geo-balanced performance.

Personally, I don't think a user will have anything cached other than jQuery from Google, which in this case I removed to follow the rule of eating my own dogfood.


It kind of depends on statistics; I'd like to see some figures about CDN cache hits like that. I know the theory is sound, but I've also gathered that one usually only gets around 20% cache hits on websites with caching enabled, or something like that.

Ah, the mythical HTTP batch GET.

Last time I tried this, I ran into two problems: 1) unbounded URL length breaking down in old browsers, routers, etc., and 2) hobbled caching. (And also going against the grain of REST.)
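For context, the kind of batch GET being discussed collapses several script requests into one URL that names all the files; the grouping syntax below is illustrative, not any CDN's documented API:

  <!-- Before: one request per library -->
  <script src="//cdn.example.com/jquery/2.1.0/jquery.min.js"></script>
  <script src="//cdn.example.com/angularjs/1.2.10/angular.min.js"></script>

  <!-- After: one request for a server-side concatenation of both -->
  <script src="//cdn.example.com/g/jquery@2.1.0,angularjs@1.2.10"></script>

Problem 1) follows directly from this: every extra file makes the URL longer.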

I'd be interested if anyone has actually done something like this successfully. Did you have the issues I did? Was it worth it?


I don't think getting "batch" resources is against REST; Fielding himself stated that it's not a problem as long as the URL for the batch is consistent -- the batches are just separate resources, although they might have special semantics to the client.

In my opinion Google Hosted Libraries[1] are the best way to go if you're serving "big" commonly used JavaScript libraries to folks.

One of the major upsides is that it's so heavily used that a user probably already has it cached in their browser. At least that's the idea; I'm not sure of the actual numbers.

[1]: https://developers.google.com/speed/libraries/devguide
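For reference, pulling a library from there is just a script tag pointing at the ajax.googleapis.com host; the jQuery version below is only an example:

  <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js"></script>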


And so we drive another nail into the "decentralized" web's coffin and give Google even more data.

Hosting a 32 KB library yourself shouldn't be that hard in the days of 50 Mbps+ LTE mobile internet.


I agree with this sentiment.

For development purposes ("grab the latest version from Google because it's convenient") I'd go with Google; for a production deployment, not so much.


I agree with you, but to pull a number out of thin air: I would honestly be surprised if 50 Mbps+ LTE mobile internet is even 1% of the connected internet.

latency?

The link title is missing the operative word "single". It's about changing the number of requests, not the type of request.

I know. I'm 100% sure I copy-pasted the title from GitHub, but it somehow changed by itself.

HN changes titles to match the source title

But the source title is in fact more precise this time: "Replaced js with 1 single HTTP request".

The HN mods are well known to munge titles pretty much at random.

It's really not clear what, if anything, can be done to stop them.


Great, but I'll take parallel files over SPDY instead, any day.

I'm very wary of loading JS from random CDNs. In my opinion, the negative aspects far outweigh the benefits:

  - The CDN gets to decide *what* code is delivered to *which* users. Could be a prime target for, say, another FERRETCANNON.
  - If the CDN is compromised, so is your site.
  - If an attacker on a local network manages to inject poisoned cache data into requests for said CDN, your site is compromised.
  - All of your visitors are disclosed to the CDN owner.
  - If the CDN goes down, your site does too. Note that the inverse doesn't apply: the CDN's superior availability has no positive effect on your site. (A local fallback, sketched below, mitigates this point, but none of the others.)
  - Loading from another host may cause an unnecessary DNS lookup and will cause an unnecessary TLS connection.
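On the availability point, a common mitigation (it does nothing for the other items) is to fall back to a self-hosted copy when the CDN fails to deliver; a minimal sketch, assuming jQuery and a local path of your choosing:

  <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js"></script>
  <script>
    // If the CDN copy failed to load, window.jQuery is undefined,
    // so write out a script tag pointing at the self-hosted copy.
    window.jQuery || document.write('<script src="/js/jquery-1.11.0.min.js"><\/script>');
  </script>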

What is FERRETCANNON?


I'll take stupid NSA codenames[1] for 10 points, please, Alex.

[1] http://www.theatlantic.com/technology/archive/2013/10/how-th...


What would be cool is <script sha="2afdb28d" name="angular.js" version="1.2.10" src="xxx"></script>

This would mean that the browser can essentially cache the exact version of the script from any source, verify it with a hash, and still have a fallback URL to download it from.


But then the game would be to find a hash collision with malicious code :)

It should definitely require the full hash. And to make it generic...

<script hash="sha-1/fa26be19de6bff93f70bc2308434e4a440bbad02" name="angular.js@1.2.10" src="xxx"></script>

The name attribute would be purely aesthetic, so there's no reason for a version attribute. If SHA-1 is good enough for git's content-addressed storage, it's good enough for browsers.
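For comparison, this is essentially the shape the idea later took in browsers as Subresource Integrity, where the hash lives in an integrity attribute; the digest below is a placeholder, not a real hash of any file:

  <script src="https://cdn.example.com/angular/1.2.10/angular.min.js"
          integrity="sha384-BASE64_DIGEST_PLACEHOLDER"
          crossorigin="anonymous"></script>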

