
Cached per browser, though, which is significantly different than cached per request.

Even if you're caching/serving static content efficiently, it still adds load to the server.




What? Server-rendered content is infinitely more cacheable; maybe I'm not understanding what you're saying.

In the boring old days we just did it all on the server and used the semantics of HTTP to handle caching ... it worked great.
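In case it's useful, here's a minimal sketch of what "the semantics of HTTP" buys you, using Go's net/http (the page bytes and the hash-based ETag scheme are just illustrative): set Cache-Control and an ETag, and answer If-None-Match revalidations with a bodyless 304.

    package main

    import (
        "crypto/sha256"
        "fmt"
        "net/http"
    )

    var page = []byte("<html><body>rendered on the server</body></html>")

    func handler(w http.ResponseWriter, r *http.Request) {
        // Derive a validator from the content itself (illustrative scheme).
        etag := fmt.Sprintf(`"%x"`, sha256.Sum256(page))

        // Let shared caches hold the page for 60s, then revalidate.
        w.Header().Set("Cache-Control", "public, max-age=60")
        w.Header().Set("ETag", etag)

        // A repeat visitor sends If-None-Match; answer 304 with no body.
        if r.Header.Get("If-None-Match") == etag {
            w.WriteHeader(http.StatusNotModified)
            return
        }
        w.Write(page)
    }

    func main() {
        http.HandleFunc("/", handler)
        http.ListenAndServe(":8080", nil)
    }

Caches and browsers do the rest: the origin renders once per expiry, and everything in between is a cache hit or a cheap 304.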


Static sites are trivially cacheable. Caching gets cheaper per request the higher the volume.

If you're just slinging static HTML, there's really not that big of a difference, and if it's a frequently accessed resource where a difference might be expected to matter, it will be cached regardless.

I meant in terms of server-side caching.

Er, so they rolled their own custom caching system?

Isn't caching one of the first recommended ways to improve the performance of, well, any mostly-static content site?


But for subsequent requests the content is cached.

Once it's cached, though, loading new pages or new data is trivial.

It really depends on the use case: if I'm navigating my banking website, I would prefer a longer load time up front if it made navigating between accounts really fast. If I'm checking my power bill, I'm probably only going to view one page, so I just want it loaded ASAP.


The article is about full-page caching. How's that different from serving static content?

Caching can make it really fast, and this is good if you expect your website to change often (e.g. due to comments).

Otherwise, static is 100% the way to go, if only to reduce how much code you have to deploy.


Ah, fair point. Now I see what you were trying to say.

Assuming the server has no way of determining which assets the client has cached (which, depending on the implementation, may not be the case), you're of course correct. However, after step 2 the page has already fully loaded in both cases, so step 3 doesn't really slow anything down.


I don't understand how this feature even came to be. Presumably these resources are cached (it's going to be used for static resources; for dynamic ones, you'd need to have already performed the request on the server to figure out what to send, so you'd just send the response). So what, you're saving 5 ms off the first page load? Assuming it's not already a static response, in which case again you'd just send it.

Still seems like caching is the only real issue.

The complexity of dealing with headers seems easier than the complexity of routing constantly changing URLs.
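To make the trade-off concrete, here's a sketch of both approaches in Go (the file names and routes are made up): a content-addressed URL can be cached forever, while a stable URL leans on validators and conditional requests.

    package main

    import "net/http"

    // Option A: content-addressed URL. The name changes when the file does,
    // so the response can be cached forever and never revalidated.
    func serveHashedAsset(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Cache-Control", "public, max-age=31536000, immutable")
        http.ServeFile(w, r, "./dist/app.3f2a9c.js") // hash baked into the name
    }

    // Option B: stable URL. The name never changes, so freshness rides on
    // conditional requests instead.
    func serveStableAsset(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Cache-Control", "public, no-cache") // cache, but revalidate
        http.ServeFile(w, r, "./dist/app.js") // ServeFile answers If-Modified-Since
    }

    func main() {
        http.HandleFunc("/assets/app.3f2a9c.js", serveHashedAsset)
        http.HandleFunc("/assets/app.js", serveStableAsset)
        http.ListenAndServe(":8080", nil)
    }

Option A pushes the complexity into the build step (rewriting every reference to the hashed name); option B keeps URLs stable and pays with a revalidation round trip.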


Then the server doesn't need to know about repeat visits that don't hit it, and it would be nice to maintain caching support if the page content is static.

Imagine you have a site that displays some sort of custom content when a client hits a 404 error. Even if it's just a static page, the web server still has to follow the code path and open it. Even if it caches the page internally, it still has to regularly check whether the file has been updated on disk.

The amount of time spent by the web server is fairly small, but the parent comment mentioned it in the context of an attack, so that small per-page effort multiplied by many connections adds up to a substantial load.
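A sketch of the mitigation, assuming a Go server and a made-up notfound.html: read the custom 404 page once at startup, so a flood of misses never touches the disk.

    package main

    import (
        "net/http"
        "os"
    )

    // Loaded once at startup (hypothetical file name), so serving a miss
    // costs no file open and no stat, no matter how many arrive.
    var notFoundPage, _ = os.ReadFile("notfound.html")

    func handler(w http.ResponseWriter, r *http.Request) {
        if r.URL.Path != "/" {
            w.WriteHeader(http.StatusNotFound)
            w.Write(notFoundPage)
            return
        }
        w.Write([]byte("home"))
    }

    func main() {
        http.HandleFunc("/", handler)
        http.ListenAndServe(":8080", nil)
    }

The trade-off is that updating notfound.html now needs a restart (or a file watcher), which is exactly the freshness check the in-memory copy avoids.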


This is how caching works in the real world. Everywhere. For a CMS delivering mostly static data, this is a perfectly fine way to do it.

But what happens when you need to cache the actual page, not just static resources? That's what the content-addressed resources trick doesn't address.
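For what it's worth, full-page caching is a small middleware if the page doesn't vary per user. A rough in-memory sketch in Go (no eviction, no Vary handling, just the shape of the idea):

    package main

    import (
        "net/http"
        "net/http/httptest"
        "sync"
    )

    var (
        mu    sync.Mutex
        pages = map[string][]byte{} // cache key: the request URL
    )

    // cachePage stores whole rendered responses so repeat hits skip the
    // render entirely.
    func cachePage(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            key := r.URL.String()
            mu.Lock()
            body, ok := pages[key]
            mu.Unlock()
            if !ok {
                rec := httptest.NewRecorder() // capture the downstream response
                next.ServeHTTP(rec, r)
                body = rec.Body.Bytes()
                mu.Lock()
                pages[key] = body
                mu.Unlock()
            }
            w.Write(body)
        })
    }

    func main() {
        origin := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("expensively rendered: " + r.URL.Path))
        })
        http.ListenAndServe(":8080", cachePage(origin))
    }

Real full-page caches also key on headers and cookies and need invalidation, which is where the complexity being debated here comes from.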

Yes and no: it's caching at the application level instead of at the network level with cache headers. HTTP caching is slightly more limited because the cache key is harder to tweak, so it's harder to cache paid content this way, for instance.
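A sketch of what "the cache key is harder to tweak" means, with made-up names (tierOf stands in for whatever entitlement check the site really has): at the application level you can simply mix the user's tier into the key.

    package main

    import (
        "net/http"
        "sync"
    )

    var (
        mu       sync.Mutex
        rendered = map[string][]byte{} // application cache: the key is ours to shape
    )

    // tierOf is a hypothetical entitlement lookup.
    func tierOf(r *http.Request) string {
        if _, err := r.Cookie("paid"); err == nil {
            return "paid"
        }
        return "free"
    }

    func article(w http.ResponseWriter, r *http.Request) {
        // Paid and free readers get separate entries; a URL-keyed HTTP
        // cache can't express this as directly.
        key := tierOf(r) + "|" + r.URL.Path
        mu.Lock()
        body, ok := rendered[key]
        mu.Unlock()
        if !ok {
            body = []byte(tierOf(r) + " view of " + r.URL.Path) // stand-in render
            mu.Lock()
            rendered[key] = body
            mu.Unlock()
        }
        w.Write(body)
    }

    func main() {
        http.HandleFunc("/", article)
        http.ListenAndServe(":8080", nil)
    }

The closest HTTP-level equivalent is Vary: Cookie, which splinters the cache on the whole cookie header rather than the one bit you care about.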

I think the benefits of HTTP caching are often exaggerated, especially for APIs and single page applications.

Often I’ll want much more control over caching and cache invalidation than HTTP caching can offer.

I’d be interested to see an analysis of major websites’ usage of HTTP caching on non-static (i.e. not images, JS, etc.) resources. I bet it’s pretty minimal.
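The kind of control I mean, sketched as a tiny application cache in Go (names are illustrative): the write path evicts the entry the moment an update commits, instead of waiting out a max-age.

    package main

    import (
        "fmt"
        "sync"
    )

    type Cache struct {
        mu   sync.Mutex
        data map[string][]byte
    }

    func NewCache() *Cache { return &Cache{data: map[string][]byte{}} }

    func (c *Cache) Get(key string) ([]byte, bool) {
        c.mu.Lock()
        defer c.mu.Unlock()
        v, ok := c.data[key]
        return v, ok
    }

    func (c *Cache) Put(key string, v []byte) {
        c.mu.Lock()
        defer c.mu.Unlock()
        c.data[key] = v
    }

    // Invalidate is called from the write path: the entry is gone the
    // instant the update commits, not max-age seconds later.
    func (c *Cache) Invalidate(key string) {
        c.mu.Lock()
        defer c.mu.Unlock()
        delete(c.data, key)
    }

    func main() {
        c := NewCache()
        c.Put("/api/todos/7", []byte(`{"done":false}`))
        c.Invalidate("/api/todos/7") // update landed, entry evicted now
        _, ok := c.Get("/api/todos/7")
        fmt.Println("still cached:", ok) // false
    }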


Static content by definition doesn't change, so this rule isn't for that case. Caching benefits even dynamic content, and dealing with cache invalidation is a separate problem from what is being discussed here. Your clients may always want the freshest content, but that often isn't scalable. If it is, you can put no expiration in your cache headers and let 'er rip. But you should still respect the idempotency of GET requests.
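Both rules are cheap to follow. A sketch in Go (the routes are made up): dynamic content gets a short, honest max-age, and anything that mutates state stays off GET, so a cached replay can never change anything.

    package main

    import "net/http"

    func main() {
        // Dynamic but cacheable: a 5-second max-age absorbs traffic spikes
        // while keeping content nearly fresh.
        http.HandleFunc("/feed", func(w http.ResponseWriter, r *http.Request) {
            w.Header().Set("Cache-Control", "public, max-age=5")
            w.Write([]byte("latest items"))
        })

        // Mutation lives behind POST; a cache serving GET /feed from a
        // stored copy can never change server state.
        http.HandleFunc("/vote", func(w http.ResponseWriter, r *http.Request) {
            if r.Method != http.MethodPost {
                w.Header().Set("Allow", "POST")
                http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
                return
            }
            w.Write([]byte("ok"))
        })

        http.ListenAndServe(":8080", nil)
    }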