Even if the CSS is cached, your browser still needs to ask the server whether the file has changed and receive back a "304 Not Modified", so you're still incurring the overhead of those connections.
> If the browser doesn't handle caching correctly, and they do change the source, frequently or infrequently, users may see broken pages and/or broken functionality.
Typically you just use a new URL to get around this ... Frequently changed cacheable files (css, js) are usually timestamped or contain version information in the filename.
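A minimal sketch of that filename convention, assuming the version is just the file's mtime (the names and paths here are illustrative, not from any particular framework):

```python
import os

def timestamped_name(path: str) -> str:
    """Embed the file's mtime in the name: style.css -> style.1687293000.css."""
    base, ext = os.path.splitext(os.path.basename(path))
    return f"{base}.{int(os.path.getmtime(path))}{ext}"

# e.g. timestamped_name("assets/style.css") -> "style.1687293000.css"
# Editing the file changes its mtime, hence the URL, hence a fresh fetch.
```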
You also get a big performance boost when you use the Cache-Control headers properly, because it saves the browser from having to connect to the server to check whether each file changed; if you have many JS, CSS, and image files, that's a lot of round-trips saved.
True, but they can't cache a portion of the page that happens to be shared across every page on the website. If I go to a dozen different pages on the site, any styles inlined in the HTML will be loaded a dozen times, while a separate style sheet will only be loaded once.
That directive says: cache, but ask the webserver every time whether the page has changed. If the server responds 304 Not Modified, the browser uses the cached version.
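For reference, the directive that produces exactly that behavior is "Cache-Control: no-cache"; despite the name it means "store a copy, but revalidate before every reuse", not "don't cache". A quick sketch of how it contrasts with its neighbours:

```python
# "no-cache" does NOT mean "don't cache". It means: store a copy,
# but revalidate with the server before every reuse (expect 304s).
REVALIDATE_EVERY_TIME = {"Cache-Control": "no-cache"}

# Contrast with:
NEVER_STORE = {"Cache-Control": "no-store"}               # never write to cache
CACHE_FOR_A_YEAR = {"Cache-Control": "max-age=31536000"}  # reuse, no revalidation
```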
From a performance perspective, though, users on a fast connection may be dominated by round-trip time, so a 304 can be almost as expensive as a full 200.
That's not how you cache. You shouldn't have any 304s for images, JS or CSS. If you do, your setup is wrong.
Mark minified JS & CSS with a file hash in the path or query string & serve a far-future Expires header. The browser won't even ask for a 304; it will serve straight from the local cache.
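A minimal sketch of that scheme (the 8-character digest length and file layout are arbitrary choices, not a standard):

```python
import hashlib
import shutil

def fingerprint(path: str) -> str:
    """Copy app.js to app.3f7a9c1b.js, keyed by content rather than time."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()[:8]
    base, _, ext = path.rpartition(".")
    out = f"{base}.{digest}.{ext}"
    shutil.copyfile(path, out)
    return out

# Serve the fingerprinted file with a far-future, immutable lifetime:
#   Cache-Control: public, max-age=31536000, immutable
# The URL changes whenever the content does, so the cached copy can
# never go stale and the browser never needs to revalidate it.
```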
It's mostly a trade-off of what's more important to you: round-trips or bytes. The client might have a cached copy, but it must (in most cases) revalidate it by requesting with cache headers and getting back that empty 304 response. The round-trip cost depends on the actual scenario, but the fact is that some resources, like CSS in <head>, will block rendering. So you might get better results by speculatively pushing the critical rendering path.
You can also speculatively push 304 responses, btw, which would have the same effect in your described scenario minus the round trip. How you figure out whether the client has fresh cached resources is still an unsolved problem. You can program it yourself in some servers; the H2O server has its own mechanism for that. There's also an RFC draft, Cache Digests for HTTP/2, with a frame that might extend HTTP/2: https://tools.ietf.org/html/draft-ietf-httpbis-cache-digest-...
If you're just slinging static HTML, there's really not that big of a difference, and if it's a frequently accessed resource where a difference might be expected to matter, it will be cached regardless.
With some forms of caching it's much simpler: the browser sends the cached ETag (in If-None-Match) or an If-Modified-Since timestamp, and the server is supposed to return 304 Not Modified, skipping the body, if the cached resource is still valid.
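A sketch of that exchange from the client's side, using Python's requests library (the URL is a placeholder):

```python
import requests

url = "https://example.com/main.css"  # placeholder

# First fetch: full 200; remember whatever validators the server sent.
first = requests.get(url)
validators = {}
if "ETag" in first.headers:
    validators["If-None-Match"] = first.headers["ETag"]
if "Last-Modified" in first.headers:
    validators["If-Modified-Since"] = first.headers["Last-Modified"]

# Later fetch: conditional GET. On a hit, the response has no body,
# but the round-trip still happened.
second = requests.get(url, headers=validators)
if second.status_code == 304:
    print("Not modified: reuse the cached body")
else:
    print("Changed: fresh 200 with a new body")
```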
> There are still requests sent, there is still network latency. [In the case of small JS/css/icon files, the latency can be as much as the transfer time].
What? When the "Expires" header indicates the item is still valid, the browser should use the cached object without making any requests.
Some frameworks (Rails, for example) just serve assets with "Expires" set to some distant future time. When they want to expire these files, they simply change the name (hence the long suffix appended to js/css/etc. filenames).
Some (most?) browsers cache based on the Last-Modified time of the resource; I know Mozilla does this, for example (it will cache for ((now - mtime) * k), and I can't remember if k is 10 or 1/10).
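For what it's worth, the heuristic suggested in RFC 7234 (section 4.2.2) is 10% of the time since Last-Modified, i.e. k = 1/10. A worked sketch:

```python
from datetime import datetime, timedelta

def heuristic_freshness(last_modified: datetime, now: datetime) -> timedelta:
    """RFC 7234 heuristic: fresh for 10% of the time since last modification."""
    return (now - last_modified) / 10

# A file last touched 10 days ago is treated as fresh for ~1 day,
# with no revalidation request during that window.
print(heuristic_freshness(datetime(2017, 6, 1), datetime(2017, 6, 11)))
# -> 1 day, 0:00:00
```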
Given that, is it possible your CSS has an older mtime than your "/"? If "/" is dynamically generated then its mtime will usually be set to now, whereas the CSS is often a file on disk with a real mtime.
A browser can be told to revalidate files, asking the server whether it needs fresh content by sending the "If-Modified-Since" and "If-None-Match" headers. The server will return a 304 with an empty body if the file has not changed, or a 200 with the file if it is new or has changed.
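From the server's side that logic is only a few lines; here's a minimal sketch using Python's http.server (the asset path is hypothetical, and the If-Modified-Since string comparison is a simplification of real date parsing):

```python
import email.utils
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

ASSET = "static/app.css"  # hypothetical file

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        mtime = os.path.getmtime(ASSET)
        last_modified = email.utils.formatdate(mtime, usegmt=True)
        etag = f'"{int(mtime)}-{os.path.getsize(ASSET)}"'

        # If the client's validators still match, answer 304, empty body.
        if (self.headers.get("If-None-Match") == etag or
                self.headers.get("If-Modified-Since") == last_modified):
            self.send_response(304)
            self.end_headers()
            return

        with open(ASSET, "rb") as f:
            body = f.read()
        self.send_response(200)
        self.send_header("ETag", etag)
        self.send_header("Last-Modified", last_modified)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("", 8000), Handler).serve_forever()
```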
An immediate thought on this would be proxies. It's no good relying on clearing the cache in the local browser if the browser is being served a cached version from an upstream proxy server.
It is clearly a problem that needs addressing though. In our web app we have added build steps that seek out all css/js/image links and append version numbers as a query string to defeat the cache.
This works quite well, but has the disadvantage of busting the cache for every asset on each new release, when in reality not all items need to be refreshed.
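A sketch of such a build step; hashing each file's contents individually (rather than stamping one release-wide version number) means a release only busts the cache for assets that actually changed, which avoids that drawback. The regex and paths are illustrative:

```python
import hashlib
import re

def content_hash(path: str) -> str:
    """Short digest of a file's current contents."""
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()[:8]

def bust_cache(html: str) -> str:
    """Append ?v=<per-file hash> to every css/js/image link in a page."""
    def add_version(match):
        attr, path = match.group(1), match.group(2)
        version = content_hash(path.lstrip("/"))  # assumes docroot-relative paths
        return f'{attr}="{path}?v={version}"'
    return re.sub(r'(href|src)="([^"?]+\.(?:css|js|png|gif))"', add_version, html)
```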
> I'm not a 100% sure but when I've used similar tools like this before the cached data coming from the web is the _content_ of the page, where the checked out repo I'm in is dealing with the _layout_ of the page.
Is this rendering the page remotely and fetching it? If so, it seems like the only point would be to reduce the server load on their end.
> Do you really need the contents of the newest blog posts to be pulled in each time you want to test a CSS layout change?
You would need something, at minimum a "Lorem ipsum ..." to see how the text flows. And what better than the real content for that? Especially if you already have it.