Not necessarily: decoding can be stopped once you have enough usable information for how the image will be used (display size, etc.), all from the same source file. That's neat!
> A FLIF image can be loaded in different ‘variations’ from the same source file, by loading the file only partially. This makes it a very appropriate file format for responsive web design. Since there is only one file, the browser can start downloading the beginning of that file immediately, even before it knows exactly how much detail will be needed. The download or file read operations can be stopped as soon as sufficient detail is available, and if needed, it can be resumed when for whatever reason more detail is needed — e.g. the user zooms in or decides to print the page.
Does the JavaScript decoder download the images, or does the browser do that and the JavaScript simply decode them? If the decoder downloads them itself, any parallel fetching by browsers will stop working. And if they are decoded by JavaScript, decoding will take (very slightly) longer, and both the encoded and decoded images need to be held in memory, in both JavaScript and the browser. Are they decoded to .bmp, .jpg, or .png? Can CSS reference them without JavaScript changing the styles? Can they be cached locally if downloaded directly?
If you needed this for any page that loads lots of images, all of the above would significantly increase page-load time and memory usage. Especially on a mobile browser, it would use more resources (hence more battery) than otherwise. So personally I wouldn't like to see this become the future. Maybe just as a fallback, but detecting when to use that might be a little hard.
If image decoding is asynchronous (I don't know; guessing it already is?) and you decode images when they get near the viewport, not just when they become visible, then each image should always be decoded by the time you scroll to it, yet never jank the page. Scrolling a long way really fast probably means a small delay while images decode, but surely that's worth it to save gigabytes of memory?
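A minimal sketch of the "decode when near the viewport" idea, assuming an IntersectionObserver with a rootMargin buffer. The `decodeImage` callback, the `data-bpg-src` attribute, and the 800px margin are all illustrative assumptions, not any real decoder's API:

```javascript
// Decode images shortly before they scroll into view, not on page load.
// The 800px rootMargin means decoding starts while the image is still
// roughly a screen away, so it is usually ready by the time it appears.

// Pure helper: is an element (top/bottom in viewport coordinates)
// within `margin` px of the viewport?
function isNearViewport(top, bottom, viewportHeight, margin) {
  return bottom >= -margin && top <= viewportHeight + margin;
}

if (typeof IntersectionObserver !== 'undefined') {
  const observer = new IntersectionObserver((entries) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        decodeImage(entry.target); // hypothetical decode call
        observer.unobserve(entry.target);
      }
    }
  }, { rootMargin: '800px 0px' });

  document.querySelectorAll('img[data-bpg-src]')
          .forEach((img) => observer.observe(img));
}
```

The observer fires once per image and then unsubscribes, so fast scrolling only queues decodes for images that actually approached the viewport.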
I would guess that decoding speed can become an issue for websites where many images are already in the cache but are re-displayed rather often...
I mention this because I am building such an application, and for now it has many PNGs... and yes, using a format like BPG would be fine, since I use PNGs only for their transparency... but when redisplay is done via JavaScript, I doubt I could get the same speed. Loading is not much of a limiting factor, since after some time all the relevant PNGs are already in the browser cache.
Can anybody speak to this?
Of course, it would be great to have this integrated into the major browsers soon...
I don't know the current status in web browsers, but hardware encoding and decoding for image formats is alive and well. It's not really relevant for showing a 32x32 GIF arrow like on HN, but it is very important when browsing high-resolution images with any kind of smoothness.
If you don't really care about your users' battery life you can opt to disable hardware acceleration within your applications, but it's usually enabled by default, and for good reason.
Doesn't a JavaScript implementation offset most of the performance benefits? Today's browsers are smart about when to cache the decoded image and when not to, etc.; does all of that have to be reimplemented in JavaScript?
>> Does the javascript decoder download the images, or does the browser do that, then the javascript simply decode it?
Right now, with a hard reload of the demo http://bellard.org/bpg/lena.html on Chromium, it appears the BPG images are initially loaded with the page; then posts.js scans through the page's images to find the ones whose src ends in '.bpg' and re-downloads the same files with Ajax, which are then decoded and rendered as PNG data in canvas tags with no identifying information.
The extra request to re-download images is unnecessary but easily removed to just process the data loaded with the initial page load.
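The scan-and-decode flow described above could look roughly like this. This is a sketch, not the actual posts.js source: the `BPGDecoder` call, the class name, and the `data-src` attribute are assumptions:

```javascript
// Pure helper: pick out the image URLs a BPG shim would re-fetch.
function findBpgSources(srcs) {
  return srcs.filter((src) => src.toLowerCase().endsWith('.bpg'));
}

if (typeof document !== 'undefined') {
  for (const img of document.querySelectorAll('img')) {
    if (findBpgSources([img.src]).length === 0) continue;
    // Re-download the same file (the redundant step noted above),
    // decode it, and paint the result into a replacement canvas.
    fetch(img.src)
      .then((res) => res.arrayBuffer())
      .then((data) => {
        const canvas = document.createElement('canvas');
        canvas.className = 'bpg-decoded'; // identifying class, so CSS can target it
        canvas.dataset.src = img.src;     // keep a reference to the source file
        new BPGDecoder(canvas.getContext('2d')).decode(data); // hypothetical API
        img.replaceWith(canvas);
      });
  }
}
```

Dropping the redundant fetch would mean decoding straight from the response the browser already received with the initial page load.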
>> Can CSS reference them without javascript changing the styles?
Not from what I can see. The images are rendered to canvas elements without any identifying information. The decoder would need to be modified to add identifying IDs or classes. This, again, is an easy fix.
>> Can they be cached locally if downloaded directly?
This is what I perceive to be an issue. Try downloading the image rendered from the 5kb BPG example... The image actually being rendered is a 427.5kb PNG, and that's for a 512x512 image.
PNGs aren't lossy... so any difference in file size is going to be the product of rendering size or how much the BPG compression has simplified the source image. (I'm guessing the file size follows something like a logarithmic curve: it jumps initially based on rendered image dimensions, then approaches a limit as the compression approaches lossless.)
Because of the rendered-out PNGs' large file size, I would expect that if you render large images, or a lot of images, you would definitely use more resources than with comparable-quality JPGs, regardless of whether you cache the BPGs for rendering, and you would indeed see the drawbacks you mentioned in both memory and page-load time.
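For a sense of scale, the decoded cost is easy to estimate: a decoded RGBA bitmap takes width × height × 4 bytes in memory, regardless of how small the compressed file was. Using the 512x512 example from above:

```javascript
// Raw in-memory size of a decoded RGBA bitmap, in bytes.
function decodedBytes(width, height) {
  return width * height * 4;
}

const bytes = decodedBytes(512, 512);            // 1,048,576 bytes
console.log((bytes / 1024).toFixed(1) + ' KiB'); // prints "1024.0 KiB"
// So even a 5 KB BPG file costs about 1 MiB once decoded, and the
// 427.5 KB PNG re-encoding sits between those two extremes.
```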
That being said, this is a cool idea and an incredible proof of concept, and I'm very thankful to Fabrice for putting it out there.
I have already found one case in Firefox where this is true. FF will load the image data in background tabs, but it won't decode it (e.g. render a JPG) until you switch to that tab. I may be nitpicking but the extra half-second or so bothers me like texture pop-in in a game. So I go to about:config and set image.mem.decodeondraw and image.mem.discardable to false.
> browser support for FLIF make this kind of thing irrelevant
I'm not sure it would. The nice thing about a lot of these SVG options (or even the smaller images) is that they can be embedded into pages. So not only do you see the image faster, you also reduce the number of remote file fetches. With FLIF it sounds like you'd still have to hit another server to see anything, even if what you see would display before it finishes loading.
It's a risk, but implementers can take the sting out of it. Browsers aren't currently smart enough to do things like unload decoded <img> memory for elements that aren't visible, but you can avoid the worst of it if you use a CSS background-image (which browsers do unload) plus a visibility test on scroll, so you avoid loading things that aren't visible or soon to be visible. This works as far back as IE8, so it might be worth the hassle.
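The background-image-plus-scroll-test approach above could be sketched like this; the `.lazy-bg` class, `data-bg` attribute, and 400px buffer are made-up names for illustration:

```javascript
// Pure helper: should an element whose top edge sits at `top`
// (page coordinates) be loaded, given the current scroll position?
function shouldLoad(top, scrollY, viewportHeight, buffer) {
  return top < scrollY + viewportHeight + buffer;
}

if (typeof window !== 'undefined') {
  function loadVisible() {
    var els = document.querySelectorAll('.lazy-bg[data-bg]');
    for (var i = 0; i < els.length; i++) {
      var el = els[i];
      var top = el.getBoundingClientRect().top + window.pageYOffset;
      if (shouldLoad(top, window.pageYOffset, window.innerHeight, 400)) {
        // Setting a CSS background-image (rather than an <img> src)
        // lets the browser discard the decoded pixels when the
        // element scrolls away again.
        el.style.backgroundImage = 'url(' + el.getAttribute('data-bg') + ')';
        el.removeAttribute('data-bg');
      }
    }
  }
  window.addEventListener('scroll', loadVisible);
  loadVisible();
}
```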
I'm pretty sure it only discards the original after x number of other (new) images have been decoded. (Or perhaps it's based on memory footprint?)
I ran into a Chrome performance bug years ago with animations, because the animation had more frames than the decoded-image cache could hold. Everything on the machine ground to a halt when it happened. Meanwhile, older unoptimized browsers ran it just fine.
Firefox downloads image data but doesn't decode it until you browse to that spot on the page. Do you think that's what you're seeing? You can disable that in about:config, just toggle image.mem.decodeondraw to false.
Could someone explain to me why this is better/easier than just using CSS media queries myself?
Also:
> Whatever image you put inside the src of the image element will render by default. Then, the Javascript will progressively load larger images based on media queries that you pass into the data-interchange attribute.
So, for larger screens, there would be many requests for a single image? For example, if I drop a mobile-optimized image into the src and then view the page on a retina MacBook, wouldn't this mean many image requests, from the "mobile" version up through "full-size retina"?
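For comparison, picking a single appropriately sized variant per viewport width (rather than chaining upgrades from mobile to retina) takes only a few lines. The breakpoints and file names here are made up for illustration, not from the library being discussed:

```javascript
// Pure helper: choose one image variant for a given viewport width.
function pickSrc(viewportWidth) {
  if (viewportWidth >= 1200) return 'photo-retina.jpg';
  if (viewportWidth >= 640)  return 'photo-desktop.jpg';
  return 'photo-mobile.jpg';
}

if (typeof window !== 'undefined') {
  var img = document.querySelector('img[data-responsive]');
  if (img) img.src = pickSrc(window.innerWidth); // one request, not a chain
}
```

With this scheme a retina MacBook fetches only the retina file, at the cost of choosing once up front instead of progressively upgrading.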
Actually, in terms of bandwidth, JavaScript isn’t the problem: you can fit an entire SPA into the bandwidth required for one large image (it is a problem in terms of CPU usage on underpowered devices, though).
Large images being “worth the trade off” is debatable depending on your connection speed, I think (though at least you can disable images in the browser?)
See the responsive part in https://flif.info/example.html
We can imagine decoding taking into account battery save mode, bandwidth save mode...