
You could certainly build that in as a heuristic, similar to how many browsers implement <video>: issue Range requests for a certain chunk size and stop once you have enough data. It's not as efficient as getting it all in one go, but with a progressive image format it might be a better experience, since the image can start rendering quickly and fill in details later.
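A rough TypeScript sketch of that heuristic, assuming the server honors Range requests; the 64 KB chunk size and the "enough bytes" threshold are arbitrary assumptions:

    async function fetchProgressively(url: string, enoughBytes: number): Promise<Uint8Array> {
      const chunkSize = 64 * 1024;
      const chunks: Uint8Array[] = [];
      let received = 0;
      while (received < enoughBytes) {
        const res = await fetch(url, {
          headers: { Range: `bytes=${received}-${received + chunkSize - 1}` },
        });
        if (res.status !== 206) break; // server ignored the Range header
        const buf = new Uint8Array(await res.arrayBuffer());
        chunks.push(buf);
        received += buf.length;
        if (buf.length < chunkSize) break; // reached end of file
      }
      // Concatenate what arrived; a progressive decoder could render from this.
      const out = new Uint8Array(received);
      let offset = 0;
      for (const c of chunks) { out.set(c, offset); offset += c.length; }
      return out;
    }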



My idea is more fun. To give a simple example: one can serve the odd rows of pixels first and construct the image when half the data is received. We can slice it horizontally and vertically as often as we want; say, a 2784x1856 image can be served as 100 interleaved images of roughly 278x186, one after the other (see the sketch below).

When rendering a web document that needs only a small version, the client need only request the part it wants. Low resolution is desirable on a slow connection, with limited RAM, or on a small screen. If the next customer wants to zoom in on it on his 4K display, he can download the entire thing.
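A rough sketch of how a client might reassemble such interleaved stripes onto a canvas; the ?stripe= query parameters are hypothetical, not an existing API:

    async function renderInterleaved(canvas: HTMLCanvasElement, url: string, stripes = 10): Promise<void> {
      const ctx = canvas.getContext("2d");
      if (!ctx) throw new Error("2D context unavailable");
      for (let offset = 0; offset < stripes; offset++) {
        // Each stripe is a small image holding rows offset, offset+stripes, offset+2*stripes, ...
        const blob = await (await fetch(`${url}?stripe=${offset}&of=${stripes}`)).blob();
        const bitmap = await createImageBitmap(blob);
        for (let y = 0; y < bitmap.height; y++) {
          // Copy each 1-pixel-high row to its interleaved position in the full image.
          ctx.drawImage(bitmap, 0, y, bitmap.width, 1, 0, y * stripes + offset, bitmap.width, 1);
        }
      }
    }

After the first stripe, one row in ten is already on screen spanning the full image height, and each further stripe sharpens the picture.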


Totally. There are a bunch of ways to address the performance issue. As I alluded to at the end of the post, there are serious technology considerations when preprocessing so much image data.

We're currently looking at whether we can use IntersectionObserver for efficient lazy loading of images before they enter the viewport.
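A minimal sketch of that IntersectionObserver approach, assuming images carry their real URL in a data-src attribute:

    const observer = new IntersectionObserver(
      (entries, obs) => {
        for (const entry of entries) {
          if (!entry.isIntersecting) continue;
          const img = entry.target as HTMLImageElement;
          img.src = img.dataset.src ?? "";
          obs.unobserve(img); // load once, then stop watching
        }
      },
      { rootMargin: "200px" } // start loading shortly before the viewport is reached
    );

    document.querySelectorAll<HTMLImageElement>("img[data-src]")
      .forEach((img) => observer.observe(img));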


So in that case, each pixel would be stored as a separate row in a relational database? And to query the whole canvas you'd query a million rows on every read?

I lean towards just using the rate-limiting stuff we already have in place (via memcached, which we talked about in a previous post). We just overlooked it.
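For reference, a fixed-window rate limiter over memcached fits in a few lines. This sketch assumes a client exposing add() and incr(); the method names are placeholders, not any specific library's API:

    interface CacheClient {
      add(key: string, value: string, ttlSeconds: number): Promise<boolean>;
      incr(key: string, by: number): Promise<number>;
    }

    async function allowRequest(cache: CacheClient, clientId: string, limit = 100, windowSeconds = 60): Promise<boolean> {
      const windowStart = Math.floor(Date.now() / 1000 / windowSeconds);
      const key = `rl:${clientId}:${windowStart}`;
      await cache.add(key, "0", windowSeconds); // no-op if the key already exists
      const count = await cache.incr(key, 1);
      return count <= limit;
    }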


This is genius! I was just this week trying to think of some way to abuse slow-loading of images by generating the image as data comes in.

This is perfect.


I'd like to see even more responsiveness in the photo grid. I think these ideas are good, but could be taken further.

For example, while scrubbing fast, I shouldn't ever see blank grey tiles.

My scroll bar has, say, 1000 pixels of vertical height. That means there are only 1000 positions I can quickly scrub to. They should preload all of those positions in the form of a video with 1000 frames, then display the correct frame of the video as the initial low-res render (sketched below).

They could alternatively preload them in the form of a CSS/html grid with about 500 bytes per position, or 8 bytes per image. That should at least let them get vague gradients and approximate colours for the content of the thumbnails.
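A sketch of the frame-mapping half of that idea, assuming the 1000-frame preview video is already preloaded; the 30 fps encoding rate and element wiring are assumptions:

    const FRAME_COUNT = 1000;
    const PREVIEW_FPS = 30; // assumed encoding rate of the preview video

    function onScroll(container: HTMLElement, preview: HTMLVideoElement): void {
      const maxScroll = container.scrollHeight - container.clientHeight;
      const fraction = maxScroll > 0 ? container.scrollTop / maxScroll : 0;
      const frame = Math.min(FRAME_COUNT - 1, Math.floor(fraction * FRAME_COUNT));
      // Seek the preview to the matching frame and show it behind the grid
      // until the real thumbnails arrive.
      preview.currentTime = frame / PREVIEW_FPS;
    }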


I'd be curious to know what the new algorithm is that dispenses with "chunk loading" and allows all scene data to appear instantly.

Once you've built a system that works fast enough to handle image-rendering work on demand, adding batch or precompute features is largely just a matter of writing requester logic that asks for the image before the real demand arrives.

Cool, you could make it faster by preloading images.
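For instance, a minimal preloading sketch that warms the browser cache for a known list of URLs before they're shown:

    function preloadImages(urls: string[]): void {
      for (const url of urls) {
        const img = new Image();
        img.decoding = "async";
        img.src = url; // the browser fetches and caches it immediately
      }
    }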

I'll bet that most image requests will fall within certain parameters: 2^x by 2^y. So you could probably pre-cache most real-world image sizes and leave dynamic generation for one-offs.
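A sketch of that bucketing, with lookupCache/renderDynamic as hypothetical placeholders for whatever cache and renderer are in play:

    declare function lookupCache(key: string): Promise<Blob | null>;
    declare function renderDynamic(id: string, w: number, h: number): Promise<Blob>;

    // Snap a requested size up to the nearest 2^x by 2^y bucket so most
    // requests hit a precomputed rendition.
    function bucket(n: number): number {
      return 2 ** Math.ceil(Math.log2(Math.max(1, n)));
    }

    async function getImage(id: string, w: number, h: number): Promise<Blob> {
      const cached = await lookupCache(`${id}:${bucket(w)}x${bucket(h)}`);
      if (cached) return cached;      // pre-cached common size
      return renderDynamic(id, w, h); // fall back for one-offs
    }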

A naive approach that may still work well is to simply break the image up into fixed, predetermined regions. I don't believe this would be significantly more work for the server if it's already comparing pixel by pixel, and the average frame will probably contain updates in only one region. Even breaking it into 4 or 6 regions would, I think, yield a significant payload reduction.
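A sketch of that region check, assuming RGBA frames of equal size; the 2x2 grid is arbitrary:

    function changedRegions(
      prev: Uint8ClampedArray,
      next: Uint8ClampedArray,
      width: number,
      height: number,
      cols = 2,
      rows = 2
    ): boolean[] {
      const dirty: boolean[] = new Array(cols * rows).fill(false);
      const cellW = Math.ceil(width / cols);
      const cellH = Math.ceil(height / rows);
      for (let y = 0; y < height; y++) {
        for (let x = 0; x < width; x++) {
          const i = (y * width + x) * 4; // 4 bytes per RGBA pixel
          if (prev[i] !== next[i] || prev[i + 1] !== next[i + 1] ||
              prev[i + 2] !== next[i + 2] || prev[i + 3] !== next[i + 3]) {
            dirty[Math.floor(y / cellH) * cols + Math.floor(x / cellW)] = true;
          }
        }
      }
      return dirty; // only re-encode and send regions flagged here
    }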

You can make it more efficient by capturing the images to a buffer, rendering a low-FPS H.264 stream, serving it as HLS fragments, and displaying it with hls.js ( https://github.com/dailymotion/hls.js ) as a background element. There are significant inter-frame compression benefits to be had from this sort of content, and it will look nicer with a constant stream of frames. With a low FPS, the CPU usage should be low enough not to be noticeable, and you can serve a static JPEG as a preview while the stream loads in the background.
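The hls.js side is only a few lines; the stream URL and element ID here are placeholders:

    import Hls from "hls.js";

    const video = document.getElementById("bg-stream") as HTMLVideoElement;
    const src = "/streams/background.m3u8";

    if (Hls.isSupported()) {
      const hls = new Hls();
      hls.loadSource(src);
      hls.attachMedia(video);
    } else if (video.canPlayType("application/vnd.apple.mpegurl")) {
      video.src = src; // Safari plays HLS natively
    }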

At this point I'm considering creating a backend framework that sends a grid of images and uses image maps for clicks (sketched below).

With fast internet, raster images shouldn't be a big issue.
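A minimal DOM sketch of the image-map part, with coordinates and IDs as placeholders:

    const map = document.createElement("map");
    map.name = "grid";

    const area = document.createElement("area");
    area.shape = "rect";
    area.coords = "0,0,100,100"; // top-left tile of the grid image
    area.href = "/tile/0";
    map.appendChild(area);

    document.body.appendChild(map);
    (document.getElementById("grid-img") as HTMLImageElement).useMap = "#grid";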


Have you experimented much with the timing and layouts? I think I'd prefer repeated images to stay in one spot and maybe get larger. Or would that be difficult to keep speedy?

I'd also like some way of slowing it down. Did I miss that?


Not sure if /s, but to rasterize a 1920x1080 image you would make 2,073,600 HTTP requests? Sounds reasonable.

From experience, 100 images/min with Chrome and Puppeteer (you don't need Selenium) is more than doable with less than 1 GB of RAM.
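Something along these lines, with URLs and output paths as placeholders:

    import puppeteer from "puppeteer";

    async function captureAll(urls: string[]): Promise<void> {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();
      for (const [i, url] of urls.entries()) {
        await page.goto(url, { waitUntil: "networkidle2" });
        await page.screenshot({ path: `shot-${i}.png` });
      }
      await browser.close(); // reusing one browser across captures keeps RAM low
    }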

But you don’t have to do it every frame. Most apps could just render to bitmaps once and cache the results.

This is cool. I would highly recommend you sheet your sprites, because at the moment you are loading each graphic individually via an HTTP request, which is pretty inefficient.
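A sketch of drawing from a sprite sheet instead: one HTTP request for the whole sheet, then crop each sprite out with drawImage (sheet path and coordinates are assumptions):

    const sheet = new Image();
    sheet.src = "/sprites.png";
    sheet.onload = () => {
      const canvas = document.getElementById("game") as HTMLCanvasElement;
      const ctx = canvas.getContext("2d")!;
      // drawImage(source, sx, sy, sw, sh, dx, dy, dw, dh): copy the 32x32
      // sprite at sheet position (0,0) to canvas position (100,100).
      ctx.drawImage(sheet, 0, 0, 32, 32, 100, 100, 32, 32);
    };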

These days, slow-loading images usually mean that somebody hasn't bothered to use any of the automatic tooling various frameworks and platforms provide for optimized viewport- and pixel-density-based image sets, and has just stuck in a maximum-size 10+ MB image.
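For reference, those density/viewport-based sets amount to setting srcset and sizes; the file names here are placeholders:

    const img = document.createElement("img");
    img.src = "/photo-800.jpg"; // fallback for old browsers
    img.srcset = "/photo-400.jpg 400w, /photo-800.jpg 800w, /photo-1600.jpg 1600w";
    img.sizes = "(max-width: 600px) 100vw, 50vw"; // let the browser pick the best fit
    document.body.appendChild(img);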

I imagine that with HTTP Range requests, it shouldn't be hard to access the original TIFF data. JavaScript or WebAssembly can then work with arbitrary byte vectors.
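A minimal sketch of such a Range read; the byte offsets would come from parsing the TIFF's directory entries:

    async function readBytes(url: string, start: number, length: number): Promise<Uint8Array> {
      const res = await fetch(url, {
        headers: { Range: `bytes=${start}-${start + length - 1}` },
      });
      if (res.status !== 206) throw new Error("server does not support Range requests");
      return new Uint8Array(await res.arrayBuffer());
    }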
