It might be less true for pixelated videos. I know some IR camera companies use very small camera movements to compute a higher-resolution picture. And I believe the first image of a black hole also used a similar kind of technique.
> it should be possible to create images from a static video feed that have higher resolution than the video
You can resolve past the compression artifacts and noise, but you can only get higher resolution if the video is not static. Astronomy has specific techniques for this, like Bayer Drizzle, which pushes past the resolution of the sensor itself via slight motion, but a perfectly locked-off shot won't get you extra information.
You need a certain number of pixels to represent the 1080p video in one particular frame. When the next frame comes along, your head has moved enough that you'll be seeing a different subset of those total pixels. At sufficient framerates (and motion capture rates), this actually does a pretty good job of approximating the full resolution of the imagery (especially when this is happening 2 or 3 times per source video frame, as the case might be with NTSC/PAL content).
They address that in the paper. They make use of the rolling shutter common in CMOS imaging chips to effectively increase the sampling rate to well above the frame rate.
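To make the rolling-shutter point concrete, here's a minimal sketch of when each row gets sampled. It assumes the readout sweep spans the full frame interval, which is an illustrative simplification, not a figure from the paper:

```python
def row_sample_time(frame_index, row, fps=30.0, rows=1080):
    """With a rolling shutter, row r of frame n is read out at a
    slightly different time, so the sensor effectively samples the
    scene rows * fps times per second rather than fps times.
    Assumes the readout sweep spans the whole frame interval."""
    frame_period = 1.0 / fps
    return frame_index * frame_period + (row / rows) * frame_period

# Row 0 of frame 0 is sampled at t = 0; the middle row is sampled
# half a frame interval later, even within the "same" frame.
t_top = row_sample_time(0, 0)
t_mid = row_sample_time(0, 540)
```

So at 30 fps and 1080 rows you get on the order of 32,400 temporal samples per second, just not of the whole image at once.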
Sort of, yeah. It's because of the same kind of limitation: You can't get the whole image at the desired framerate, so you only get a small slice of it over time. You can see the same technique applied in sonars, some kinds of radar etc.
Getting minute, subpixel movements can ironically give you MORE resolution if you process it over time, though you'd probably need some sort of "anchor" points
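Roughly, the idea is shift-and-add super-resolution. Here's a minimal sketch, assuming the sub-pixel shifts are already known; in practice you'd estimate them by registering against those anchor points:

```python
import numpy as np

def shift_and_add(frames, shifts, scale=4):
    """Accumulate low-res frames onto a finer grid using known
    sub-pixel shifts. Each frame's samples land on different fine-grid
    positions, so motion fills in detail a single frame can't."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Integer offset on the fine grid corresponding to the
        # sub-pixel shift on the coarse grid.
        oy, ox = round(dy * scale), round(dx * scale)
        for y in range(h):
            for x in range(w):
                gy, gx = y * scale + oy, x * scale + ox
                if 0 <= gy < h * scale and 0 <= gx < w * scale:
                    acc[gy, gx] += frame[y, x]
                    weight[gy, gx] += 1
    # Average where samples landed; unfilled fine pixels stay zero
    # (a real pipeline would interpolate or "drizzle" with footprints).
    return np.divide(acc, weight, out=np.zeros_like(acc), where=weight > 0)
```

A quarter-pixel shift between two frames means the second frame's samples fall between the first frame's grid points on the fine grid, which is exactly where the extra resolution comes from.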
Stabilization into a smaller rectangle discards material around the edges of the original larger video.
So in a sense, this is yet another case of 'more data' (the oversized source video), plus software/CPU batch analysis, replacing the specialized equipment and expertise that used to be required for a 'stable' shot.
Taken to its logical extreme: might future casual-use 'cameras' be compact omnidirectional arrays of high-resolution, almost-always-on photo sensors -- from which idealized photos and videos can be reassembled later, by post-processing and editing at leisure?
Wouldn't filming a good quality screen with a higher refresh rate than the camera FPS defeat this method entirely? Especially so if the desired result is not itself high-def.
I wonder if there are interesting applications of being able to capture images at this kind of speed at high resolution. For example, using image analysis between frames to get some kind of depth map, based on subtle differences between each frame due to slight hand movements.
From a series of images, a.k.a. video, sure. From a single image? Not so much.
In video there is a lot of temporal information and even if the spatial resolution wasn't high enough in a single image, one would be able to accumulate a higher resolution version of the scene using multiple observations.
This seems like it would really benefit from a model trained specifically to upscale video. There's a lot of information you can get from the structure between frames that you can't exploit with a still image.
It's (probably) an unstable video, so you take the data from each frame for some large fraction of a second and you can create an image that's somewhere between the total number of available pixels in all the frames and the number in a single frame.
Note: While the spacecraft itself is moving at incredible speeds, the maneuvers themselves are slow enough that several (to many with a high speed camera) images can be combined.
Most importantly, the footage shot in the video was at 2500 fps native [1] - at 300k FPS you have to use a significantly reduced image size. FPS is bottlenecked by the time it takes to read out the sensor, which is linear in the number of rows in the image. So if you read out half the sensor, you can double your frame rate. You can exploit this to get thousands of frames per second on a cheap machine vision camera.
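The arithmetic behind that is simple; here's a sketch, with the per-row readout time as an assumed illustrative value (real sensors vary):

```python
def max_fps(rows, row_readout_us=8.0):
    """Readout-limited frame rate: total readout time is
    rows * per-row time, so halving the ROI height doubles the fps.
    row_readout_us is an illustrative value, not a real spec."""
    frame_time_s = rows * row_readout_us * 1e-6
    return 1.0 / frame_time_s

full_frame = max_fps(1080)   # full sensor height
half_frame = max_fps(540)    # half-height ROI -> double the rate
strip = max_fps(16)          # a thin strip gets you into the thousands
```

With these numbers a 16-row strip reads out at roughly 7,800 fps, which is how cheap machine vision cameras hit quoted rates like that.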
The camera framerate is largely irrelevant. While you can slow down footage by four times (neat), that doesn't mean you can get 300k fps footage out of a compact camera. The great benefit of this technique is really that you can convincingly fake super-slow motion without having to sacrifice sensor resolution. I can easily see this as a cool iOS or Android feature.
Maybe if they'd actually used footage that was at the limit of camera technology this would be more impressive (I don't see a technical reason why not?). Why not combine it with super resolution too? Interesting security implications, as cameras with frame rates higher than (I believe) 1M fps are ITAR controlled.
It does, but it’s the kind of problem you can throw computation at, along with adaptive optics. If you remember that photo Trump released, that was from a KH-11, which launched in the 70s (Hubble reused some of the design to save money), and it was generating data in the gigabit range even back then, so I’d imagine they have a high enough frame rate that you could get many frames to interpolate even for a moving subject.
They pour a LOT of money into that kind of capability so I’d be hesitant to say it’s impossible 50 years later.
The trick is to add the signal of many pixels together, forming a larger effective sample. The output video is very low-resolution and has a low dynamic range after noise removal, but can get hundreds of FPS in decent lighting.
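That binning step is easy to illustrate. A minimal numpy sketch, not the actual pipeline:

```python
import numpy as np

def bin_pixels(frame, factor=4):
    """Sum each factor x factor block of pixels into one effective
    'super-pixel'. Spatial resolution drops by `factor` per axis, but
    the summed signal is factor**2 larger, which is what buys you the
    shorter exposures (higher fps) in decent lighting."""
    h, w = frame.shape
    h2, w2 = h - h % factor, w - w % factor   # crop to a multiple of factor
    blocks = frame[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return blocks.sum(axis=(1, 3))
```

With `factor=4`, each output pixel collects 16x the photons of a single sensor pixel, at the cost of a 16x smaller image.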
Yes, I do mean interpolation. Just for the record, in the parts of the world where I have worked (Australia and France), a typical video stream, be it broadcast/pay-tv/dvd runs at 30fps or less. Recent televisions will interpolate to display frame rates up to 120fps.
Like you, I hated the effect at first, but I left it running because I do STB user interfaces for a living, and I need to know what people that don't fiddle with their television are seeing. It's jarring at first because for reasons that I don't fully understand, you are able to discern where the light sources are in a scene when using this technique. You can see that big reflectors have been used for outside shots, or that there is a spot shining just off camera. This is what gives it that cheap video effect, because as anyone that works with video for a living can tell you, the crappy quality of home videos is mostly due to poor lighting.
Cinematographers are going to have to work harder in the future to assure a more even light field in their shots. In the meantime, I have found that after using this type of display technology for more than a few months, you start to prefer the high frame rate to older technologies.