
Yes, but only at the lower zoom levels ("far away"). They are working on the higher ones later.



Yes, and people have. You're right that it's very low res!

The newest idea is to use "programmable" diffractive optics instead of lenticular lenses.


I'm sure they do; there's even a construct to do this called OffscreenCanvas.
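If it helps, a minimal sketch of the idea (TypeScript; "render-worker.js" is a hypothetical file name):

    // main.ts – hand control of a canvas to a worker so rendering
    // happens off the main thread.
    const canvas = document.querySelector('canvas') as HTMLCanvasElement;
    const offscreen = canvas.transferControlToOffscreen();
    const worker = new Worker('render-worker.js');
    worker.postMessage({ canvas: offscreen }, [offscreen]);

    // render-worker.js – draw into the OffscreenCanvas without touching the DOM.
    self.onmessage = (e: MessageEvent) => {
      const ctx = (e.data.canvas as OffscreenCanvas).getContext('2d')!;
      ctx.fillStyle = 'teal';
      ctx.fillRect(0, 0, 100, 100);
    };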

Of course, but at a distance where the effect is noticeable, with the crop level required you would be left with just a handful of pixels, so effectively this is limited to high focal length lenses. And since high enough focal lengths only come in telephoto lenses, this is basically limited to those.
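A back-of-the-envelope version of that argument (TypeScript; thin-lens approximation, and all the numbers are illustrative assumptions):

    // On-sensor image size ≈ focalLength * subjectSize / distance (thin lens).
    // Distances in millimetres, pixel pitch in micrometres.
    function pixelsOnSubject(
      focalLengthMm: number,
      subjectSizeMm: number,
      distanceMm: number,
      pixelPitchUm: number,
    ): number {
      const imageSizeMm = (focalLengthMm * subjectSizeMm) / distanceMm;
      return (imageSizeMm * 1000) / pixelPitchUm;
    }

    // A 1.7 m person at 50 m on a 4 µm-pitch sensor:
    pixelsOnSubject(28, 1700, 50_000, 4);  // ≈ 238 px tall through a 28 mm wide-angle
    pixelsOnSubject(400, 1700, 50_000, 4); // ≈ 3400 px tall through a 400 mm telephoto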

Not right now. Sounds like they are working on lenses that could one day work with colored light for cameras. Maybe after that, they could be used for specials?

Not yet.

The lidar, infrared, and AI will help with the low light. They fill in colors rather than just taking the result of the lens and sensor's physical limitations.

There are apps that do what the default cameras don’t do yet, and we’ll continue to see rapid optimizations on this front.

What you mentioned is not outside the realm of what can be done. If there is a need, making an app that does it will give you a brief advantage.


Yes, but you have to precisely stagger the start of the exposures of each camera, and that's hard to do on consumer hardware.
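To illustrate the scheduling involved – a toy sketch (TypeScript) where only the offset arithmetic is the point; the Camera interface and startCapture() are hypothetical, not a real API:

    // Evenly stagger N cameras across one frame interval so their
    // exposures interleave in time.
    interface Camera { startCapture(atTimestampUs: number): void; }

    function staggerStarts(cameras: Camera[], fps: number, t0Us: number): void {
      const frameIntervalUs = 1_000_000 / fps;
      cameras.forEach((cam, i) => {
        // Camera i starts i/N of a frame interval after t0.
        cam.startCapture(t0Us + (i * frameIntervalUs) / cameras.length);
      });
    }
    // Three 30 fps cameras offset by ~11.1 ms give an effective 90 fps
    // interleave – but only if the triggers land with sub-millisecond
    // precision, which is the hard part on consumer hardware.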

I wonder how this line of research will be commercially exploited.

Although the achieved depth of field was much shorter, this work reminds me of the Lytro [0] Illum, a camera capable of refocusing an image after it has been taken. Announced in 2014, it received significant hype, but it never reached commercial success. One factor that hindered its adoption was the absence of a standard file format and corresponding viewers for its "living pictures" – adopters were forced to publish their images on Lytro's website in order to let viewers refocus the image.

[0] https://en.wikipedia.org/wiki/Lytro


Sure, but it's still going to use a video model, with additional sensors.

Yes, but is it able to "simulate" depth of field, like the software of the new iPhone, by combining the images captured by the two lenses?

No, when using optical zoom there is no mix of data. It uses just the second camera. I think they use both only when creating shallow depth of field shots.
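For anyone curious how the simulated shallow depth of field roughly works: blur each pixel by an amount driven by its distance from the focal plane. A generic sketch (TypeScript), assuming a depth map already derived from the two cameras' disparity – not Apple's actual pipeline:

    // Blur radius grows with distance from the focal plane, mimicking
    // how the optical circle of confusion grows. `depth` is normalised [0,1].
    function blurRadius(depth: number, focusDepth: number, maxRadiusPx: number): number {
      return Math.min(maxRadiusPx, Math.abs(depth - focusDepth) * 2 * maxRadiusPx);
    }

    // Naive variable box blur over a grayscale image (illustrative, O(n·r²)).
    function syntheticBokeh(
      img: Float32Array, depthMap: Float32Array,
      w: number, h: number, focusDepth: number, maxRadiusPx = 8,
    ): Float32Array {
      const out = new Float32Array(w * h);
      for (let y = 0; y < h; y++) {
        for (let x = 0; x < w; x++) {
          const r = Math.round(blurRadius(depthMap[y * w + x], focusDepth, maxRadiusPx));
          let sum = 0, n = 0;
          for (let dy = -r; dy <= r; dy++) {
            for (let dx = -r; dx <= r; dx++) {
              const yy = y + dy, xx = x + dx;
              if (yy >= 0 && yy < h && xx >= 0 && xx < w) { sum += img[yy * w + xx]; n++; }
            }
          }
          out[y * w + x] = sum / n;
        }
      }
      return out;
    }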

Yes, but that's very hard and doesn't scale (it can't be cheaply shot from multiple angles, etc.).

Ever since I first saw one of these I have been wondering if it would be possible to display as-shot light field camera images. If it is, they should seriously think about collaborating with Lytro.

Yeah, it's getting closer. With a fast enough sensor they could probably do HDR+ on video too, like HDRx on RED cameras. If they had a 10-bit HEVC video profile for recording, they could store more of that extended DR too.
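Roughly what a two-exposure HDR merge does per (aligned) frame – a simplified sketch (TypeScript), not Google's HDR+ or RED's HDRx pipeline; the clipping threshold and blend window are illustrative:

    // Merge a short and a long exposure of the same aligned frame.
    // Trust the long exposure except where it nears clipping; there the
    // short exposure, scaled by the exposure ratio, recovers highlights.
    function mergeExposures(
      shortExp: Float32Array, // linear values in [0,1]
      longExp: Float32Array,  // linear values in [0,1]
      exposureRatio: number,  // e.g. 4 if the long exposure is 2 stops brighter
    ): Float32Array {
      const out = new Float32Array(longExp.length);
      for (let i = 0; i < longExp.length; i++) {
        // Weight falls from 1 to 0 as the long exposure approaches clipping.
        const w = Math.min(1, Math.max(0, (0.95 - longExp[i]) / 0.15));
        out[i] = w * longExp[i] + (1 - w) * shortExp[i] * exposureRatio;
      }
      return out; // values can exceed 1.0 – hence the appeal of 10-bit storage
    }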

Some things will be out of reach for a phone with a single small lens: depth of field on video, and a selection of focal lengths/FoV. And small sensors always have worse noise performance (quantum efficiency).


There are lightfield displays that can synthesise arbitrary focal planes, but they're still experimental afaik.

Hm, Canon DSLRs can do that out of the box – at least the 60D and 6D.

Of course. The real challenge is making it look good with pinch-to-zoom.

I’m curious: if we had a thick enough frame, could we integrate some kind of telephoto camera with a long optical zoom?

A. Yes, I'd imagine so, within reason. Makes for a less interesting demo, though - we've seen lots of images with large focal depth, and lots of images with a narrower depth of focus used to call out one thing, but we've never seen snapshots which you can refocus after they are taken.

This is seriously awesome.

B. This tech isn't going anywhere. If the camera succeeds they might license it. If the camera fails they will surely try to license it. Note also that the technique is apparently not wholly new so the key patents are already running down.

And your point about all those design choices that go into the camera cuts both ways. If they license this tech to a consumer electronics company that flubs the execution they will lose money, as the lousy execution will reflect badly on the tech and will prevent it from getting popular sooner. (The sooner every camera buyer wants this tech, the more profits there will be before the patents expire.) In a world consisting mainly of (a) Apple and (b) hardware companies that cannot design software to save their lives, keeping control of your own fate seems wise. The popularity of this technique among the general public will presumably depend crucially on the UI, both when taking the photo and when displaying it. Better to screw that up yourself than outsource the screwing up to someone else. ;)


I imagine the way these work is that there’s a fast hardware path from camera to display, with the OS alpha composited on top. Any kind of post-processing of the pass-through video will introduce latency. So my guess would be no zooming.
