Depth of Field (inconvergent.net)
129 points by ingve on 2019-05-28 | 23 comments




Why is the randomization necessary?

you get unwanted artefacts in the results if you don't use some randomness. having said that, there might be ways of reducing the need for it (which would be more efficient).

It's not necessary, but it's convenient. What's actually happening here is Monte Carlo integration. Randomness guarantees you converge to the right answer and prevents correlation artifacts. It also gives you a nice film-grain look when you don't have the time to take samples until complete convergence.
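A minimal sketch of the idea in Python (not the author's Lisp, and the example integrand is mine): estimate a value by averaging the function at many random points. The estimate converges as samples accumulate, and the leftover variance is exactly the film-grain noise you see before convergence.

    import random

    def mc_estimate(f, n):
        # Monte Carlo estimate of the integral of f over [0, 1]:
        # average f at n uniformly random points.
        return sum(f(random.random()) for _ in range(n)) / n

    # The integral of x^2 over [0, 1] is 1/3. Few samples give a
    # noisy ("grainy") estimate; many samples converge on 1/3.
    print(mc_estimate(lambda x: x * x, 100))
    print(mc_estimate(lambda x: x * x, 100000))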

Nice. I created a package to generate similar depth of field images in R, using a depth map/image pair. Also offers customizable bokeh shape and some other nice features.

GitHub: https://github.com/tylermorganwall/rayfocus Blog post: https://www.tylermw.com/portrait-mode-data/


So pretty! I'm going to assimilate this purely for the aesthetic pleasure :D

I know, this is so hot! And it's written in Lisp; I'm having a nerdgasm!

From his generative collection these ones really stand out:

https://img.inconvergent.net/img/gen/20170523-193637-305712-...

https://img.inconvergent.net/img/gen/20170520-230701-136920-...

It's almost hard to believe they are computer generated. They just seem so organic.


It looks like the links you posted are broken because the site owner doesn’t allow hotlinking to those images.

Could you post the link to the blog post with those images? I’d absolutely love to see them!



Thanks for linking those, Anders. jv22222 is right, they really do look amazingly organic. Love your work!

thank you. btw, the green one is rather similar to floraform by Nervous System. https://n-e-r-v-o-u-s.com/projects/sets/floraform/

If you don't mind me asking, can you recommend any good resources for getting into generative art?

I've been interested for a while and played around a bit, but I'd like to dive in a bit more since I have a bit more spare time now than I've had in a while.


there are some references on my website; see the faq and the generative section. there are many ways to start, depending on your previous knowledge and how you prefer to work. The Nature of Code is a book that might be of interest. Nervous System have written up several of their projects. then there is the work by early generative artists: Vera Molnar, Frieder Nake, Manfred Mohr, Lillian Schwartz, among others. also, see https://github.com/terkelg/awesome-creative-coding/blob/mast...

Thank you for the recommendations! I’ll check them out.

Hitting enter in the address bar allows you to access them.

I was accessing them on mobile earlier, so maybe that was the difference? I'm almost certain I did that.

Either way, the author posted the links, so I saw what jv22222 was talking about.


On Chrome mobile you need to put a '?' in the URL so it thinks it's a new site and doesn't send the referrer.

Since the theory isn't really described here, anyone who wants a tad more might look at the Ray Tracing in One Weekend booklet for a touch of the optics math behind depth of field; chapter 11 derives defocus blur. http://www.realtimerendering.com/raytracing/Ray%20Tracing%20...
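Roughly, the construction there comes down to jittering the ray origin over a lens-sized disk while keeping every ray aimed at the same point on the focal plane. A minimal Python sketch of that idea (my own names, and it assumes the lens disk lies in the xy-plane through the eye), not the booklet's actual code:

    import random

    def random_in_unit_disk():
        # rejection-sample a point inside the unit disk
        while True:
            x, y = random.uniform(-1, 1), random.uniform(-1, 1)
            if x * x + y * y < 1.0:
                return x, y

    def defocus_ray(eye, focus_point, aperture):
        # Offset the ray origin across the aperture, but keep the ray
        # aimed at the in-focus point. Points on the focal plane stay
        # sharp; points off it are hit by diverging rays and blur.
        dx, dy = random_in_unit_disk()
        origin = (eye[0] + dx * aperture / 2.0,
                  eye[1] + dy * aperture / 2.0,
                  eye[2])
        direction = tuple(f - o for f, o in zip(focus_point, origin))
        return origin, direction

Averaging many such rays per pixel gives the depth-of-field effect; the aperture size controls how quickly things blur away from the focal plane.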

> find the new position, w = v + rndSphere(r).

For anyone wanting control over the look of the defocus blur, also known as the "bokeh": random sphere sampling gives you more samples toward the center than toward the edges. For a circular aperture, it is slightly better and more correct to sample a disk than a sphere, and slightly faster to converge as well. You can optionally swap the disk for a hexagon or octagon if you want the look of a film-camera-style aperture.
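A small sketch to make the density difference concrete (Python; rnd_sphere and rnd_disk are my names, standing in for the post's rndSphere): projecting a uniform ball onto the aperture plane piles samples up near the center, while the sqrt-radius disk sampler stays uniform over the whole aperture.

    import math, random

    def rnd_sphere(r):
        # uniform point inside a ball of radius r (rejection sampling)
        while True:
            p = tuple(random.uniform(-r, r) for _ in range(3))
            if sum(c * c for c in p) <= r * r:
                return p

    def rnd_disk(r):
        # uniform point on a disk of radius r; the sqrt on the radius
        # keeps the area density constant out to the edge
        s = r * math.sqrt(random.random())
        a = 2.0 * math.pi * random.random()
        return s * math.cos(a), s * math.sin(a)

    # Fraction of samples landing within half the radius (projected onto
    # the x/y plane): ~0.25 for the disk, ~0.35 for the ball, i.e. the
    # sphere-based sampler is visibly center-heavy.
    n = 100_000
    inner = lambda pts: sum(x * x + y * y <= 0.25 for x, y, *_ in pts) / n
    print(inner([rnd_disk(1.0) for _ in range(n)]))
    print(inner([rnd_sphere(1.0) for _ in range(n)]))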


i would say that what is correct depends on the intention

I'm talking about correctness in the physical optics/lens sense. With lenses, light always passes uniformly through the aperture, whereas sphere sampling is non-uniform.

Intention is totally fine. This is reasonable if you actually intended to do something different than what a camera does, or if the intention is not physical correctness. This blog post seems to be intending to do something easy for picking samples, as opposed to something optically correct. I'm all for easy, but I also think it never hurts to understand the tradeoff you're choosing, nor to present the harder alternatives.

It's also worth considering disk sampling rather than sphere sampling, because it's barely any harder, and it will make the code converge to the same quality something like 2x faster. Sphere sampling spends too much time in the middle and not enough at the edges. Disk sampling only takes a teeny tiny bit more arithmetic. Jittering & QMC methods will also help a lot with efficiency.
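For the jittering part, a rough sketch (Python, hypothetical names): stratify the unit square into a grid, pick one jittered point per cell, and push the points through the standard square-to-disk mapping. The per-sample cost is essentially the same as pure random sampling, but the lens samples are spread much more evenly, which is where the faster convergence comes from.

    import math, random

    def jittered_disk_samples(r, k):
        # k*k stratified samples on a disk of radius r: one jittered
        # point per grid cell, mapped via radius = r*sqrt(u), angle = 2*pi*v
        for i in range(k):
            for j in range(k):
                u = (i + random.random()) / k
                v = (j + random.random()) / k
                s = r * math.sqrt(u)
                a = 2.0 * math.pi * v
                yield s * math.cos(a), s * math.sin(a)

    lens_samples = list(jittered_disk_samples(1.0, 8))  # 64 well-spread samples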


yeah, i'm going for easy. and also, easily explainable. i like to leave some of the details up to whoever tries it. which is also why i generally don't include code anymore. i guess i could have had more references though.

sphere sampling has the appearance i want. but sampling inside discs, with some probability distribution over the disc radius, would probably also work. i didn't try it here.


This could be made a lot clearer with an animated version either changing the focus depth or rotating the view...
