Next-Gen Lighting Is Pushing the Limits of Realism (kotaku.com)
162 points by RaSoJo | 2014-08-22 | 72 comments




Lighting does a lot to improve a scene, but unfortunately I'm almost certain that it's pre-rendered lighting, which isn't all that interesting since any dynamic objects present ruin the effect.

Not that this is any sort of proof, but if you watch the last few videos you can definitely see the shading change on the tree leaves that move around in the wind.

Trees are static though - they only deform; they don't move around the scene itself.

Not sure if that answers your doubts (as I'm totally not into rendering), but from the forum thread[1], about one of the videos apparently (not sure which, or all?):

> is that video was rendered real time?

> Yep (well, the lightmaps are pre rendered but it take like 10 min. In engine, it runs at 50-60fps on a gtx670)

[1] https://forums.unrealengine.com/showthread.php?28163-ArchViz...


Interesting choice with the Mies van der Rohe Pavilion, because as stunning as it is in real life, its simple geometry actually makes for a bit of a boring tech demo. Still looks very impressive though.

Thanks for pointing this out. I was under the assumption that the locale was the artist's own. Would be interesting to see an attempt at animating the pavilion at night: http://cdn3.vtourist.com/4/3600745-Barcelona_Pavilion_of_Mie...

Wondering what the complications would be...


I think this is a bit of a legacy (like the Stanford Bunny). One of the seminal works in lighting was photon mapping by Henrik Jensen, and he used Ludwig Mies van der Rohe's structure as an example (http://graphics.ucsd.edu/~henrik/animations/jensen-the_light...). Keep in mind this was done 14 years ago! It was a genuine wow moment for me back then!

Should be "Next-Gen Game Lighting".

This is basically just physically-based shading, which has been done for the past 4/5 years in VFX. Essentially it's energy-conserving materials with the correct Fresnel effect based on the surface's IOR, which takes things to the next level (for games at least). Doing this properly for layered surfaces (e.g. a diffuse wood layer with a clearcoat varnish layer) gives very nice looking results.

Some complex materials can have up to 3 spectral BSDF lobes for reflection, which can only really be done with path tracing.

For VFX, people are starting to push into spectral rendering now, and trying to optimise things like volumetric rendering for SSS, which is needed for ultimate realism.
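
To make the jargon a bit more concrete, here's a minimal sketch of the energy-conserving Fresnel/IOR idea in plain Python (the function names and structure are illustrative only, not any engine's actual shading code):

    def f0_from_ior(ior):
        # Reflectance at normal incidence for a dielectric in air,
        # from the Fresnel equations: F0 = ((n - 1) / (n + 1))^2
        return ((ior - 1.0) / (ior + 1.0)) ** 2

    def schlick_fresnel(f0, cos_theta):
        # Schlick's approximation to the full Fresnel equations
        return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

    def shade(albedo, ior, cos_theta, light):
        # Energy conservation: whatever fraction the specular lobe reflects
        # is removed from the diffuse term, so the surface never returns
        # more energy than it receives.
        ks = schlick_fresnel(f0_from_ior(ior), cos_theta)
        kd = 1.0 - ks
        return kd * albedo * light + ks * light

    print(f0_from_ior(1.5))  # glass-like IOR of ~1.5 -> only ~4% reflectance head-on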


What?

OK, I'll don the Captain Jargon outfit and give that a go:

- VFX is just "visual effects", http://en.wikipedia.org/wiki/Visual_effects.

- Fresnel refers to the Fresnel equations, which describe how much light a surface reflects versus refracts as a function of viewing angle, http://en.wikipedia.org/wiki/Fresnel_equations.

- IOR is of course "index of refraction", http://en.wikipedia.org/wiki/Refractive_index.

- BSDF references "bi-directional scattering distribution functions", http://en.wikipedia.org/wiki/Bidirectional_scattering_distri....

- Path tracing is a Monte Carlo method of rendering images with global illumination, http://en.wikipedia.org/wiki/Path_tracing.

- SSS means "sub-surface scattering", http://en.wikipedia.org/wiki/Subsurface_scattering.


+1 - sorry, I do this stuff all day, so am used to the slang :)

Fresnel affects plain reflection as well - it's the reason you don't often (depending on the material) see much of a reflection head-on on a shiny surface, but do at a glancing angle (glass or car paint, for example).


What do you work with?

Renderers and shaders for one of the biggest VFX companies in the world :)

I recently implemented Fresnel reflection in my mobile game engine; it's one of those things which makes an objectively small difference but a subjectively large one.

> the reason you don't often see much of a reflection head-on on a shiny surface, but do at a glancing angle

Ah! I've noticed shiny things in real life that made me think "if I saw this in a game, I would think the reflection was overdone." I think what you've mentioned there is the key. The reflections I remember from early-2000s video games seem overdone because they're reflective at all angles. In real life, the shiny marble floor is only really mirror-like at shallow angles; when looking head-on it only appears glossy.
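
To put numbers on that observation, here's a rough illustrative sketch using Schlick's approximation to the Fresnel equations (not production shader code; 0.04 is just a typical dielectric base reflectance):

    import math

    def schlick(f0, cos_theta):
        # Schlick's approximation to Fresnel reflectance
        return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

    f0 = 0.04  # typical dielectric (glass, marble, clearcoat) at normal incidence
    for angle in (0, 30, 60, 80, 89):  # angle between view direction and surface normal
        print(angle, round(schlick(f0, math.cos(math.radians(angle))), 3))
    # 0 deg -> 0.04, 60 deg -> ~0.07, 80 deg -> ~0.41, 89 deg -> ~0.92:
    # barely any mirror reflection head-on, nearly mirror-like at grazing angles.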


It's been around a lot longer than that (15-20 years?) but has only been practical for video and VFX more recently. People have been doing architectural images etc. with pretty complex BSDFs for ages.

For those who are interested but new to this, some of the earlier global illumination stuff is a good place to start for the fundamental ideas, because they aren't concerned so much with fast paths in the rendering architectures.


Physically-based shading in games is done very differently from VFX. We're still mostly approximating instead of trying to simulate. We don't even call it "physically-based shading", as that would be lying; our version of it is "physically-based rendering."

Your graphics package physically simulates the light; gamedevs simply try to emulate what would happen were it physically simulated.

PBR has been done before UE4 (the latest CoD has it, and I think Frostbite 3 does too). I don't think PBR is directly responsible for what you are seeing in UE4 (it certainly adds to it); it's likely using more recent developments such as deep G-buffers (http://graphics.cs.williams.edu/papers/DeepGBuffer13/).

Gamedevs want to be able to do what you guys do in VFX, but we can't because we have the realtime restriction. We take leaps like this when we figure out how to do fast approximations of what you guys are doing (and deep G-buffers are such a leap).


Yeah, I think the difference is "Physically-based" and "Physically-plausible" :)

Although strictly speaking a lot of VFX rendering is still about how much you can fake, it is a lot more real (as in done in the renderer instead of at the comp stage) than it was 5/6 years ago.


Kotaku is a video game blog, so "Next-Gen" specifically refers to games running on Playstation 4, Xbox One and sometimes high-end PCs. Specifying "Game" on Kotaku is kind of redundant as almost all technology related posts are referring to games in some way. I could see it on a general technology, art or VFX site though.

The SIGGRAPH physically based shading course is a great resource if you're interested in the techniques people are using in production.

2012 - http://blog.selfshadow.com/publications/s2012-shading-course...

2013 - http://blog.selfshadow.com/publications/s2013-shading-course...

2014 (not yet available) - http://blog.selfshadow.com/publications/s2014-shading-course...


The irony being that the point of the article is that if you didn't code for "Next Gen" consoles you'd get all this lovely rendering. The article title is the opposite of what the story is about!

It's worth noting that these scenes take around 10 minutes to render a frame, so it's still a long way from real time.

Detailed in the thread - https://forums.unrealengine.com/showthread.php?28163-ArchViz...

Very pretty though still!


Hm, from another post[1]:

> is that video was rendered real time?

> Yep (well, the lightmaps are pre rendered but it take like 10 min. In engine, it runs at 50-60fps on a gtx670)

So, if I understand correctly, that's 10 minutes to prerender the lightmaps, but then the scenes render in real time? I'm totally not into rendering, though, so sorry if that's what you meant and I'm just missing something obvious.

[1]: https://forums.unrealengine.com/showthread.php?28163-ArchViz...


To add to that, the prerendering is done as part of the preparation of assets - if someone were to download and run this project then the assets are ready to go!

Ah, I missed that, I thought he meant 10 seconds each frame when running.

You are 100% correct. Saying that it isn't realtime due to the baking is like saying Linux isn't fast because it takes a day to compile.

Baking = compilation of game assets
Lightmap = object file for lighting


I think he said only the lightmaps were pre-rendered and that took 10mins. The actual scene runs real-time at 50-60 fps on his gtx670.

(Prerendering lightmaps is typical for games for scenes with static lighting.)


This is incredible. It's only a pity that many straight lines are just too straight.

Do you see many crooked lines in real-life architecture when the wall/surface is supposed to be straight?

In real life architecture the wall/surface is never perfectly straight. This is one of the most common signs that give away 3d renderings.

>the wall/surface is never perfectly straight

Where did you get the idea that a wall is not perfectly straight? You mean some marginal differences in curvature that the human eye cannot distinguish from a straight line anyway?

That would be trivial to mimic in a 3D rendering anyway - literally just a modelling command away.

I'll have to agree with TFA that it's the lighting and textures that give away 3D renderings.


In my experience the human eye can spot the lack of those kinds of imperfections very well. Not quantify them - just notice that something is wrong.

Regarding walls (and edges in general) not being perfectly straight, apart from what I learned from my engineering studies, I've done my fair share of home repairs :)


Drool... Very impressive, especially in a video game engine. I wish I knew more about the 3D world and could do stuff like this...

If you're interested in it, you can work directly with Unreal Engine 4 at an extremely low cost ($19 per month for access to everything: source code, tools, etc.).

I ran a gaming startup in the days of Quake. It has really never been easier or cheaper to acquire access to incredible tools, learning material, and technology to get into the 3D space (whether for gaming or whatever).

Easy enough to at least download the Unreal 4 engine and begin messing around with it, to see exactly how deep your curiosity goes. You'd find out quickly if it's something you're really interested in.

Every time I see the Unreal 4 engine on display it tempts me to get back into it all (I'd go toward VR now).


Of course the images look very nice, but I'm not very impressed. A lot of artists create photorealistic images. For example, the IKEA catalogue[1] has a lot of CG images.

These are just pre-rendered textures and light maps combined with real time lighting and reflections in Unreal Engine 4.

But I agree that UE4 does a very very good job at realistic real time lighting! Also take a look at the blog of Paul Mader: http://paulmader.blogspot.nl/

[1] http://www.cgsociety.org/index.php/CGSFeatures/CGSFeatureSpe...


What about the changing angle of light incidence in this fast-forward scene?

https://www.youtube.com/watch?v=rOkJ1-vnh-s

It somehow looks like it would be dynamic diffuse indirect lighting. Wouldn't that need to be pre-rendered for each position of the sun then? Or is it just really good use of shadow maps, ambient occlusion and screen-space specular reflection:

https://docs.unrealengine.com/latest/INT/Engine/Rendering/Po...

Edit: This particular scene uses light propagation volumes; in the other scenes it's UE's pre-computed (hence static) Lightmass GI, as the parent comment said.

https://docs.unrealengine.com/latest/INT/Engine/Rendering/Li...

https://docs.unrealengine.com/latest/INT/Engine/Rendering/Li...


What UE4 can do is very impressive. But when it comes to real time photo realism I think path tracing is even more impressive. For example this Brigade 3.0 demo: https://www.youtube.com/watch?v=BpT6MkCeP7Y

I'm not that impressed; that noise is really annoying and the detail level is not that high. There are no interesting surfaces and no demo of strong refraction or reflection. I'd like to see how a glass statue would look with this renderer.

Why don't they add a very cheap real-time renderer that produces fast, high-resolution images and then layer the low-resolution output from the path tracer on top? I guess that would make the noise a lot less distracting, as there would be no black pixels anymore.


I can't find it ATM, but I think I saw a technique that did exactly that using something like joint bilateral upsampling: http://research.microsoft.com/en-us/um/people/kopf/jbu/

https://www.youtube.com/watch?feature=player_detailpage&v=ZZ...

The filter_indirect in this demo does exactly that.
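
For the curious, here's a simplified greyscale sketch of the joint bilateral upsampling idea (written from the general technique, not the linked paper's or demo's code): the noisy low-resolution result is upsampled while each coarse sample is weighted by how similar the full-resolution guide image (e.g. a cheap rasterised pass) looks, so noise averages away without blurring across edges.

    import numpy as np

    def joint_bilateral_upsample(low_res, guide, sigma_spatial=1.0, sigma_range=0.1):
        # Upsample `low_res` to the resolution of `guide` (both 2D float arrays),
        # weighting coarse samples by spatial distance and by guide-image similarity.
        H, W = guide.shape
        h, w = low_res.shape
        scale_y, scale_x = h / H, w / W
        out = np.zeros((H, W), dtype=float)
        radius = 2  # neighbourhood size, in low-res pixels
        for y in range(H):
            for x in range(W):
                ly, lx = y * scale_y, x * scale_x  # position in low-res space
                num = den = 0.0
                for j in range(int(ly) - radius, int(ly) + radius + 1):
                    for i in range(int(lx) - radius, int(lx) + radius + 1):
                        if 0 <= j < h and 0 <= i < w:
                            # spatial weight: distance from the exact low-res position
                            ws = np.exp(-((j - ly) ** 2 + (i - lx) ** 2) / (2 * sigma_spatial ** 2))
                            # range weight: similarity in the full-resolution guide image
                            gy = min(int(j / scale_y), H - 1)
                            gx = min(int(i / scale_x), W - 1)
                            wr = np.exp(-((guide[y, x] - guide[gy, gx]) ** 2) / (2 * sigma_range ** 2))
                            num += ws * wr * low_res[j, i]
                            den += ws * wr
                out[y, x] = num / max(den, 1e-8)
        return out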


Wow, I had never considered the IKEA catalogue being so much CG. Makes sense, really.

Now I've got a reason to look through it and check whether I can spot the CG in the pictures (but judging from that article, and their high quality standards, I kind of doubt I could).


If I were an architect, I would be using this to pitch to clients. I know virtual fly-throughs are a thing, but the existing demos I've seen just don't give the same sense of presence as this does.

Indeed, architects have definitely been using the Unreal Engine for quite a while, even before it started looking this good. Here's an article from 2007:

http://www.businessweek.com/stories/2007-12-21/unreal-archit...


I once worked for a company creating pool designs and entire backyards for pool and landscaping companies in Unreal Engine 2.5.

That's nice. How many game or film studios will be driven bankrupt by the yet-again increased cost of content production?

But yes, Blood Soaked Shoot-em-Up 93x-treme is going to look very realistic. It will be almost as if I was really invading Iraq!


Look on the bright side; it's good news for the people with the content-creation skills. It isn't as though the ever-increasing complexity of software has had a negative effect on the industry.

I see another avenue for this. "Reality simulation" - not virtual reality - will, I think, be something to explore. Especially with something like Oculus down the line, there would be a market for it. Why surround yourself with your humdrum home when you can live in a much better one with better surroundings? Watch TV virtually. Read a book virtually, or even take a nap on your virtual sofa in front of a magnificent view while on your "real" bed, surrounded by boring walls you don't see.

That's just... ew. No. That's just wrong. The average person's living conditions shouldn't be so poor in the first place that he feels the need to step into an Oculus to get a nice sofa.

Living conditions don't need to be poor. Think of it as a hyper-realistic Second Life. There's a reason Sims are popular too.

I'm pretty sure that on an absolute scale, Second Life and The Sims are not actually very popular at all.

Not sure why you're getting downvoted--I think you've hit on the point excellently.

This reminds me of playing with POV-Ray years ago. I was so amazed by what could be done on a low-powered PC; granted, it took a long time to render, but it was still somehow magical, like the first time you realize you can program a computer. POV-Ray is free and can create some cool, realistic-looking images[1]. This one is my favorite[2].

[1] http://hof.povray.org
[2] http://hof.povray.org/images/ChristmasBaubles.jpg


Something of an aside: I'm most stoked about this kind of thing because of the oculus rift. Even just playing through HL2 with my DK2 is an amazing experience.

What I'm really interested in with things like the Rift is foveated rendering, in order to get much higher quality graphics out of less computing power. Basically, using high-precision, fast eye-tracking to render at full quality only the portion of the screen you're looking directly at, since your eye can't resolve detail to any great degree outside of a fairly small area in the center of where you're looking.

Foveated imaging in general: http://en.wikipedia.org/wiki/Foveated_imaging
MS Research paper on foveated rendering: http://research.microsoft.com/pubs/176610/foveated_final15.p...
MS trial on foveated rendering: http://research.microsoft.com/pubs/176610/userstudy07.pdf
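
To make the idea concrete, here's a hedged sketch of the core trick (the fovea size, falloff and sample counts below are made-up values, not taken from the MS paper): pick a quality level that falls off with angular distance from the tracked gaze point and spend the shading budget accordingly.

    import math

    def shading_rate(px, py, gaze_x, gaze_y, pixels_per_degree=30.0):
        # Relative shading quality in (0, 1]: full quality inside the foveal
        # region, tapering off with eccentricity (angle away from the gaze point).
        eccentricity_deg = math.hypot(px - gaze_x, py - gaze_y) / pixels_per_degree
        fovea_deg = 5.0     # assumed full-quality radius around the gaze point
        falloff_deg = 30.0  # assumed taper toward the periphery
        if eccentricity_deg <= fovea_deg:
            return 1.0
        return max(0.1, 1.0 - (eccentricity_deg - fovea_deg) / falloff_deg)

    # e.g. drive a per-tile sample count from it (8 samples per pixel at the fovea):
    samples = max(1, round(8 * shading_rate(1500, 900, gaze_x=960, gaze_y=540)))
    print(samples)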


For foveation to go undetected, ideally the central region should update within 5 ms. That's unreachable even with top hardware right now.

The MS paper achieved some nice results using a 120 Hz screen and a very fast tracker, but the rift is far from that.

It's interesting, but it comes with all kinds of issues aside from delay (the periphery degradation strategy, for one). I've been trying to adapt shadow mapping to foveation and latency is killing it. Especially since shadows are high-contrast information, the quality popping and self-shadow flickering are so distracting.


Yeah. I'm not involved in any kind of dev work with that sort of thing, so you'd probably know better than I would, but it definitely seems like it's a problem that's very much in the research phase.

I can imagine the popping would be that much more painful in a VR environment. Game stutters that my brain largely tunes out on a normal monitor become world rocking cataclysmic events in the rift.


So, do path-based rendering (which gets the nice, slow pop-in of noise, right?) and bias the sampling to favor the current point of focus.

You naturally end up with fewer samples in places that don't matter, and faster fill-in where they do.


Would dual GPUs help the problem?

You could have one low-resolution GPU (such as an onboard Intel HD) and another focusing solely on rendering the foveated patch.

5ms per 320x240 section might not be implausible?


The problem isn't solely in rendering fast enough--it's end-to-end latency. Keep in mind that if your eye is being tracked by a 120Hz camera, that's 1/120 =~ 8.3ms latency just from inter-frame time alone. Add to that all sorts of communication bus overhead, image processing of the eye-tracking imagery to figure out eye direction, and potential buffering in the display adapter, and 5ms is not really possible, yet.

That said, I don't think there's theoretical limitations that prevent extremely low latency systems from happening, it's just not something we've kept in mind when designing modern consumer hardware.
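
As a back-of-the-envelope version of that argument (every number below is a rough assumption, not a measurement):

    # Illustrative end-to-end latency budget for gaze-contingent rendering, in ms
    camera_frame_time = 1000.0 / 120  # ~8.3 ms just to capture one frame of a 120 Hz eye camera
    gaze_estimation   = 2.0           # find pupil/glints, compute gaze direction
    render_next_frame = 1000.0 / 90   # ~11.1 ms to render the next frame at 90 fps
    display_scanout   = 1000.0 / 90   # up to another refresh before the photons actually change

    total = camera_frame_time + gaze_estimation + render_next_frame + display_scanout
    print(round(total, 1), "ms")      # ~32.6 ms -- far above a ~5 ms foveation budget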


Would you be able to throw hardware at it? You could render in high-detail not just the area that your eye can see well, but also the area around it which it could be pointing at if it moved during your measurement latency. (Can your eye look anywhere in your image within 20ms? I honestly have no idea.)

Yikes, that makes sense. Thank you for spelling it out a bit! :)

Yeah, I'm not an expert in this area, but my casual reading seems to make it out to be similar to the problems you have with a lot of VR stuff, where the hardware just hasn't needed to be ultra-low-latency before now, but that there aren't necessarily hard limits that would prevent it.

If they ever get it ironed out, it seems like the kind of thing that could offer pretty huge advantages for gaming, at least if just one person is looking at the screen. (I have no idea if anyone's considering tracking multiple sets of eyes, or if that introduces particularly novel problems that wouldn't be covered in solving the single-viewer case.)


"Next-gen" is a bit misleading of a description: the GPU (GTX 670) that this scene is rendered in real-time on has about 30% more power than the GPU in the PS4, although it was released 2 years ago. The CPU used to bake the lightmaps has more than twice the power of the PS4's CPU.

It's not next-gen hardware, it's next-gen lighting. It doesn't matter how powerful your GPU is if no one has written a good lighting algorithm for it.

It doesn't matter how much time the CPU takes to bake the lightmaps since it's done offline.

These videos reminded me of the architectural renders of "The Third & The Seventh" by Alex Roman:

http://vimeo.com/7809605


Both feature the Barcelona pavilion.

It was the German Pavilion for the 1929 International Exposition in Barcelona. https://en.wikipedia.org/wiki/Barcelona_Pavilion


That's very impressive. I wonder what is rendered and what is actually filmed.

There is a compositing breakdown:

http://vimeo.com/8200251

You'll probably find there is a lot more rendered than you expected.


The one thing that is still hard is the radiosity. The way I spot CG that can be rendered in real time is that there simply are not enough radiosity bounces, so objects without direct lighting in the scene end up being too dark. Baking lightmaps takes a long time, and realtime radiosity methods are still CPU-bound.
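
A toy illustration of why truncating the bounces reads as "too dark" (the numbers are made up, and real solvers work per patch rather than with a single scalar):

    # Toy model: each bounce re-reflects a fraction of the previous bounce's energy.
    albedo = 0.5  # average surface reflectivity, chosen for illustration

    def indirect(bounces):
        return sum(albedo ** n for n in range(1, bounces + 1))

    converged = indirect(50)  # effectively the fully converged indirect light
    for b in (1, 2, 3):
        print(b, "bounce(s):", round(indirect(b) / converged, 2))
    # 1 bounce -> 0.5, 2 -> 0.75, 3 -> 0.88: areas lit only indirectly stay
    # noticeably darker until several bounces have been accumulated.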

Sooner or later you will be making this statement about a piece of actual RL video footage that just happens to have a bad exposure/gamma/colour balance :)
