It seems to me that this game doesn't actually take into account all the apparent light sources when computing shadows. See for instance around 7:45: there is an obvious light source in the bottom-right region (actually a bunch of synchronized pulsating lights on some sort of cylinders) which clearly should affect the surfaces close to it, but it doesn't in either mode! In the next comparison, there's a fire burning close to a wall, and likewise the fire doesn't seem to cast any light on the wall. The commentators do mention that the game doesn't handle that many dynamic lights, which limits the effect of RTX.
I don't know how that RTX tech is working, but from what I've seen of the port of Quake2, it looked much more convincing - did you see it before making that judgement?
A lot of valuable work on the Blender side, but the main goal is questionable, and the author explains why.
Pre-calculated lighting had very little to do with physical correctness; it was purely an artistic placement of light sources in a specific map, processed with a specific tool (official or third party) with specific defaults. Two adjacent rooms could be lit in very different manners because the map maker decided it looked good; two maps coming from different sources could not be expected to share any common lighting principles. Quirks like overbright values were not even consistent among officially supported renderers, and were inherited by the later Quake and Source engines (which added their own quirks and light-processing options). In short, there was no reference for the final result except the final result itself, and it was often unstable (changing textures, geometry, or processing constants would shift the look too much and require fixes).
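To make that concrete, here is a minimal sketch (all names and constants are made up, not from any real tool) of the direct-light accumulation a map compiler performs per lightmap texel. The falloff curve and the overbright clamp are exactly the kind of tool-specific defaults that made results non-portable between compilers:

```python
import math

def bake_luxel(luxel_pos, normal, lights, overbright=2.0):
    """Accumulate direct light at one lightmap texel (luxel).

    lights: iterable of (position, intensity) pairs.
    Hypothetical falloff and clamp; every real tool had its own."""
    total = 0.0
    for light_pos, intensity in lights:
        to_light = [l - p for l, p in zip(light_pos, luxel_pos)]
        dist = math.sqrt(sum(c * c for c in to_light))
        if dist == 0.0:
            continue
        # Lambert term: surfaces facing away from the light get nothing.
        cos_a = sum(n * c for n, c in zip(normal, to_light)) / dist
        if cos_a <= 0.0:
            continue
        # Inverse-square falloff; real compilers used linear or custom curves.
        total += intensity * cos_a / (dist * dist)
    # Arbitrary overbright clamp -- another tool-specific quirk.
    return min(total, overbright)
```

Change the falloff exponent or the clamp and the whole map shifts in brightness, which is why there was no stable reference look.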
To make the game look as intended, you have to somehow rely on original lightmaps that are tightly coupled with original low resolution geometry and textures. Given that people still argue which of the original renderers gives the “true” picture, I have my doubts about making a new one that pretends some average light model is good enough for everything. Even for episode 1, hand-placed lights had to be reintroduced into the system, and ad-hoc fixes had to be done, but manual fixes are not an option for all the existing maps.
On the other hand, the main thing I notice in the Diablo 2 shots (having had my attention drawn to the rendering -- I'm not sure what the article means by "pay attention, how the pales (?) don't cover the same floor-pixels all the time") is how the torch appears to be casting shadows perpendicular to itself. And how as you walk by the torch, the direction of your shadow doesn't change at all.
Actually, a lot of games still use direct lighting in the same style as Doom 3, but with an added ambient term (like Quake 4) and using shadow maps, because somehow (I don't agree) the artefacts of shadow maps are considered more tolerable than the hardness of stencil shadows. A lot will only allow a single shadowing light source as well...
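As a rough sketch (names and inputs are illustrative, not from any actual engine), that direct model is just a flat ambient term plus a shadowed Lambert term per light:

```python
def shade(normal, ambient, lights):
    """Doom 3-style direct lighting with an added ambient term.

    lights: iterable of (light_dir, intensity, shadow) where shadow is
    0.0 for fully occluded and 1.0 for fully lit. Real engines add
    specular, attenuation, etc.; this only shows the structure."""
    color = ambient
    for light_dir, intensity, shadow in lights:
        n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
        color += intensity * n_dot_l * shadow
    return color
```

With stencil shadows the `shadow` factor is binary per pixel, which is exactly the hardness being complained about; shadow maps let it be filtered into something softer (with their own artefacts).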
There is an argument that high-quality shadow maps can encode the penumbra, but current-gen console hardware is miles away from that quality (top-end PC hardware is not).
Nowadays with stencil shadows there are performance arguments: we would rather spend our fill rate on HD resolutions, deferred passes, and post effects. There is also the potential patent issue (is that still a thing? It's embarrassing really for such a trivial algorithm :( )
I'm not sure what there is to be skeptical about. The only difference between RT Quake and Quake is that the lights are not precomputed at the start of the level. Any other game would just need to rework its light-baking step to use RTX, and the job is done.
The only downside that can be pointed out is that you are hardware dependent. It would be amazing to see this on a console; having dynamic lighting in all games would help developers and gamers.
There's a difference between "some voxel engines can have dynamic lighting" and "dynamic lighting works with this particular engine".
It's not like this is some obscure or useless feature that nobody has ever heard of. In fact for all the "detail" one can't help but notice the number of other things missing from the video.
Were the shadows actually rendered in real time, or were they prerendered and baked into the textures along with the environment lighting? Genuinely curious.
I could be wrong but, AFAIK, Doom 3 was the first game with real-time shadows and dynamic lighting; every game before that employed various tricks that simulated those instead.
I think the big gain came from ReSTIR. Previous games released with RT "remasters", like Quake 2, have only a few lights that need to be traced per frame. ReSTIR apparently optimizes how rays are traced and allows far more light sources to be visible.
This is where the performance gain for the hundreds of lights Cyberpunk might have in a single frame comes from.
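For the curious, the core trick ReSTIR builds on (resampled importance sampling) can be sketched as weighted reservoir sampling: stream over many candidate lights, keep exactly one with probability proportional to a cheap weight, and then trace a shadow ray only to that chosen light. A toy single-reservoir version in Python (function names are mine, and the real thing also reuses reservoirs across pixels and frames):

```python
import random

def reservoir_sample_light(lights, weight_fn, rng=random):
    """Pick one light with probability proportional to weight_fn(light),
    in a single pass over arbitrarily many candidates.

    Returns (chosen_light, total_weight); the total weight is needed to
    unbias the final shading estimate in real RIS."""
    w_sum = 0.0
    chosen = None
    for light in lights:
        w = weight_fn(light)
        w_sum += w
        # Replace the reservoir entry with probability w / w_sum.
        if w_sum > 0.0 and rng.random() < w / w_sum:
            chosen = light
    return chosen, w_sum
```

The cost per pixel is one cheap weight evaluation per candidate plus a single traced shadow ray, instead of one ray per light, which is why scenes with hundreds of emitters become tractable.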
Those are pretty old 3D rendering tricks causing that, though. I fail to see recent games doing those at all.
Overall, shadows aren't the hard part nowadays; light is: global illumination and light bouncing around the scene. Though shadows ARE still very expensive.
The main advantage of the new technology is interactivity. Dynamic illumination is _dynamic_ instead of statically baked, which has obvious advantages for changing environments. Comments like yours that pixel-peep screenshots miss the point.
These limitations may seem imperceptible to you, but they are very real in terms of an artist's ability to execute their vision. Most games do not currently have multiple dynamic light sources in a scene. You can go back and play Fear on PC to see that, and it's awesome; Doom 3 as well.
But graphics tech (rightfully) optimized away from that direction, and now we need to account for dynamic illumination and transparency. Better shadows (especially in interactive moments) are a nice bonus.
That's a significant limitation for a modern technique.
Here's the full algorithm for anyone curious:
> The Forward Renderer works by culling lights and Reflection Captures to a frustum-space grid. Each pixel in the forward pass then iterates over the lights and Reflection Captures affecting it, shading the material with them. Dynamic Shadows for Stationary Lights are computed beforehand and packed into channels of a screen-space shadow mask, leveraging the existing limit of 4 overlapping Stationary Lights.
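A toy 2D analogue of that culling grid (hypothetical names; the real renderer culls against a frustum-space 3D grid, but the structure is the same):

```python
def build_light_grid(lights, tiles_x, tiles_y, screen_w, screen_h):
    """Bin each light index into every screen-space tile its radius touches.

    lights: iterable of (x, y, radius) in screen units."""
    grid = [[[] for _ in range(tiles_x)] for _ in range(tiles_y)]
    tw, th = screen_w / tiles_x, screen_h / tiles_y
    for i, (x, y, radius) in enumerate(lights):
        x0 = max(0, int((x - radius) / tw))
        x1 = min(tiles_x - 1, int((x + radius) / tw))
        y0 = max(0, int((y - radius) / th))
        y1 = min(tiles_y - 1, int((y + radius) / th))
        for ty in range(y0, y1 + 1):
            for tx in range(x0, x1 + 1):
                grid[ty][tx].append(i)
    return grid

def lights_for_pixel(grid, px, py, tiles_x, tiles_y, screen_w, screen_h):
    """Look up the (hopefully short) light list for one pixel's tile."""
    tx = min(tiles_x - 1, int(px * tiles_x / screen_w))
    ty = min(tiles_y - 1, int(py * tiles_y / screen_h))
    return grid[ty][tx]
```

Each pixel then shades only the lights in its own tile's list, instead of iterating over every light in the scene.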
Near the end they claim to have added more detailed shadows to their engine just after making this demo. I'm not sure if they meant 'dynamic shadows' by that.
Yes, the blog post explains that this is how the game works. Some lights do cast shadows, but they are expensive, so artists may only place a few per scene. There seem to be none that could make him cast any. Also note how his face is all red even though that red light is far behind him in the periphery.
This was pretty cool, but I couldn't help being struck by how utterly poorly it performs, especially as soon as you add just a handful of objects and two light sources. Sure, it's written in JS, but JS has come a long way in terms of performance, and so have our computers.
This demonstration instantly makes me think of Blizzard's Diablo II from 2000, which features an identical effect during gameplay, with a seemingly infinite number of shadow-generating obstacles and as many light sources casting shadows as there are players on the screen, and it runs perfectly smoothly even on a 500 MHz P3.
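For reference, the 2D point-light shadow test such a demo needs is cheap in principle: a pixel is shadowed when the segment from the light to it crosses any occluder edge. A minimal sketch using orientation tests (ignoring collinear/touching edge cases; names are mine):

```python
def segments_intersect(p1, p2, p3, p4):
    """Proper intersection of segments p1-p2 and p3-p4 via 2D cross products."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1 = cross(p3, p4, p1)
    d2 = cross(p3, p4, p2)
    d3 = cross(p1, p2, p3)
    d4 = cross(p1, p2, p4)
    # Endpoints of each segment must lie on opposite sides of the other.
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

def in_shadow(light, point, occluders):
    """True if any occluder segment blocks the light -> point ray."""
    return any(segments_intersect(light, point, a, b) for a, b in occluders)
```

Games like Diablo II don't even do this per pixel; they project each occluder into a shadow polygon once per frame and rasterize it, which is why it scales so well on old hardware.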