

CryENGINE 3 Irradiance Volumes vs Frostbite 2 Tiled Lighting


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

15 replies to this topic

#1 mrheisenberg   Members   -  Reputation: 356


Posted 06 October 2012 - 09:21 AM

Hello, I have managed to implement a technique similar to the one used in Battlefield 3 (deferred shading with tiled light culling in a compute shader). However, I couldn't find any info on the so-called irradiance volumes. What exactly are they? Are they more efficient than the method used in Battlefield 3? With all the limitations of deferred shading, the method I'm currently using gets really annoying to work with. Wouldn't it be better to just have a forward renderer that renders once, and in the pixel shader loops over all the lights for each pixel and calculates its illumination? Why do people say the geometry has to get "re-drawn" for each light in the forward rendering approach?


#2 Hodgman   Moderators   -  Reputation: 27677


Posted 06 October 2012 - 09:44 AM

Why do people say the geometry has to get "re-drawn" for each light in the forward rendering approach?

On older hardware, you would have a shader that only calculated a single light. Then, for each light, you would re-draw the object using that same shader (additively blending the second and subsequent lights).
After this, people optimized the technique down to a single pass by having n different shaders for n different light counts; if an object was lit by 5 lights, you'd use the 'Forward5Lights' shader.
Today, you can load thousands of lights into a cbuffer/texture/etc., and then use dynamic branching in your shader to loop through each object's required lights (so we're back to one shader and one pass).
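That modern single-pass loop can be sketched on the CPU like this. This is a minimal illustration, not real shader code: the Lambert-only lighting model, the dictionary light layout, and all names are made up for the example.

```python
def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def shade_pixel(position, normal, albedo, lights, light_indices):
    """One pass: loop over the lights assigned to this object and
    accumulate a simple Lambert diffuse term for each."""
    color = (0.0, 0.0, 0.0)
    for i in light_indices:  # the dynamic branch over the object's light list
        light = lights[i]
        to_light = normalize(tuple(lp - p for lp, p in zip(light["pos"], position)))
        n_dot_l = max(0.0, sum(n * d for n, d in zip(normal, to_light)))
        color = tuple(c + a * lc * n_dot_l
                      for c, a, lc in zip(color, albedo, light["color"]))
    return color

# All lights live in one big array (standing in for a cbuffer/texture).
lights = [{"pos": (0.0, 5.0, 0.0), "color": (1.0, 1.0, 1.0)},
          {"pos": (5.0, 0.0, 0.0), "color": (1.0, 0.0, 0.0)}]
```

The point is that only one draw happens per object; the per-light work is a loop in the shader rather than extra passes.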

#3 mrheisenberg   Members   -  Reputation: 356


Posted 06 October 2012 - 09:45 AM

Why do people say the geometry has to get "re-drawn" for each light in the forward rendering approach?

On older hardware, you would have a shader that only calculated a single light. Then, for each light, you would re-draw the object using that same shader (additively blending the second and subsequent lights).
After this, people optimized the technique down to a single pass by having n different shaders for n different light counts; if an object was lit by 5 lights, you'd use the 'Forward5Lights' shader.
Today, you can load thousands of lights into a cbuffer/texture/etc., and then use dynamic branching in your shader to loop through each object's required lights (so we're back to one shader and one pass).


does this mean that deferred shading is obsolete in DirectX 11 now?
EDIT: What I mean is: if it was so easy, why do modern game engines use complex G-buffer techniques?

#4 Hodgman   Moderators   -  Reputation: 27677


Posted 06 October 2012 - 09:56 AM

No, deferred shading is still very efficient when you've got high light counts -- especially tiled variants.
However, there are still people working on new variants of forward shading -- it started losing popularity to deferred, but never truly became obsolete. Recently people have taken the ideas from tiled deferred shading and created tiled forward shading too. Different scenes perform differently on different rendering pipelines, and the best shading pipeline design will change from game to game. Also, it's not as black and white as it used to be: many games sit somewhere in between forward and deferred.
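The core of the tiled idea is just binning lights into screen tiles before shading, so each pixel only loops over its tile's list. A minimal CPU sketch, assuming lights are given as screen-space circles (center x, center y, radius); the real compute-shader version also culls against the tile's min/max depth, which is omitted here:

```python
def build_tile_light_lists(lights, screen_w, screen_h, tile_size=16):
    """For each screen tile, collect indices of lights whose screen-space
    bounds overlap that tile (what the per-tile compute shader pass does)."""
    tiles_x = (screen_w + tile_size - 1) // tile_size
    tiles_y = (screen_h + tile_size - 1) // tile_size
    tile_lists = [[] for _ in range(tiles_x * tiles_y)]
    for li, (cx, cy, radius) in enumerate(lights):
        # Range of tiles covered by the light's bounding box, clamped to screen.
        x0 = max(0, int((cx - radius) // tile_size))
        x1 = min(tiles_x - 1, int((cx + radius) // tile_size))
        y0 = max(0, int((cy - radius) // tile_size))
        y1 = min(tiles_y - 1, int((cy + radius) // tile_size))
        for ty in range(y0, y1 + 1):
            for tx in range(x0, x1 + 1):
                tile_lists[ty * tiles_x + tx].append(li)
    return tile_lists
```

Shading then reads only the list for the pixel's tile instead of all lights, which is where the win over naive forward or deferred comes from.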

Edited by Hodgman, 06 October 2012 - 09:57 AM.


#5 Chris_F   Members   -  Reputation: 1940


Posted 06 October 2012 - 11:17 AM

An alternative to 2D tiled deferred and forward shading is Clustered Deferred and Forward Shading.
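The "clustered" part just extends the 2D tile grid with depth slices, so lights are binned into 3D cells instead of screen tiles. A rough sketch of the cluster lookup only, with made-up grid dimensions and the logarithmic depth partition used in the clustered shading paper:

```python
import math

def cluster_index(x, y, view_z, screen_w, screen_h,
                  near=0.1, far=100.0, tiles_x=16, tiles_y=9, z_slices=24):
    """Map a pixel and its view-space depth to a 3D cluster coordinate.
    Depth slices are logarithmic so near clusters stay small."""
    tx = min(tiles_x - 1, int(x * tiles_x / screen_w))
    ty = min(tiles_y - 1, int(y * tiles_y / screen_h))
    # slice = floor(log(z / near) / log(far / near) * z_slices)
    tz = int(math.log(view_z / near) / math.log(far / near) * z_slices)
    tz = max(0, min(z_slices - 1, tz))
    return tx, ty, tz
```

Light assignment then works like the tiled case, but against these 3D cells; because the cells bound depth too, the lists also work for transparent surfaces.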

#6 MrOMGWTF   Members   -  Reputation: 433


Posted 06 October 2012 - 12:29 PM

Hello, I have managed to implement a technique similar to the one used in Battlefield 3 (deferred shading with tiled light culling in a compute shader). However, I couldn't find any info on the so-called irradiance volumes. What exactly are they? Are they more efficient than the method used in Battlefield 3? With all the limitations of deferred shading, the method I'm currently using gets really annoying to work with. Wouldn't it be better to just have a forward renderer that renders once, and in the pixel shader loops over all the lights for each pixel and calculates its illumination? Why do people say the geometry has to get "re-drawn" for each light in the forward rendering approach?


Hey, I think you're talking about this technique:
http://codeflow.org/entries/2012/aug/25/webgl-deferred-irradiance-volumes/

There's a live demo of it; it's pretty fast and produces nice results. The geometry that reflects indirect light has to be static, but there can be dynamic objects too.
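At runtime, an irradiance volume mostly comes down to trilinearly blending the probes around the shaded point. A toy sketch of that lookup, with scalar probes for brevity; a real implementation stores spherical-harmonic coefficients per probe and evaluates them with the surface normal, and all names here are illustrative:

```python
def sample_irradiance_volume(probes, dims, pos):
    """Trilinear blend of the 8 probes surrounding `pos` (in grid units).
    `probes` is a flat list indexed x + nx * (y + ny * z)."""
    nx, ny, nz = dims
    eps = 1e-6
    # Clamp so the 8 corner probes always exist.
    x = min(max(pos[0], 0.0), nx - 1 - eps)
    y = min(max(pos[1], 0.0), ny - 1 - eps)
    z = min(max(pos[2], 0.0), nz - 1 - eps)
    ix, iy, iz = int(x), int(y), int(z)
    fx, fy, fz = x - ix, y - iy, z - iz

    def probe(px, py, pz):
        return probes[px + nx * (py + ny * pz)]

    result = 0.0
    for dz in (0, 1):
        for dy in (0, 1):
            for dx in (0, 1):
                weight = ((fx if dx else 1 - fx) *
                          (fy if dy else 1 - fy) *
                          (fz if dz else 1 - fz))
                result += weight * probe(ix + dx, iy + dy, iz + dz)
    return result
```

The expensive part (baking what each probe sees) happens offline or incrementally, which is why the reflecting geometry has to stay static.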

+ I have a question.
You said:

I have managed to implement a technique similar to the one used in battlefield 3


Are you talking about the global illumination from Geomerics' Enlighten?
If so, can you tell me more about this technique?

#7 mrheisenberg   Members   -  Reputation: 356


Posted 07 October 2012 - 08:26 AM

Hey, I think you're talking about this technique:
http://codeflow.org/...diance-volumes/

There's a live demo of it; it's pretty fast and produces nice results. The geometry that reflects indirect light has to be static, but there can be dynamic objects too.

+ I have a question.
You said:


I have managed to implement a technique similar to the one used in battlefield 3


Are you talking about the global illumination from Geomerics' Enlighten?
If so, can you tell me more about this technique?



No, I meant that I implemented the tiled deferred one after I saw how it works in that sample code.

An alternative to 2D tiled deferred and forward shading is Clustered Deferred and Forward Shading.


So is this like an advanced version of the tiled deferred shading method? Is there a source with more info on this technique?

#8 Pottuvoi   Members   -  Reputation: 240


Posted 07 October 2012 - 11:54 PM

So is this like an advanced version of the tiled deferred shading method? Is there a source with more info on this technique?

The link had a paper about the tech:
http://www.cse.chalm...ustered_shading

They also had a new paper about tiled forward shading at SIGGRAPH, but only a pre-print is available:
http://www.cse.chalm...ed_forward_talk
It focuses on the ability to light transparent surfaces.

And CryEngine 3's Light Propagation Volumes paper:
http://www.crytek.co...ion_Volumes.pdf

Edited by Pottuvoi, 08 October 2012 - 01:20 AM.


#9 Lightness1024   Members   -  Reputation: 695


Posted 08 October 2012 - 03:18 AM

Irradiance volumes (as in Tatarchuk's presentation "Irradiance Volumes for Games") are merely a light-probe technique, like the post on codeflow.org linked previously.
Crytek mention that irradiance is used in conjunction with LPV. Don't confuse the two methods, because they are radically different. In their first SIGGRAPH presentation, Crytek mention that they make the two work together because it is a bad idea to inject sky radiance into the LPV: first, because you would need 55 steps of propagation to make the radiance flow all the way down; second, because it would disrupt the flux of other lights due to the poor 2-band SH representation, and interfere with itself, since sky radiance is hemispherical and has to be injected from 5 faces of the LPV; third, because a volume full of flux makes the volume's limits very noticeable; and lastly, because this technique is full of leaks, it is better to keep the flux to a minimum.
So they use classic dome irradiance for the sky's contribution. They don't say how they solve occlusion of the sky's radiance, and I think they do not; that is why it is nowhere mentioned in their second paper.
Battlefield uses Enlighten, which is a radically different approach: it has an 'offline' cost, is less dynamic, and handles occlusion with more difficulty. But LPV is not the holy grail in that domain either; its advantage is its very low cost. A better technique for distant geometric AO would be cone tracing, but it costs too much memory and requires heavy shader model 5 code.

#10 MJP   Moderators   -  Reputation: 10235


Posted 08 October 2012 - 02:51 PM

Are you talking about the global illumination from Geomerics' Enlighten?
If so, can you tell me more about this technique?


It's just radiosity performed at run-time, with some of their own particular optimizations. If you read up on classic radiosity techniques, you should be able to get a rough idea of what they're doing. They also have some public presentations on their website.
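For reference, the textbook gather-style radiosity iteration is tiny. This is a toy Jacobi solve of B_i = E_i + rho_i * sum_j F_ij * B_j, not Enlighten's actual code; their value is in precomputing and compressing the form-factor system so an iteration like this is cheap enough per frame:

```python
def solve_radiosity(emission, reflectance, form_factors, iterations=50):
    """Iterate B_i = E_i + rho_i * sum_j F_ij * B_j until it settles.
    emission/reflectance: per-patch scalars; form_factors: n x n matrix."""
    n = len(emission)
    b = list(emission)  # start from direct emission
    for _ in range(iterations):
        b = [emission[i] + reflectance[i] *
             sum(form_factors[i][j] * b[j] for j in range(n))
             for i in range(n)]
    return b
```

With two patches facing each other (form factors 1 both ways, reflectance 0.5, one emitter), this converges to the analytic fixed point B = (4/3, 2/3).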

By the way, I'm pretty sure that BF3 doesn't even use Enlighten at runtime. They don't appear to have any dynamic GI, so they probably just use Enlighten to (very quickly) pre-bake GI for their lightmaps.

#11 TheChubu   Crossbones+   -  Reputation: 3705


Posted 08 October 2012 - 06:55 PM

By the way, I'm pretty sure that BF3 doesn't even use Enlighten at runtime. They don't appear to have any dynamic GI, so they probably just use Enlighten to (very quickly) pre-bake GI for their lightmaps.

This. That's why you don't see, for example, a dynamic day and night cycle in BF3. It's all pre-baked and tuned per-scene by artists.



#12 Frenetic Pony   Members   -  Reputation: 1187


Posted 08 October 2012 - 09:29 PM

Battlefield uses Enlighten, which is a radically different approach: it has an 'offline' cost, is less dynamic, and handles occlusion with more difficulty. But LPV is not the holy grail in that domain either; its advantage is its very low cost. A better technique for distant geometric AO would be cone tracing, but it costs too much memory and requires heavy shader model 5 code.


Voxel cone tracing can be handled on newer hardware, and besides, there's a lot of optimization room left. Compressed 3D textures could help a lot with the memory problem. Speaking of voxel cone tracing, it can also handle direct lighting well enough if you're using it already. Not only does it handle occlusion, but you also get a full image-based lighting solution if you want.

The real problems with voxel cone tracing are other things: constant re-rasterization (though again, pre-computed voxels in a 3D texture could help a lot), thin geometry does not work well, how you'd handle things like grass and trees still needs to be worked out, and the cone tracing itself is fairly expensive.
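The cone trace itself is basically a ray march where the sampled mip level follows the cone's width. A toy sketch of that loop; the sampling callback and the marching constants are made up for illustration (a real tracer samples a prefiltered 3D mip chain on the GPU):

```python
import math

def trace_cone(sample_fn, origin, direction, aperture, max_dist, voxel_size=1.0):
    """Front-to-back accumulation along a cone through prefiltered voxels.
    sample_fn(pos, mip) stands in for the 3D texture fetch and returns
    (color, alpha); wider cone sections read coarser mips."""
    color, occlusion = 0.0, 0.0
    t = voxel_size  # start one voxel out to avoid self-sampling
    while t < max_dist and occlusion < 0.99:
        # Cone diameter at distance t picks the mip level to sample.
        diameter = max(voxel_size, 2.0 * t * math.tan(aperture * 0.5))
        mip = math.log2(diameter / voxel_size)
        pos = tuple(o + d * t for o, d in zip(origin, direction))
        c, a = sample_fn(pos, mip)
        # Standard front-to-back compositing.
        color += (1.0 - occlusion) * a * c
        occlusion += (1.0 - occlusion) * a
        t += diameter * 0.5  # step size grows with the cone
    return color, occlusion
```

The early-out once occlusion saturates is also why the same trace gives you ambient occlusion essentially for free.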

As for Battlefield, the original plan (as far as I know) was for dynamic GI. Unfortunately, Enlighten's solution for dynamic content isn't terribly great, nor nearly fast enough to run on the 360/PS3. Eventually they scoped it down to PC only, and then dropped it altogether.

Edited by Frenetic Pony, 08 October 2012 - 09:30 PM.


#13 Lightness1024   Members   -  Reputation: 695


Posted 09 October 2012 - 02:30 AM

It is true that cone tracing has an open road of improvements and derivations ahead. We are already seeing people on this forum trying to implement a version in a regular grid (cabaleira). Of course, the issues of thin objects (maybe solvable by tessellating/cutting objects?) and light leaks remain, but apart from view-bound solutions like path tracing, I do not know of any GI technique that doesn't have difficulties with leaks. Final gather is full of them; empirical tricks have to be used to clip photon samples, and other horrors, to avoid leaks. In LPV, Crytek use an empirical 'central difference' damping which doesn't work perfectly, because it darkens areas that are a receiver on one side and an emitter on the other (black hole effect). I wonder what kind of artifacts cone tracing will give; judging from the last scene in the video (plates and trees in some kind of atrium restaurant), I see ugly dark splotches on the arcades and near the ceilings.
Other, older techniques are simply worse:
- pure SSGI (RSM): not even usable with large discs; occlusion?
- radiance hints: horribly low information
- instant radiosity: mentioned above (BF3)
- sparse voxel GI: same issues as LPV (noise in occlusion, etc.)
- imperfect shadow maps: horrible to implement, dirty result, slow...

Fortunately, with all the room left for improvement, researchers won't be out of a job anytime soon.

#14 MrOMGWTF   Members   -  Reputation: 433


Posted 09 October 2012 - 10:43 AM


Are you talking about the global illumination from geomerics' enlighten?
If so, can you tell me some more informations about this technique?


It's just radiosity performed at run-time


Radiosity
.
.
in realtime
.
.
.
just

JUST? It's freaking radiosity performed in realtime; it's too awesome to be "just radiosity".
I'm so curious how they calculate such beautiful GI in realtime...

#15 Lightness1024   Members   -  Reputation: 695


Posted 09 October 2012 - 03:32 PM

Well, I think MJP was not trying to say "just" in that sense. He meant "there is nothing more to it": the method already existed, so the steps are known. That kind of "just". Basically, if you know radiosity, you don't need to read the "instant radiosity" paper. Roughly, anyway.

#16 MJP   Moderators   -  Reputation: 10235


Posted 09 October 2012 - 06:40 PM

What I meant is that they aren't doing anything radically new at a fundamental level... radiosity has been around for a very long time, and a lot of research has been devoted to optimizing it. Most of what Enlighten offers is a framework for processing scenes and handling the data in a way that's optimized for their algorithms. I'm sure they've devoted a lot of time to optimizing the radiosity solve, but I don't think that's really necessary for understanding what they're doing at a higher level.

What they're doing isn't magic... their techniques only work on static geometry, so a lot of the heavy lifting can be performed in a pre-process. They also require you to work with proxy geometry with a limited number of patches, which limits quality. They also limit the influence of patches to zones, and only update a certain number of zones at a time (you can see this in videos of their algorithm where the lighting changes quickly).

I don't mean to sound like I'm trivializing their tech or saying it's "bad" in any way (I'm actually a big fan of their work); my point was just that their techniques stem from an area of graphics that's been around for a long time and is well-documented.



