
Global illumination vs Real-time


Hi all! My current project is about studying methods and algorithms for real-time global illumination. The main goal is to adapt theory from current research and integrate it into a next-gen video game engine. So I have started to read a lot of research papers, theses and publications, but none of them can really be applied in real time or fulfill video game requirements. For now, the best I have is the PRT/SH stuff and also irradiance volumes.

Before adapting and developing this, I would like to know if any of you have already tried to implement these kinds of methods. I also know next-gen games already use advanced lighting techniques, like Halo, Crysis or Unreal3, or even middleware like Lightsprint, Fantasylab or Geometrics... but there is no way to know how they did it...

Thanks to those who have already studied / implemented such stuff for their tips and tricks.

It depends on your requirement list. If you have a time-of-day feature, everything needs to be dynamic. If you do not have that, you can store lots of data in textures.
Assuming next-gen games all have a time-of-day feature, you can say that everyone is using a hack here and there, but no one has used a generic GI solution in a shipped game so far.

Quote:
Original post by Shirakana2
I also know next-gen games already use advanced lighting techniques, like Halo, Crysis or Unreal3, or even middleware like Lightsprint, Fantasylab or Geometrics... but there is no way to know how they did it...


There's some information about how Lightsprint works here if that's any help.

Quote:
Original post by Shirakana2
I also know next-gen games already use advanced lighting techniques, like Halo, Crysis or Unreal3, or even middleware like Lightsprint, Fantasylab or Geometrics... but there is no way to know how they did it...


Michael Bunnell of Fantasy Lab wrote two chapters in GPU Gems 2 that describe his algorithms for displacement mapping and real-time lighting. The lighting one is available from the NVIDIA site.

Yup, I already read this. It covers AO (which for me is not global illumination) combined with 2-bounce indirect lighting, and has good-looking results. But on the Fantasy Lab page, he claims that he doesn't use AO, nor PRT/SH, nor a radiosity method o_O.

The method described in the chapter is not strictly AO - since you can add as many bounces as you want, it's more like a GI solution. I would be shocked if Fantasy Lab is not using either exactly this technique or a modification of it.

If I were working on a real-time GI system, this is the technique I would use (at least as a starting point).
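The gist of the technique is easy to sketch: treat each surface element as a small disc and accumulate an approximate disc-to-point form factor over all emitter discs. A rough GLSL version (the uniform names are made up, and this uses a generic form-factor approximation rather than the exact code or constants from the chapter):

// Minimal sketch of a disc-based gather pass. Not the GPU Gems 2 code --
// just the usual approximate disc-to-point form factor, accumulated over
// a small set of emitter discs.
const int NUM_DISCS = 64;                 // per-receiver budget (assumed)
uniform vec3  discPos[NUM_DISCS];         // disc centers (world space)
uniform vec3  discNormal[NUM_DISCS];      // disc normals
uniform float discArea[NUM_DISCS];        // disc areas
uniform vec3  discRadiance[NUM_DISCS];    // radiance from the previous pass

vec3 gatherIndirect(vec3 P, vec3 N)
{
    vec3 sum = vec3(0.0);
    for (int i = 0; i < NUM_DISCS; ++i)
    {
        vec3  d   = discPos[i] - P;
        float r2  = dot(d, d);
        vec3  dir = d * inversesqrt(r2 + 1e-6);
        // cosine at the receiver and at the emitter disc
        float cosR = max(dot(N, dir), 0.0);
        float cosE = max(dot(discNormal[i], -dir), 0.0);
        // approximate disc-to-point form factor
        float ff = (discArea[i] * cosR * cosE) / (3.14159265 * r2 + discArea[i]);
        sum += discRadiance[i] * ff;
    }
    return sum;
}

Run it once with the disc radiance set to the direct lighting and you get one indirect bounce; feed the result back in as the new disc radiance and you get another, which is where the "as many bounces as you want" property comes from.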

There's quite a lot of information on Crysis' lighting model in the paper they presented at SIGGRAPH 2007 - if you have access to the SIGGRAPH proceedings or an ACM Digital Library account you should be able to find it. There's also a fair bit of detail on how Valve does things in the Source engine at http://www.valvesoftware.com/publications.html. Neither of them is doing full real-time global illumination - Half Life 2 uses a lot of pre-baked radiosity data, and Crysis uses a combination of tricks to approximate some GI effects.

Have you read Simulating Photon Mapping for Real-time Applications? There are some papers about doing Photon Mapping completely on the GPU.
Then there is Caustics Mapping: An Image-space Technique for Real-time Caustics which uses a similar approach as the paper above.
For Subsurface Scattering there is Real Time Subsurface Scattering in Image Space (Same author(s) as previous paper).
If you only need diffuse lighting, then "Instant Radiosity" might work well (add Deferred Shading for more speed; there is a rough sketch of the accumulation pass below).
You already discovered PRT/SH-lighting based methods, which can include SSS, indirect lighting and stuff.

And you could use plain old Lightmaps, generated by Photon Mapping (e.g. from q3map2 Map Compiler). This is valid real-time Global Illumination :-D
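To make the Instant Radiosity + Deferred Shading suggestion a bit more concrete: you scatter virtual point lights (VPLs) over the surfaces lit by the primary light, then accumulate them over the G-buffer. A rough GLSL sketch of the accumulation pass (VPL generation is not shown, visibility between pixel and VPL is ignored, and all names here are made up):

// Deferred Instant Radiosity sketch: add the contribution of a small
// set of VPLs to every G-buffer pixel.
const int NUM_VPLS = 32;
uniform sampler2D gbufPosition;    // world-space position
uniform sampler2D gbufNormal;      // world-space normal
uniform sampler2D gbufAlbedo;
uniform vec3 vplPos[NUM_VPLS];
uniform vec3 vplNormal[NUM_VPLS];
uniform vec3 vplFlux[NUM_VPLS];    // bounced color * intensity per VPL

varying vec2 texCoord;

void main()
{
    vec3 P      = texture2D(gbufPosition, texCoord).xyz;
    vec3 N      = normalize(texture2D(gbufNormal, texCoord).xyz);
    vec3 albedo = texture2D(gbufAlbedo, texCoord).rgb;

    vec3 indirect = vec3(0.0);
    for (int i = 0; i < NUM_VPLS; ++i)
    {
        vec3  d  = vplPos[i] - P;
        float r2 = dot(d, d);
        vec3  L  = d * inversesqrt(r2 + 1e-4);
        // cosine at the VPL and at the receiver; distance is clamped
        // to avoid the usual VPL singularities
        float w = max(dot(vplNormal[i], -L), 0.0) * max(dot(N, L), 0.0);
        indirect += vplFlux[i] * w / max(r2, 0.01);
    }
    gl_FragColor = vec4(albedo * indirect, 1.0);
}

The expensive part is visibility (a shadow map per VPL, or simply skipping it), which is why this mostly pays off for diffuse-only indirect lighting.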

You did not list any requirements ("I want to do real-time GI" isn't a requirement at all) and Global Illumination is a very broad field... so work out your requirements and maybe we can help you better.

I posted a lot of questions about this over the last few months. Maybe it's worth searching for them.

Anyway, I'm looking for a (fast) real-time ambient / indirect lighting method as well. So far I've produced a system that collects light from 6 directions in nodes (placed manually, typically near vertices). You can see such a node as a cubemap with 1x1 sized faces: each face is a color coming from one direction. But to avoid a lot of texture switching, all cubemaps are placed in one large 3D texture.

Each vertex is connected to a node (so you can do blending between 3 nodes on a polygon). The vertex shader reads the 6 colors (from the 3D texture) that belong to that node. These are passed to the fragment shader. The fragment shader calculates a world normal (optionally with a normal map), and this normal is used to blend between the colors.
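In the fragment shader the blend then boils down to something like this (a simplified sketch; the weights here are just the squared normal components, as in the Half Life 2 "ambient cube" - the exact weighting can be tuned):

// Sketch of the ambient-node blend: six colors (one per axis direction),
// weighted by how much the surface normal faces each direction.
varying vec3 colPosX, colNegX, colPosY, colNegY, colPosZ, colNegZ;
varying vec3 worldNormal;   // or build it from a normal map here

vec3 blendAmbientNode(vec3 N)
{
    vec3 w = N * N;   // squared components of a unit normal sum to 1
    vec3 c;
    c  = w.x * (N.x > 0.0 ? colPosX : colNegX);
    c += w.y * (N.y > 0.0 ? colPosY : colNegY);
    c += w.z * (N.z > 0.0 ? colPosZ : colNegZ);
    return c;
}

void main()
{
    gl_FragColor = vec4(blendAmbientNode(normalize(worldNormal)), 1.0);
}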


It works on my GeForce 8800. Rendering the world with these nodes is fast, no problem. Only updating those damn nodes is trickier. Each node has to render the surrounding world 6 times (like a cubemap), and the 64x64 rendering needs to be scaled down to 1x1 (actually it's 2x2 in my case). That takes time. I can update ~20 nodes per frame before it really gets slow. But it will probably be fewer nodes once the world to render around each node gets more complex.

So maybe I can update 4 nodes in the end (and still do all the other stuff without problems). However, most nodes don't need to be updated all the time. And if I get 40 frames per second, I can still update 4 x 40 = 160 nodes per second. Not that bad, is it?


However, there's another problem. The normal I calculate in the fragment shader is used to blend between the colors. Basically it's the same as pointing to a pixel in a cubemap with a reflection vector, which means I only get the indirect lighting coming in straight at the pixel. In reality, a piece of surface will also gather light coming from larger angles (Lambert cosine law stuff). Maybe I can "simulate" this by taking multiple samples around the normal, but I don't know how to generate x new vectors, bent 45 degrees away from the original normal.
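The best I can come up with so far (a rough, untested sketch) is to build a tangent basis from the normal itself and tilt the normal towards it:

// Rough sketch: tilt the normal by 'tiltAngle' towards four directions
// spread 90 degrees apart around it. Each tilted direction would then go
// through the same 6-color blend (or a cubemap lookup) and get weighted
// by dot(tilted, N).
vec3 tiltedDir(vec3 N, float tiltAngle, int which)
{
    // any helper vector that is not parallel to N will do
    vec3 helper = (abs(N.y) < 0.99) ? vec3(0.0, 1.0, 0.0) : vec3(1.0, 0.0, 0.0);
    vec3 T = normalize(cross(helper, N));   // tangent
    vec3 B = cross(N, T);                   // bitangent

    float c = cos(tiltAngle);
    float s = sin(tiltAngle);
    if (which == 0) return normalize(c * N + s * T);
    if (which == 1) return normalize(c * N - s * T);
    if (which == 2) return normalize(c * N + s * B);
    return normalize(c * N - s * B);
}

With tiltAngle at 45 degrees (in radians) that gives the 45-degree ring, and since the basis is built from the normal itself it should work for any direction.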



In the end, I don't know if it's worth it. Of course, it's dynamic, and if you add something like SSAO you also get somewhat smaller details in it. But it costs a lot, and a pre-calculated lightmap simply looks better for now. There is one little advantage though: a traditional lightmap can't be used with a normal map, unless you do something like they did in Half Life 2. The method I described above can do normal mapping without a problem. In fact, it's recommended to do that, since it will make the result look more varied (otherwise you get a "per vertex lighting" look).

Don't know if this is a good method, but I hope the info was useful.
Good luck!
Rick

Hope this helps....

I use cubemap lighting for objects in a demo I'm working on, and to get around the problem of taking multiple samples from a cubemap, I wrote this function...



vec3 _getPCFCubeMap( samplerCube sampler, vec3 normal, float scale )
{
    mat3 matrix_rotation;
    vec3 a = vec3(0.0);

    // step size between samples, derived from the requested scale
    float pcf_step = (90.0 / 16.0) * scale;
    pcf_step += pcf_step * 0.5;
    // convert to radians
    scale = pcf_step * 0.01745329252;

    // 7 rotation angles applied around each of the three axes
    float angle[7];
    angle[0] = -3.0 * scale;
    angle[1] = -2.0 * scale;
    angle[2] =  1.0 * scale;
    angle[3] =  0.0 * scale;
    angle[4] =  1.0 * scale;
    angle[5] =  2.0 * scale;
    angle[6] =  3.0 * scale;

    for (int j = 0; j < 3; ++j)
    {
        for (int i = 0; i < 7; ++i)
        {
            if (j == 0) // rotation around z
            {
                matrix_rotation[0] = vec3( cos(angle[i]), -sin(angle[i]), 0.0);
                matrix_rotation[1] = vec3( sin(angle[i]),  cos(angle[i]), 0.0);
                matrix_rotation[2] = vec3( 0.0,            0.0,           1.0);
            }
            if (j == 1) // rotation around y
            {
                matrix_rotation[0] = vec3( cos(angle[i]), 0.0, sin(angle[i]));
                matrix_rotation[1] = vec3( 0.0,           1.0, 0.0);
                matrix_rotation[2] = vec3(-sin(angle[i]), 0.0, cos(angle[i]));
            }
            if (j == 2) // rotation around x
            {
                matrix_rotation[0] = vec3(1.0, 0.0,            0.0);
                matrix_rotation[1] = vec3(0.0, cos(angle[i]), -sin(angle[i]));
                matrix_rotation[2] = vec3(0.0, sin(angle[i]),  cos(angle[i]));
            }

            // sample along the rotated normal, weighted by how close
            // it stays to the original normal
            vec3 rotated_normal = matrix_rotation * normal;
            a += textureCube(sampler, rotated_normal).rgb * dot(rotated_normal, normal);
        }
    }

    // 3 axes * 7 angles = 21 samples
    a /= 21.0;

    return a;
}





I'm rotating the normal by a given number of degrees when doing the PCF sampling on the cubemap. Hope that helps.



Hey, thanks! I don't want to hijack this topic, so I'll keep it short. I'll try it out as soon as I get a chance. If I understand you right, you take a number of samples around the original normal. Where can I see how big the angle is between the original normal and the rotated ones? In my case it needs to be 45 degrees or something like that (looking at the normal from a side view). From a top view, there are 4 vectors surrounding the normal, so there must be 90 degrees between each. Does it work for normals in all directions?

Thanks,
Rick

Thanks for the info.

You're right Enrico, I should have started by listing requirements, but I haven't defined them yet myself. Actually I have to discuss it first with the person in charge of this project. Anyway, I can extrapolate some from my own experience as a gamer.

So the method will have to handle GI:
- for distant and local lighting (maybe local is more important for games)
- for dynamic lights
- for dynamic view (obviously)
- for dynamic objects
- for diffuse and glossy objects
- for animated (or deformable) objects
and performance requirements are at least 30 fps for a complex scene.

(Such a method may be just a dream...)

I think SSS and caustics are a bit out of scope for video games, as is self-shadowing on animated characters. Another thing is that I do not deal with direct lighting, only with indirect illumination.

----

spek => I had started thinking about a similar method, sampling space with cubemaps and updating them dynamically; please keep me posted on your progress.

Lopez => thanks for your code, I haven't read it carefully yet, but it seems you forgot a minus sign in the assignment of angle[2].

Quote:
Original post by Shirakana2
- for distant and local lighting (maybe local is more important for games)
- for dynamic lights
- for dynamic view (obviously)
- for dynamic objects
- for diffuse and glossy objects
- for animated (or deformable) objects
and performance requirements are at least 30 fps for a complex scene.


Skip the glossy part; you simply won't do that in real time. If you can live without it, and depending on what target hardware those 30 fps are on, the previously linked GPU Gems article will work just fine, and give you fairly correct AO and indirect lighting in 4-6 passes with fairly simple shaders.

I use a variant of the method described in Real-Time Global Illumination on GPU. It's similar to spek's method, but I project the cube maps to SH coefficients that are stored in a volume texture used in the lighting pass. Same idea as in the irradiance volume papers.
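The lookup in the lighting pass is roughly this for 2-band (4-coefficient) SH (a simplified sketch; I'm glossing over the exact number of bands and how the coefficients are packed into the volume texture):

// Sketch: evaluate irradiance from a 2-band SH irradiance volume.
// The four volume textures hold the RGB SH coefficients L00, L1-1, L10, L11
// per cell (this layout is just one possible choice).
uniform sampler3D shCoef0;   // L00
uniform sampler3D shCoef1;   // L1-1
uniform sampler3D shCoef2;   // L10
uniform sampler3D shCoef3;   // L11
uniform vec3 volumeMin;      // world-space bounds of the volume
uniform vec3 volumeSize;

vec3 irradianceFromVolume(vec3 worldPos, vec3 N)
{
    vec3 uvw = (worldPos - volumeMin) / volumeSize;   // 0..1 inside the volume

    vec3 L00  = texture3D(shCoef0, uvw).rgb;
    vec3 L1m1 = texture3D(shCoef1, uvw).rgb;
    vec3 L10  = texture3D(shCoef2, uvw).rgb;
    vec3 L11  = texture3D(shCoef3, uvw).rgb;

    // constants from the Ramamoorthi/Hanrahan clamped-cosine convolution
    return 0.886227 * L00
         + 1.023328 * (L1m1 * N.y + L10 * N.z + L11 * N.x);
}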

Rendering with the volume is fast. The method is bottlenecked by the cube map rendering required to update the volume. Like spek, I spread the updates out over a number of frames. For my application (relatively small indoor spaces) this works well.

Combined with a dynamic AO method like SSAO it looks quite nice, but still not as good as pre-baked light maps. I have some screenshots on my blog: http://risbrandt.blogspot.com.

I also think that the method in Michael Bunnell's article is promising. I've just not found the time to implement it.

Good luck and keep us posted on your progress:)

It depends.

When the scene remains static and I move the camera, I get 30 - 100 fps. Here I'm bottlenecked by the SSAO. It's slow, for example, when I'm very close to a wall, since the sampling pattern gets larger in screen space and thrashes the texture cache.

The method supports moving lights and objects. I get 20 - 30 fps for dynamic scenes. Here the method is bottlenecked by cube map rendering. It seems like I am CPU limited here, since there is no difference in fps on a scene with 10k or 500k vertices and also no noticeable difference if I render cube maps of size 16, 32, or 64.

All numbers are taken at 1024x768 on a P4 3.2 GHz, 1 GB RAM, ATI Radeon HD3870.

What I like about this method is that it scales very well and it's easy to balance (just increase or decrease the number of cube map updates). It's also independent of your rendering system and scene setup. I use deferred rendering but a forward renderer would work fine as well.

I've made no effort to optimize this; I just wanted to get it working first. Fps is a poor measure of performance, and I will investigate the rendering times and bottlenecks for each part of the pipeline more seriously in the near future :)

Update: I've added a few more screenshots on my blog. Same numbers on the new ones.

[Edited by - Dark_Nebula on February 20, 2008 2:53:04 AM]

I had an idea some time before about GI and deferred shading.
Basically the idea is to query samples from shadowmap on each scene pixel using some montecarlo technique and calculate reflected light from shadowmap point by transforming that point in to gbuffer texcords and retrieving diffuse/specular properties. You can store normal in shadowmap and effectively cull low contributions. It only allows one step tracing and ignores the possibility of occlusion between screen pixel and shadowmap pixel (which is possible).
It would also be weary resolution dependent ... but scene complexity is irrelevant (as is with differed shading).
Anyway it just a random thought, and it's probably been done before, but it's worth a shot.
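A rough sketch of the gather I have in mind (untested; for simplicity this stores the sender's flux directly in the shadow map, Reflective Shadow Maps style, instead of re-fetching it from the G-buffer):

// One-bounce gather from an RSM-style shadow map. Occlusion between the
// sender and the receiver is ignored, as noted above.
const int NUM_SAMPLES = 16;
uniform sampler2D rsmPosition;   // world position stored per shadow-map texel
uniform sampler2D rsmNormal;     // its normal
uniform sampler2D rsmFlux;       // its reflected flux (albedo * light color)
uniform vec2 sampleOffset[NUM_SAMPLES];   // Monte Carlo offsets around the uv

vec3 gatherOneBounce(vec2 shadowUV, vec3 P, vec3 N)
{
    vec3 indirect = vec3(0.0);
    for (int i = 0; i < NUM_SAMPLES; ++i)
    {
        vec2 uv        = shadowUV + sampleOffset[i];
        vec3 senderPos = texture2D(rsmPosition, uv).xyz;
        vec3 senderNrm = texture2D(rsmNormal, uv).xyz;
        vec3 flux      = texture2D(rsmFlux, uv).rgb;

        vec3  d   = P - senderPos;
        float r2  = dot(d, d);
        vec3  dir = d * inversesqrt(r2 + 1e-4);
        // both cosines must be positive for any transfer (this is the
        // "store the normal and cull low contributions" part)
        float w = max(dot(senderNrm, dir), 0.0) * max(dot(N, -dir), 0.0);
        indirect += flux * w / max(r2, 0.01);
    }
    return indirect / float(NUM_SAMPLES);
}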

Quote:
Original post by RedDrake
I had an idea some time ago about GI and deferred shading.
Basically the idea is to take samples from the shadow map for each scene pixel, using some Monte Carlo technique, and calculate the light reflected from each shadow-map point by transforming that point into G-buffer texcoords and retrieving its diffuse/specular properties. You can store the normal in the shadow map and effectively cull low contributions. It only allows one-step tracing, and it ignores possible occlusion between the screen pixel and the shadow-map pixel (which can happen).
It would also be very resolution dependent... but scene complexity is irrelevant (as with deferred shading).
Anyway, it's just a random thought, and it's probably been done before, but it's worth a shot.


This will not work too well, since geometry outside the shadow-map frustum wouldn't contribute to the final image.
You could of course increase the size of the shadow-map frustum to avoid some of the problems, but it'll never work properly.
Funny idea though; if you just allow for local color bleeding (limited to a small radius) it might work OK.

Dark_Nebula> Why do you actually render cubemaps? If your bottleneck is the 6 renders per cubemap, why don't you try dual paraboloid maps? Agreed, for things like reflections they aren't so good, but considering the very diffuse nature of the interreflections, chances are the distortion DPMs introduce will be close to unnoticeable...
They might be more expensive to sample too, but as you're bound by cubemap rendering and not by shaders, it might be worth a try :)
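For reference, the paraboloid projection itself is only a few lines in the vertex shader (standard dual paraboloid math, sketched from memory):

// Sketch: vertex shader for the front half of a dual paraboloid map.
// The back half uses the same code with the probe view flipped. Note that
// DPMs want reasonably tessellated geometry, since straight edges curve.
uniform mat4 probeViewMatrix;   // looks down +z from the probe center
uniform float nearPlane;
uniform float farPlane;

attribute vec3 position;

void main()
{
    vec3 p = (probeViewMatrix * vec4(position, 1.0)).xyz;
    float dist = length(p);
    p /= dist;                            // project onto the unit sphere
    vec2 xy = p.xy / (1.0 + p.z);         // paraboloid projection
    float depth = (dist - nearPlane) / (farPlane - nearPlane);
    gl_Position = vec4(xy, depth * 2.0 - 1.0, 1.0);
}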

Also, have you tried your method on larger scenes? As you only update a few cubemaps per frame, doesn't it induce some sort of lag when you move objects/lights around? A lag that would become more and more apparent as the scene complexity and the number of cubemaps to update increase?

Quote:
Why do you actually render cubemaps? If your bottleneck is the 6 renders per cubemap, why don't you try dual paraboloid maps?
I've just not had the time to try it. It's on my todo-list:)

Quote:
Also, have you tried your method on larger scenes? As you only update a few cubemaps per frame, doesn't it induce some sort of lag when you move objects/lights around?
No, I've only tried it on small scenes (in terms of spatial extent). There is some noticeable lag, especially when there are sudden changes in lighting. I've experimented with different ways of interpolating the irradiance volumes to hide this, but it's not possible to remove it completely without updating most light probes every frame. This will definitely be more apparent on larger scenes, since there will be more probes, so it's likely that this method will not work satisfactorily for scenes with wide open spaces. On the other hand, there might be ways to place the probes so that fewer probes cover a larger volume.
For my application (think interiors of flats, houses etc.) it's good enough, but there is definitely room for improvement.

Quote:
I've just not had the time to try it. It's on my todo-list:)

Ah yes, I missed the part "I've made no effort to optimize this; I just wanted to get it working first" in your previous post.

OK for the rest :)

Keep us updated on your progress!

