Global illumination vs Real-time


I use cubemap lighting for objects in a demo I'm working on, and to get around the problem of taking multiple samples from a cubemap, I wrote this function:

vec3 _getPCFCubeMap( samplerCube sampler, vec3 normal, float scale )
{
	mat3  matrix_rotation;
	vec3  a = vec3(0.0);

	float pcf_step = (90.0 / 16.0) * scale;
	pcf_step += pcf_step * 0.5;

	// convert to radians
	scale = pcf_step * 0.01745329252;

	float angle[7];
	angle[0] = -3.0 * scale;
	angle[1] = -2.0 * scale;
	angle[2] =  1.0 * scale;
	angle[3] =  0.0 * scale;
	angle[4] =  1.0 * scale;
	angle[5] =  2.0 * scale;
	angle[6] =  3.0 * scale;

	for (int j = 0; j < 3; ++j)
	{
		for (int i = 0; i < 7; ++i)
		{
			float c = cos(angle[i]);
			float s = sin(angle[i]);

			if (j == 0)
			{
				// rotation around z
				matrix_rotation[0] = vec3(  c,  -s, 0.0);
				matrix_rotation[1] = vec3(  s,   c, 0.0);
				matrix_rotation[2] = vec3(0.0, 0.0, 1.0);
			}
			if (j == 1)
			{
				// rotation around y
				matrix_rotation[0] = vec3(  c, 0.0,   s);
				matrix_rotation[1] = vec3(0.0, 1.0, 0.0);
				matrix_rotation[2] = vec3( -s, 0.0,   c);
			}
			if (j == 2)
			{
				// rotation around x
				matrix_rotation[0] = vec3(1.0, 0.0, 0.0);
				matrix_rotation[1] = vec3(0.0,   c,  -s);
				matrix_rotation[2] = vec3(0.0,   s,   c);
			}

			// weight each sample by how closely it still faces the original normal
			vec3 rotated_normal = matrix_rotation * normal;
			a += textureCube(sampler, rotated_normal).rgb * dot(rotated_normal, normal);
		}
	}

	a /= 21.0;
	return a;
}
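For illustration, here is a minimal fragment shader showing how a function like this might be called; the uniform and varying names are just placeholders, not from the actual demo:

// hypothetical usage sketch -- all names below are placeholders
uniform samplerCube u_lightCube;   // cube map holding the lighting environment
varying vec3 v_normal;             // interpolated surface normal

void main()
{
	vec3 n = normalize(v_normal);
	// scale = 1.0 gives the default cone of rotated samples
	vec3 diffuse = _getPCFCubeMap(u_lightCube, n, 1.0);
	gl_FragColor = vec4(diffuse, 1.0);
}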


I'm rotating the normal by a set of fixed angles around each axis when doing the PCF-style sampling of the cube map. Hope that helps.



Hey, thanks! I don't want to hijack this topic, so I'll keep it short. I'll try it out as soon as I get a chance. If I understand you right, you take 8 samples around your original normal. Where can I see how big the angle is between the original normal and the rotated ones? In my case it needs to be 45 degrees or something like that (looking at the normal from a side view). From a top view there are 4 vectors surrounding the normal, so there must be 90 degrees between each. Does it work for normals in all directions?

Thanks,
Rick
Thanks for the info.

You're right Enrico, I should have started by listing the requirements, but even I haven't defined them yet. Actually, I have to discuss it first with the person in charge of this project. Anyway, I can extrapolate some from my own experience as a gamer.

So the method will have to handle GI:
- for distant and local lighting (maybe local is more important for games)
- for dynamic lights
- for dynamic view (obviously)
- for dynamic objects
- for diffuse and glossy objects
- for animated (or deformable) objects
and the performance requirement is at least 30 fps for a complex scene.

(Such a method may be just a dream...)

I think SSS and caustics are a bit out of scope for video games, as is self-shadowing on animated characters. Another thing is that I do not deal with direct lighting, only with indirect illumination.

----

spek => I have started thinking about a similar method, sampling space with cube maps and updating them dynamically; please keep me posted on your progress.

Lopez => thanks for your code. I haven't read it carefully yet, but it seems you forgot a minus sign in the assignment of angle[2].
Some of my previous work on my personal webpage
Quote:Original post by Shirakana2
(Such a method may be just a dream...)


There's no 'may' about it. You're going to have to compromise on some of those requirements.

Game Programming Blog: www.mattnewport.com/blog

Quote:Original post by Shirakana2
- for distant and local lighting (maybe local is more important for games)
- for dynamic lights
- for dynamic view (obviously)
- for dynamic objects
- for diffuse and glossy objects
- for animated (or deformable) objects
and the performance requirement is at least 30 fps for a complex scene.


Skip the glossy part; you simply won't do that in realtime. If you can live without it, then depending on what target hardware those 30 fps are on, the previously linked GPU Gems article will work just fine and give you fairly correct AO and indirect lighting in 4-6 passes with fairly simple shaders.
I use a variant of the method described in Real-Time Global Illumination on GPU. It's similar to Spek's method, but I project the cube maps to SH coefficients that are stored in a volume texture used in the lighting pass. Same idea as in the irradiance volume papers.
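(Not the actual shader from that project, but as a rough sketch of the lookup side of such a scheme: assuming the four lowest (L0/L1) SH coefficients are packed into four RGB volume textures covering the scene's bounding box, the lighting pass could reconstruct the stored signal in the normal direction like this. All names here are hypothetical.)

uniform sampler3D u_shVolume[4];   // hypothetical: RGB coefficients for Y00, Y1-1, Y10, Y11
uniform vec3 u_volumeMin;          // world-space minimum of the irradiance volume
uniform vec3 u_volumeInvSize;      // 1.0 / (volumeMax - volumeMin)

vec3 shIrradiance(vec3 worldPos, vec3 n)
{
	// world position -> normalized volume texture coordinates
	vec3 uvw = (worldPos - u_volumeMin) * u_volumeInvSize;

	// evaluate the four lowest SH basis functions in the normal direction
	const float c0 = 0.282095;   // Y00 constant
	const float c1 = 0.488603;   // Y1-1 / Y10 / Y11 constant
	vec3 e = texture3D(u_shVolume[0], uvw).rgb * c0
	       + texture3D(u_shVolume[1], uvw).rgb * (c1 * n.y)
	       + texture3D(u_shVolume[2], uvw).rgb * (c1 * n.z)
	       + texture3D(u_shVolume[3], uvw).rgb * (c1 * n.x);
	return max(e, vec3(0.0));
}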

Rendering with the volume is fast; the method is bottlenecked by the cube map rendering required to update the volume. Like Spek, I spread the updates out over a number of frames. For my application (relatively small indoor spaces) this method works well.

Combined with a dynamic AO method like SSAO it looks quite nice, but still not as good as pre-baked light maps. I have some screenshots on my blog: http://risbrandt.blogspot.com.

I also think that the method in Michael Bunnell's article is promising. I just haven't found the time to implement it.

Good luck and keep us posted on your progress :)
Hey, it looks very nice. What about your frame rate? Does your method handle dynamic objects?

Thanks again
Some of my previous work on my personal webpage
It depends.

When the scene remains static and I move the camera I get 30 - 100 fps. Here I'm bottlenecked by the SSAO. It's slow, for example, when I'm very close to a wall, since the sampling pattern gets larger in screen space and thrashes the texture cache.
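(One common mitigation, not necessarily what is done here, is to cap the projected kernel size so near-camera pixels can't blow the sampling pattern up arbitrarily in screen space; a hedged sketch with made-up uniform names:)

uniform float u_aoWorldRadius;      // AO sampling radius in view-space units
uniform float u_aoMaxScreenRadius;  // cap in texture coordinates, e.g. 0.05
uniform float u_projScale;          // roughly projection[1][1] * 0.5

float aoScreenRadius(float viewDepth)
{
	// under perspective projection the world-space radius shrinks with depth
	float r = u_aoWorldRadius * u_projScale / viewDepth;
	return min(r, u_aoMaxScreenRadius);   // avoids huge, cache-thrashing kernels up close
}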

The method supports moving lights and objects; I get 20 - 30 fps for dynamic scenes. Here the method is bottlenecked by cube map rendering. It seems I am CPU-limited, since there is no difference in fps between a scene with 10k vertices and one with 500k, and no noticeable difference whether I render cube maps of size 16, 32, or 64.

All numbers are taken at 1024x768 on a P4 3.2 GHz, 1 GB RAM, ATI Radeon HD3870.

What I like about this method is that it scales very well and is easy to balance (just increase or decrease the number of cube map updates). It's also independent of your rendering system and scene setup: I use deferred rendering, but a forward renderer would work fine as well.

I've made no effort to optimize this; I just wanted to get it working first. Fps is a poor measure of performance, so I will investigate the rendering times and bottlenecks of each part of the pipeline more seriously in the near future :)

Update: I've added a few more screenshots on my blog. Same numbers on the new ones.

[Edited by - Dark_Nebula on February 20, 2008 2:53:04 AM]
Great work! Congratulations.
Some of my previous work on my personal webpage
I had an idea some time ago about GI and deferred shading.
Basically, the idea is to query samples from the shadow map for each scene pixel using some Monte Carlo technique, and to compute the light reflected from each shadow-map point by transforming that point into g-buffer texcoords and retrieving the diffuse/specular properties there. You can store the normal in the shadow map and effectively cull low contributions. It only allows one bounce of tracing, and it ignores possible occlusion between the screen pixel and the shadow-map pixel.
It would also be very resolution dependent... but scene complexity is irrelevant (as it is with deferred shading).
Anyway, it's just a random thought, and it's probably been done before, but it's worth a shot.
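It has: this is close to the gathering step of Reflective Shadow Maps (Dachsbacher & Stamminger, 2005). A minimal sketch of that gather, assuming the shadow-map pass also wrote out world position, normal, and reflected flux; all sampler and uniform names here are hypothetical:

uniform sampler2D u_rsmPosition;   // world-space position of each shadow-map texel
uniform sampler2D u_rsmNormal;     // normal stored in the shadow map
uniform sampler2D u_rsmFlux;       // reflected flux (albedo * light color)
uniform vec2 u_samples[16];        // precomputed Monte Carlo offsets in RSM space

vec3 oneBounce(vec3 p, vec3 n, vec2 rsmUV)
{
	vec3 indirect = vec3(0.0);
	for (int i = 0; i < 16; ++i)
	{
		vec2 uv = rsmUV + u_samples[i];
		vec3 q  = texture2D(u_rsmPosition, uv).xyz;   // sender position
		vec3 nq = texture2D(u_rsmNormal, uv).xyz;     // sender normal
		vec3 d  = p - q;
		float d2 = dot(d, d);
		// form-factor-style weight; the two clamped cosines cull low contributions
		float w = max(dot(nq, d), 0.0) * max(dot(n, -d), 0.0) / (d2 * d2);
		indirect += texture2D(u_rsmFlux, uv).rgb * w;
	}
	return indirect;   // note: ignores occlusion between p and q, as said above
}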

