Global illumination ravings


Ok guys, I'm trying to wrap my head around some global illumination tricks, and I have some questions.


Since I'm planning to use DirectX 9.0c, which doesn't support rendering to a volume texture, I intend to use a 2D unwrapped version (for example, 1024x32) and use it as a render target. So far, so good.
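To make the unwrapped layout concrete, here is a minimal sketch of the addressing I have in mind. The 32-slices-side-by-side arrangement and the helper name are my own assumptions, not anything D3D9 prescribes:

```cpp
#include <cassert>

// Assumed layout: a 32x32x32 volume unwrapped into a 1024x32 atlas,
// with the 32 depth slices laid out side by side along the x axis.
const int kSliceSize   = 32;                      // volume is kSliceSize^3
const int kAtlasWidth  = kSliceSize * kSliceSize; // 1024
const int kAtlasHeight = kSliceSize;              // 32

// Convert a 3D texel coordinate to its 2D position in the unwrapped atlas.
void VolumeToAtlas(int x, int y, int z, int* outU, int* outV)
{
    *outU = z * kSliceSize + x; // slice z begins at column z * 32
    *outV = y;
}
```

Any other slice ordering works just as well, as long as the lookup shader and the fill pass agree on it.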

About the propagation technique, I have this idea: instead of propagating the light energy through a volume in the shader, I plan to simply render some kind of hemispherical geometry (probably instanced) with some kind of gradient texture applied to it, to simulate light fading with distance.

I mean, I have this hemispherical mesh, and when I generate my VPLs' positions, normals, and colors, I simply render such a mesh at that position, oriented towards the normal and using the appropriate color. As a render target I'm planning to use the unwrapped texture mentioned above (1024x32), with the camera looking straight down with an ortho projection, rendering every vertical slice of the scene into a part of that 1024x32 texture while offsetting the viewport accordingly: 32 vertical slices in total. I should adjust the near and far clip planes so that only the hemispheres falling into that horizontal slab get rendered. I will probably run into issues, because if the slices are too thin and my hemispheres are too large, they will be clipped by the near and far clipping planes and nothing will be rendered, even if I disable backface culling.
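The per-slice setup above can be sketched like this; the struct and function names are mine, and the actual values would of course be fed into `IDirect3DDevice9::SetViewport` and the projection matrix rather than returned from a plain struct:

```cpp
#include <cassert>

// One slab of the downward-looking ortho render: a 32x32 region of the
// 1024x32 render target, plus the near/far pair that isolates the slab.
struct SliceSetup {
    int   viewportX, viewportY, viewportW, viewportH; // region in the atlas
    float nearZ, farZ;                                // clip slab, from the camera
};

// slice in [0, 32); sliceThickness = volume height / 32 in world units.
SliceSetup MakeSlice(int slice, float sliceThickness)
{
    SliceSetup s;
    s.viewportX = slice * 32;
    s.viewportY = 0;
    s.viewportW = 32;
    s.viewportH = 32;
    // The camera looks straight down; each slice claims one depth slab.
    s.nearZ = slice * sliceThickness;
    s.farZ  = (slice + 1) * sliceThickness;
    return s;
}
```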

How can I cope with this problem? Should I fill the hemispheres with "geometry" too, so that if they are clipped by the planes the inside vertices still get rendered? Should I study point-based rendering, or is there something neat and easy I'm missing?

As a result, I should have a 1024x32 texture containing the bounced light of the scene as horizontal slices.

Because I'm using a 2D texture, I can't make use of hardware trilinear interpolation as with volume textures. I need to do this on my own, sampling the eight nearby texels for every rendered pixel and blending them, which seems slow.
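For reference, here is the manual filter I mean, written on the CPU against a plain float array standing in for the atlas (single-channel for brevity; in the shader this would be eight `tex2D` fetches and the same three lerps):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

const int N = 32;     // volume dimension
const int W = N * N;  // atlas width (1024), slices side by side in x

// Point-fetch one texel from the unwrapped atlas, clamping at the borders.
float Fetch(const std::vector<float>& atlas, int x, int y, int z)
{
    x = std::min(std::max(x, 0), N - 1);
    y = std::min(std::max(y, 0), N - 1);
    z = std::min(std::max(z, 0), N - 1);
    return atlas[y * W + z * N + x];
}

// Manual trilinear filter: eight nearest texels, three stages of lerps.
float SampleTrilinear(const std::vector<float>& atlas, float x, float y, float z)
{
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y), z0 = (int)std::floor(z);
    float fx = x - x0, fy = y - y0, fz = z - z0;
    float c00 = Fetch(atlas, x0, y0,   z0  ) * (1-fx) + Fetch(atlas, x0+1, y0,   z0  ) * fx;
    float c10 = Fetch(atlas, x0, y0+1, z0  ) * (1-fx) + Fetch(atlas, x0+1, y0+1, z0  ) * fx;
    float c01 = Fetch(atlas, x0, y0,   z0+1) * (1-fx) + Fetch(atlas, x0+1, y0,   z0+1) * fx;
    float c11 = Fetch(atlas, x0, y0+1, z0+1) * (1-fx) + Fetch(atlas, x0+1, y0+1, z0+1) * fx;
    float c0 = c00 * (1-fy) + c10 * fy;
    float c1 = c01 * (1-fy) + c11 * fy;
    return c0 * (1-fz) + c1 * fz;
}
```

One common shortcut: hardware bilinear filtering already handles the in-slice part, so a shader only needs two filtered fetches from adjacent slice regions plus one lerp, not eight point samples.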

Can I instead copy the 2D 1024x32 texture into a volume texture, and make use of hardware interpolation between adjacent texels?

Can I do it all at once, not slice by slice? I hope the memory layouts of 2D and volume textures with the same pixel format are identical...

Then I could simply lock the volume texture, lock the 2D texture, and memcpy the bytes...
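A single memcpy is only safe if the pitches happen to line up; in D3D9 the locked 2D surface reports `D3DLOCKED_RECT::Pitch`, and the locked volume reports `D3DLOCKED_BOX::RowPitch` and `SlicePitch`, and the driver is free to pad any of them. A row-by-row copy that respects the pitches is the safe version; sketched here against plain byte buffers (the function name and signature are mine):

```cpp
#include <cassert>
#include <cstring>
#include <vector>

// Copy one 32x32 region of the 1024x32 surface into the matching slice of a
// 32x32x32 volume, honoring the pitches reported by the lock calls.
void CopySlice(const unsigned char* src, int srcPitch,            // locked 2D surface
               unsigned char* dst, int dstRowPitch, int dstSlicePitch, // locked volume
               int slice, int bytesPerTexel)
{
    const int n = 32;
    for (int row = 0; row < n; ++row) {
        const unsigned char* s = src + row * srcPitch + slice * n * bytesPerTexel;
        unsigned char* d = dst + slice * dstSlicePitch + row * dstRowPitch;
        std::memcpy(d, s, n * bytesPerTexel);
    }
}
```

Note also that a render-target surface usually isn't lockable in D3D9; you'd first `GetRenderTargetData` it into a system-memory surface, then lock that.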


Should I take a different route to GI instead, under DirectX 9.0?


Thanks for your help, in advance.



Ok, but under one condition - if the idea is impossible to implement or is not going to work at all, you must tell me and save me from struggling :)

I've seen your implementation of GI, so I value your opinion.

Basically, for every Virtual Point Light I have generated, I want to render a hemisphere at the light's position, with the same color, oriented along the surface normal. As a render target I want to use a 2D texture: I render a vertical slice of these hemispheres around the camera to that texture, limiting the hemispheres rendered to that slice by clipping them with the near and far clipping planes. So every hemisphere gets rendered to the corresponding slice, based on its vertical position in the world. Then, when I generate my volume texture from the slices already rendered, I have a volume whose texels are colored by those hemispheres. The point is to avoid the light propagation pass in the volume texture.

Those hemispheres could have a gradient texture applied when rendered, so the further a texel is from the center of the initial Virtual Point Light, the less that light influences it. It could be an artist-created gradient texture, for nice effects.
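As a stand-in for that gradient texture, a simple procedural falloff could look like this; the function and the quadratic shape are just an illustrative assumption:

```cpp
#include <cassert>

// Hypothetical radial falloff for the hemisphere's gradient: full intensity
// at the VPL position, fading smoothly to zero at the hemisphere radius.
float Falloff(float distance, float radius)
{
    float t = distance / radius;
    if (t >= 1.0f) return 0.0f;
    float s = 1.0f - t;
    return s * s; // quadratic fade; an artist-painted gradient could replace this
}
```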

I fired up 3ds Max for a quick illustration of what I mean.

The first picture shows the overall idea, and then a camera with clipping planes enabled to render the slices, which could then be used to construct the volume texture.

No spherical harmonics, though; the volume would store plain color-bleed values.









