Ok guys, I'm trying to wrap my head around some global illumination tricks and I have some questions.
Since I'm planning to use DirectX 9.0c, which doesn't support rendering to volume textures, I intend to use a 2D unwrapped version (for example 1024x32) and use it as a render target. So far, so good.
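For clarity, here is a minimal sketch of the addressing I have in mind, assuming a 32x32x32 volume whose 32 depth slices are laid out side by side along the X axis of the 1024x32 atlas (the layout and names are my own illustration, not anything D3D9 prescribes):

```cpp
// Hypothetical layout: a 32x32x32 volume unwrapped into a 1024x32 atlas,
// with the 32 depth slices placed side by side along the atlas X axis.
const int kVolumeSize = 32;

// Map a 3D texel coordinate (x, y, z) to its 2D coordinate in the atlas.
void VolumeToAtlas(int x, int y, int z, int& atlasX, int& atlasY)
{
    atlasX = z * kVolumeSize + x; // each slice occupies a 32-texel-wide block
    atlasY = y;
}
```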
Regarding the propagation technique, I have this idea: instead of propagating the light energy through a volume in the shader, I plan to simply render some kind of hemispherical geometry (probably instanced) with some kind of gradient texture applied to it, to simulate light fading with distance.
I mean, I have this hemispherical mesh, and when I generate my VPLs' positions, normals and colors, I simply render such a mesh at each position, oriented towards the normal and using the appropriate color. As a render target I'm planning to use the unwrapped 1024x32 texture mentioned above, with the camera looking straight down using an orthographic projection, rendering each horizontal slice of the scene to a part of that texture while offsetting the viewport accordingly: 32 slices in total. I should adjust the near and far clip planes so that only the hemispheres that fall within each slice are rendered. I will probably run into issues here: if the slices are too thin and my hemispheres too large, the geometry will be clipped away by the near and far planes and nothing will be rendered, even if I disable backface culling.
How can I cope with this problem? Should I fill the hemispheres with "geometry" too, so that if they're clipped by the planes the inside still gets rendered? Should I study point-based rendering, or is there something neat and easy I'm missing?
As a result, I should have a 1024x32 texture containing the bounced light of the scene as horizontal slices.
Because I'm using a 2D texture, I can't make use of hardware trilinear interpolation as with volume textures. I'd need to do it myself in the shader: full trilinear filtering means blending the 8 nearest texels (or, if hardware bilinear filtering is enabled on the 2D texture, taking 2 samples from adjacent slices and lerping between them), which still seems slow.
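To make the cost concrete, here is the 8-texel version as a CPU sketch (the same math would be emitted in the pixel shader), assuming the side-by-side slice layout described above and single-channel texels for simplicity:

```cpp
#include <cmath>
#include <vector>

// Manual trilinear filtering over a 32x32x32 volume stored in a 1024x32
// atlas (depth slices side by side along X). CPU sketch of the shader math.
const int N = 32; // volume dimension per axis

float FetchTexel(const std::vector<float>& atlas, int x, int y, int z)
{
    // Clamp addressing; the atlas row is 1024 texels wide (32 slices * 32).
    x = x < 0 ? 0 : (x >= N ? N - 1 : x);
    y = y < 0 ? 0 : (y >= N ? N - 1 : y);
    z = z < 0 ? 0 : (z >= N ? N - 1 : z);
    return atlas[y * (N * N) + z * N + x];
}

float SampleTrilinear(const std::vector<float>& atlas, float x, float y, float z)
{
    int   x0 = (int)std::floor(x), y0 = (int)std::floor(y), z0 = (int)std::floor(z);
    float fx = x - x0, fy = y - y0, fz = z - z0;

    // Eight nearest texels, blended with the fractional weights.
    float c000 = FetchTexel(atlas, x0,     y0,     z0);
    float c100 = FetchTexel(atlas, x0 + 1, y0,     z0);
    float c010 = FetchTexel(atlas, x0,     y0 + 1, z0);
    float c110 = FetchTexel(atlas, x0 + 1, y0 + 1, z0);
    float c001 = FetchTexel(atlas, x0,     y0,     z0 + 1);
    float c101 = FetchTexel(atlas, x0 + 1, y0,     z0 + 1);
    float c011 = FetchTexel(atlas, x0,     y0 + 1, z0 + 1);
    float c111 = FetchTexel(atlas, x0 + 1, y0 + 1, z0 + 1);

    float c00 = c000 + (c100 - c000) * fx;
    float c10 = c010 + (c110 - c010) * fx;
    float c01 = c001 + (c101 - c001) * fx;
    float c11 = c011 + (c111 - c011) * fx;
    float c0  = c00 + (c10 - c00) * fy;
    float c1  = c01 + (c11 - c01) * fy;
    return c0 + (c1 - c0) * fz;
}
```

With hardware bilinear filtering enabled on the 2D texture, the whole thing collapses to two tex2D samples (one in each of the two adjacent slice regions) plus one lerp on the fractional z.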
Can I instead copy the 2D 1024x32 texture into a volume texture and make use of hardware interpolation between adjacent texels?
Can I do it all at once, not slice by slice? I hope the memory layouts of a 2D texture and a volume texture with the same pixel format are identical, so I could simply lock the volume texture, lock the 2D texture, and memcpy the bytes...
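One caveat worth noting: a single big memcpy is only safe if the pitches happen to match. D3DLOCKED_RECT::Pitch (from LockRect) and D3DLOCKED_BOX::RowPitch/SlicePitch (from LockBox) may include driver padding, so the robust version copies row by row, honoring both pitches. A pure-memory sketch, with the lock/unlock calls omitted:

```cpp
#include <cstring>

// Copy a side-by-side unwrapped 2D atlas into a locked volume texture,
// honoring the row pitch of the source and the row/slice pitches of the
// destination (either may contain driver padding).
void CopyAtlasToVolume(const unsigned char* atlas, int atlasPitch,
                       unsigned char* volume, int volRowPitch, int volSlicePitch,
                       int width, int height, int depth, int bytesPerTexel)
{
    const int rowBytes = width * bytesPerTexel;
    for (int z = 0; z < depth; ++z)
    {
        // Slice z occupies columns [z*width, (z+1)*width) of the atlas.
        const unsigned char* srcSlice = atlas + z * rowBytes;
        unsigned char* dstSlice = volume + z * volSlicePitch;
        for (int y = 0; y < height; ++y)
        {
            std::memcpy(dstSlice + y * volRowPitch,
                        srcSlice + y * atlasPitch,
                        rowBytes);
        }
    }
}
```

If the pitches do turn out to be tightly packed on your target hardware, this degenerates into the single memcpy you were hoping for, but checking the pitch values first keeps it correct everywhere.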
Or should I take a different route to GI altogether under DirectX 9.0?
Thanks in advance for your help.