[DX9] RSM + Volume Rendering + GI


Hi all,

I've just completed basic support for global illumination in our proprietary DX9-based engine. A Reflective Shadow Map (RSM) is used to compute the first bounce of indirect illumination, an interpolation scheme spreads that indirect illumination across the whole scene, and the final scene lighting is composed from the direct + indirect contributions.

Next step for us is to use one of the following techniques to increase performance and quality:

  • Light Propagation Volume
  • Radiance Hints

So here comes my question: is it possible to render to a volume texture in DX9, and if so, how?

I mean, let's say I want to sample the RSM and store the SH coefficients into a 3D grid backed by a volume texture. Is that possible in DX9, or should we go for DX10/DX11?

The goal is to reuse that computed volume texture to interpolate the (indirect) light in the final scene.

If yes, can someone point me in the right direction (papers, links to websites)?

Thanks in advance

Mauro


The original LPV approach was implemented on DX9 using unwrapped 3D textures IIRC - instead of rendering to a 3D texture, they used a 2D texture that was H pixels tall and W*D pixels wide (H being height, W being width and D being depth). Then when they rendered to a "depth layer" they just shifted the X coordinate when reading/writing to it.
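To make that concrete, here's a minimal HLSL sketch of the read side, assuming a 32x32x32 grid unwrapped into a 1024x32 texture. The names (gUnwrappedVolume, SampleUnwrappedVolume, VOLUME_SIZE) are made up for illustration, not taken from the actual LPV shaders:

// Minimal sketch of sampling an unwrapped volume on SM3.0 hardware.
// Assumes a 32^3 grid stored as a 1024x32 2D texture, with the depth
// slices laid out side by side along X.
#define VOLUME_SIZE 32.0

sampler2D gUnwrappedVolume; // 1024x32 atlas

float4 SampleUnwrappedVolume(float3 uvw) // uvw in [0,1]^3
{
    // Find the two slices this Z falls between.
    float slice  = uvw.z * VOLUME_SIZE - 0.5;
    float slice0 = clamp(floor(slice), 0.0, VOLUME_SIZE - 1.0);
    float slice1 = min(slice0 + 1.0, VOLUME_SIZE - 1.0);

    // Shift X into the right slice of the atlas: u = (slice + localX) / depth.
    float4 s0 = tex2D(gUnwrappedVolume, float2((slice0 + uvw.x) / VOLUME_SIZE, uvw.y));
    float4 s1 = tex2D(gUnwrappedVolume, float2((slice1 + uvw.x) / VOLUME_SIZE, uvw.y));

    // Hardware bilinear filtering handles X/Y inside a slice; blend the two
    // slices manually to get the Z part of trilinear filtering.
    // Note: bilinear will bleed across slice seams when uvw.x is near 0 or 1;
    // a half-texel inset on X fixes that.
    return lerp(s0, s1, saturate(slice - slice0));
}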

Having looked at the shader source, I can confirm Styves' explanation.

WITH THAT SAID, it's very much an ugly hack. Direct3D9 is literally over ten years old, and it's very unlikely that many Direct3D9-only cards capable of playing modern, reasonably-demanding games even exist. The same technique can be implemented way more elegantly using modern APIs, *please* don't continue the myth that D3D9 is relevant anymore.


Thanks for the replies, guys. I probably won't implement this on DX9 and will go straight to DX11 for this feature (eventually). It's too tricky and time-consuming, considering that most cards nowadays support DX10/DX11.

Thanks


So every 2D texture can be seen as a "depth" slice of the volume, right?

Exactly. :)

So do you think they then copy all the slices into a volume texture (with a pseudo-memcpy)?

And then use this "composed" volume texture in the shader?

I guess they wouldn't pass every slice as a separate shader variable, and would use some sort of volume texture instead... am I wrong?

EDIT: That would be too slow... so their approach for a 32x32x32 texture on DX9 is: no problem, just use a 1024x32 texture, where x/32 gives the slice (Z) and x%32 the local X, as in the sketch below.
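In cell coordinates the mapping works out to something like this (hypothetical helper names, just illustrating the address math for a 32^3 volume in a 1024x32 atlas):

float2 CellToAtlas(float3 cell)  // write side: (x, y, z) cell -> atlas texel
{
    return float2(cell.z * 32.0 + cell.x, cell.y);
}

float3 AtlasToCell(float2 texel) // read side: atlas texel -> (x, y, z) cell
{
    return float3(fmod(texel.x, 32.0), texel.y, floor(texel.x / 32.0));
}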

Your edit is exactly what I mentioned in my first post :) They use an unwrapped texture (1024x32 like in your example, or 4096x64 if you want to max out the resolution for the technique) and then offset by 32 (or 64) pixel increments for their "depth" layers.


You're right, I just figured this out late in the evening :-) I'll probably try it in the next few days.

thx

