
[DX9] RSM + Volume Rendering + GI


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.


#1 mauro78   Members   -  Reputation: 187


Posted 10 September 2013 - 03:15 AM

Hi all,

 

I've just completed basic support for Global Illumination in our proprietary DX9-based engine. A Reflective Shadow Map (RSM) is used to compute the first bounce of indirect illumination, and an interpolation scheme computes the indirect illumination for the whole scene. The final scene lighting is then composed from the direct + indirect contributions.

 

Our next step is to use one of the following techniques to increase performance and quality:

 

  • Light Propagation Volume
  • Radiance Hints

 

So here comes my question: is it possible to render to a volume texture in DX9, and if so, how?

I mean, let's say I want to sample the RSM and store the SH coefficients in a 3D grid using a volume texture. Is that possible in DX9, or should we go for DX10/DX11?

The goal is to reuse that computed volume texture to interpolate the (indirect) light into the final scene.

 

If so, can someone point me in the right direction (papers, links to websites)?

 

Thanks in advance

 

Mauro

 




#2 Styves   Members   -  Reputation: 1078


Posted 10 September 2013 - 04:44 AM

The original LPV approach was implemented on DX9 using unwrapped 3D textures IIRC - instead of rendering to a 3D texture, they used a 2D texture that was H pixels tall and W*D pixels wide (H being height, W being width and D being depth). Then when they rendered to a "depth layer" they just shifted the X coordinate when reading/writing to it.
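The addressing scheme Styves describes can be sketched as follows (a minimal Python illustration of the index math, with hypothetical W/H/D values; the shader-side arithmetic is the same):

```python
# Unwrapped 3D texture: a WxHxD volume stored as a (W*D) x H 2D texture,
# with the D depth slices laid out side by side along the X axis.

W, H, D = 32, 32, 32  # hypothetical volume dimensions

def volume_to_unwrapped(x, y, z):
    """Map a 3D voxel coordinate to a texel in the 2D unwrapped texture."""
    assert 0 <= x < W and 0 <= y < H and 0 <= z < D
    return (z * W + x, y)  # shift X by one slice width per depth layer

def unwrapped_to_volume(u, v):
    """Inverse mapping: recover the 3D voxel from a 2D texel."""
    return (u % W, v, u // W)

# Round-trip check: every voxel maps to a unique texel and back.
for z in range(D):
    for y in range(H):
        for x in range(W):
            u, v = volume_to_unwrapped(x, y, z)
            assert unwrapped_to_volume(u, v) == (x, y, z)
```

Because each slice is just a horizontal offset, writing to "slice z" on DX9 is an ordinary 2D render-target write with the viewport (or output X coordinate) shifted by `z * W`.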


Edited by Styves, 10 September 2013 - 04:44 AM.


#3 InvalidPointer   Members   -  Reputation: 1445


Posted 10 September 2013 - 05:08 AM

Having looked at the shader source, I can confirm Styves' explanation.

 

 

WITH THAT SAID, it's very much an ugly hack. Direct3D9 is literally over ten years old, and it's very unlikely that many Direct3D9-only cards capable of playing modern, reasonably-demanding games even exist. The same technique can be implemented way more elegantly using modern APIs, *please* don't continue the myth that D3D9 is relevant anymore.



#4 mauro78   Members   -  Reputation: 187


Posted 10 September 2013 - 06:20 AM

Thanks for the replies, guys. I probably won't implement this on DX9 and will go straight to DX11 for this feature (eventually). It's too tricky and time-consuming, considering that most cards nowadays support DX10/DX11.

 

Thanks



#5 mauro78   Members   -  Reputation: 187


Posted 10 September 2013 - 06:21 AM

The original LPV approach was implemented on DX9 using unwrapped 3D textures IIRC - instead of rendering to a 3D texture, they used a 2D texture that was H pixels tall and W*D pixels wide (H being height, W being width and D being depth). Then when they rendered to a "depth layer" they just shifted the X coordinate when reading/writing to it.

So each W-pixel-wide section of the 2D texture can be seen as one depth slice of the volume, right?



#6 Styves   Members   -  Reputation: 1078


Posted 10 September 2013 - 06:57 AM

Exactly. :)



#7 mauro78   Members   -  Reputation: 187


Posted 10 September 2013 - 09:08 AM

So do you think they finally copy all the slices into a volume texture (with a pseudo-memcpy)?

And then use this "composed" volume texture in the shader?

I guess they don't pass every slice as a separate shader variable... they use a sort of volume texture instead. Am I wrong?

EDIT: That would be too slow... so their approach is: a 32x32x32 texture on DX9? No problem! Just use a 1024x32 texture and use x/32 to compute the correct Z (and x%32 for the X within the slice).

#8 Styves   Members   -  Reputation: 1078


Posted 10 September 2013 - 05:06 PM

Your edit is exactly what I mentioned in my first post :) They use an unwrapped texture (1024x32 like in your example, or 4096x64 if you want to max out the resolution for the technique) and then offset by 32 (or 64) pixel increments for their "depth" layers.
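One caveat worth noting when sampling such an unwrapped texture: hardware bilinear filtering only operates within the 2D page, so interpolation along Z has to be done manually with two taps and a lerp (and taps near a slice border can bleed into the neighbouring slice unless you clamp). A sketch of the coordinate math in Python, with hypothetical dimensions and `tex2d` standing in for a bilinear 2D texture fetch:

```python
W, H, D = 32, 32, 32  # hypothetical 32x32x32 volume in a 1024x32 texture

def unwrapped_uv(x, y, z_slice):
    """Normalized 2D UV for voxel (x, y) in integer depth slice z_slice."""
    u = (z_slice * W + x + 0.5) / (W * D)  # +0.5 targets the texel center
    v = (y + 0.5) / H
    return (u, v)

def sample_volume(tex2d, x, y, z):
    """Manual trilinear sample: two 2D taps, lerped along fractional Z.
    tex2d(u, v) stands in for a bilinearly filtered 2D texture fetch."""
    z0 = min(int(z), D - 2)        # lower slice, clamped so z0+1 is valid
    t = z - z0                     # fractional position between slices
    a = tex2d(*unwrapped_uv(x, y, z0))
    b = tex2d(*unwrapped_uv(x, y, z0 + 1))
    return a * (1.0 - t) + b * t   # lerp along Z by hand
```

In an actual DX9 pixel shader the same thing is two `tex2D` fetches and a `lerp`; the function names and layout here are illustrative, not taken from the LPV source.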



#9 mauro78   Members   -  Reputation: 187


Posted 11 September 2013 - 02:45 AM

Your edit is exactly what I mentioned in my first post :) They use an unwrapped texture (1024x32 like in your example, or 4096x64 if you want to max out the resolution for the technique) and then offset by 32 (or 64) pixel increments for their "depth" layers.

You're right, I figured this out late in the evening :-) I'll probably try this in the next few days.

thx


Edited by mauro78, 11 September 2013 - 02:45 AM.







