
[Dx10] Render to 3d volume

Recommended Posts

Camembert    122
So, I have a 3d texture, and I want to fill it using a volume function (say, 3d perlin noise on the GPU). As I'm rather new to this, I have several questions:

- How do I set a particular slice of a 3d texture as a render target? Is this even possible (I know it wasn't in dx9)?
- Is there a way to fill the volume in a single rendering pass (instead of #slices of passes), using, say, the 3d texture coordinates as parameters to the volume function?
- Without reading back the data on the CPU, is there a way to "shift" the slices in the 3d texture by a certain index, or does the whole thing need to be re-volumized?

Thanks! :-)

Nik02    4348
1: Yes, this is possible; just create a view that maps the volume as an array of 2d surfaces. You can create a view in the "middle" of a resource by using offsets, if you need a single 2D slice view in your application.
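For reference, a minimal sketch of such a view, assuming a `device` and `volumeTex` already exist and the texture format is `R8G8B8A8_UNORM` (both are illustrative assumptions); `FirstWSlice`/`WSize` are the offsets mentioned above:

```cpp
#include <d3d10.h>

// Sketch: create a render-target view covering a single W-slice of a
// Texture3D. FirstWSlice offsets the view into the "middle" of the
// resource; WSize limits it to exactly one slice.
HRESULT CreateSliceRTV(ID3D10Device* device, ID3D10Texture3D* volumeTex,
                       UINT slice, ID3D10RenderTargetView** rtv)
{
    D3D10_RENDER_TARGET_VIEW_DESC desc = {};
    desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;  // must match the texture format
    desc.ViewDimension = D3D10_RTV_DIMENSION_TEXTURE3D;
    desc.Texture3D.MipSlice = 0;
    desc.Texture3D.FirstWSlice = slice;        // offset into the volume
    desc.Texture3D.WSize = 1;                  // view exactly one slice
    return device->CreateRenderTargetView(volumeTex, &desc, rtv);
}
```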

2: Yes; "unfold" the volume data into a large 2d view, and in the pixel shader calculate the 3d coordinate corresponding to the current 2d pixel. Then use the calculated 3d coordinate to feed your volumetric function (noise or whatever).
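The coordinate math for this unfolding can be sketched as follows; the single-row tile layout is an assumption (any layout works as long as the shader and the view agree on it):

```cpp
#include <cassert>

// Sketch of the "unfold" mapping: a W x H x D volume laid out as D tiles
// side by side in one (W*D) x H render target. Given a 2d pixel in that
// large target, recover the 3d texel it represents.
struct Texel3D { int x, y, z; };

Texel3D UnfoldedTo3D(int px, int py, int sliceWidth)
{
    Texel3D t;
    t.x = px % sliceWidth;  // position within the slice
    t.y = py;
    t.z = px / sliceWidth;  // which slice this pixel falls in
    return t;
}
```

In the pixel shader you would do the same arithmetic on the pixel position, normalize by the volume dimensions, and pass the result to the noise function.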

3: "Shifting" a texture usually needn't be more complicated than offsetting the texture coordinate you're addressing said texture with. If this is not your intention, please clarify the question.
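As a sketch of the coordinate-offset idea, assuming wrap addressing on the W axis (the function name is illustrative):

```cpp
#include <cassert>

// "Shifting" by offsetting the coordinate instead of moving data: after a
// shift of s, slice i of a depth-D volume simply reads slice (i + s) mod D.
int ShiftedSlice(int slice, int shift, int depth)
{
    return ((slice + shift) % depth + depth) % depth; // handles negative shifts
}
```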

Camembert    122
1) Neat! Forgot about offsets, thanks :)

2) I'm unsure how to unfold the volume and bind it as a target. I suppose the pixel shader aspect would be rather easy though. Do you have any samples or explanations on this technique? Maybe I'm missing something obvious :) (Again, I'm rather new to this).

3) I was thinking about raytracing a volume field that would "move" through a 3d texture. It was a stupid question, since, as you mention, I could render to incremented slices, store the initial offset and pass that data to the shader.

Thanks for the reply!

Nik02    4348
2: While the SDK states that you should create a view of the same dimensionality (2d, 3d etc.) as the resource you want to associate that view with, this is not strictly mandatory. You just need to be careful that you actually know how the memory is laid out.

This may have changed, if drivers have added validation regarding matching dimensionality of resources and their views. I haven't tried this in a while, but in the early days of D3D10 (beta era) it did work. I don't see why it wouldn't work still, because volume<->slice view mapping works in a very similar way.

In case this won't work anymore, you can still treat the volume as an array of 2d slices and loop through them in the geometry shader, setting the destination slice (render target index) in each loop iteration. The only limitation of this technique is that you can only bind a small number of render targets at a given time (8 IIRC), so if you have a deep volume, you have to do several draw calls to fill it. This is still much better than one slice at a time, however.
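The batching described above amounts to simple arithmetic; a sketch, assuming the 8-target limit (names are illustrative):

```cpp
#include <cassert>

// Split a depth-D volume into batches of at most maxRenderTargets slices,
// one draw call per batch.
int NumDrawCalls(int depth, int maxRenderTargets = 8)
{
    return (depth + maxRenderTargets - 1) / maxRenderTargets; // ceil division
}

// First slice covered by a given batch.
int BatchFirstSlice(int batch, int maxRenderTargets = 8)
{
    return batch * maxRenderTargets;
}
```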

