[DX10] working with Texture arrays (render to and sample in shaders)


In order to support multiple materials in my deferred renderer, I want to store the lighting response for all materials in a texture array.
Each material occupies one slice of the array.
The BRDF textures themselves are already working, but I'm having trouble rendering to the individual array slices and sampling from them.

1) Rendering to texture arrays.
As I understand it, if you use a Texture2D as a render target and it has an ArraySize greater than 1, you need to specify which slice to render to using the SV_RenderTargetArrayIndex semantic, right?
Unfortunately that semantic can't be written from the vertex shader, so this requires a 'dummy' geometry shader that passes the vertex shader outputs through to the pixel shader and adds the array index.
Is this the correct way of doing this?
Does this allow rendering the individual slices in different passes, or do I need to render all slices in a single pass?

2) Sampling from texture arrays.
This one, I'm even more confused about.
In HLSL you can define textures in many different ways, none of them throwing any errors:
Texture2D brdf;
Texture2D brdf[2];
Texture2DArray brdf;
Texture2DArray brdf[2];

Which one is the correct way?

The docs also indicate that when using texture arrays, you sample them using float3 instead of float2, with the last parameter being the array slice.
I assume getting the second slice (brdf[1]) would work as follows:
brdf.Sample(sampler, float3(texCoord, 1.0f));

Is this correct?

Whatever I do, I always seem to be working on the first array slice. I can even use completely out-of-bounds values (negative ones included) wherever I want, and I still end up rendering to the first slice and sampling from it as well.

3) Texture3D
I was thinking about using 3d volume textures instead of 2d texture arrays, but I couldn't find any examples on them, so I don't know how to render to them or sample from them either :(

How does this work?


1) Using SV_RenderTargetArrayIndex is correct. Everything the pixel shader renders will land in the slice specified by this index. Since the geometry shader writes this value, you can assign a slice at the per-primitive level; if a primitive needs to go to more than one slice, you can duplicate it in the geometry shader.
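A minimal pass-through geometry shader for this might look like the sketch below (the struct names and the g_targetSlice constant are invented for illustration). It also answers the per-pass question: since the slice index is just a shader output, you can render one slice per pass by changing a constant, or emit a primitive several times with different indices in a single pass.

```hlsl
// Hypothetical pass-through GS: forwards each triangle unchanged and
// tags it with the destination slice of the render target array.
struct GSInput
{
    float4 pos : SV_Position;
    float2 uv  : TEXCOORD0;
};

struct GSOutput
{
    float4 pos   : SV_Position;
    float2 uv    : TEXCOORD0;
    uint   slice : SV_RenderTargetArrayIndex;
};

cbuffer SliceData
{
    uint g_targetSlice; // set per pass from the application
};

[maxvertexcount(3)]
void GS(triangle GSInput input[3], inout TriangleStream<GSOutput> stream)
{
    [unroll]
    for (int i = 0; i < 3; ++i)
    {
        GSOutput o;
        o.pos   = input[i].pos;
        o.uv    = input[i].uv;
        o.slice = g_targetSlice; // route the whole triangle to one slice
        stream.Append(o);
    }
    stream.RestartStrip();
}
```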

2) You want Texture2DArray brdf; The other forms, such as Texture2D brdf[2] or Texture2DArray brdf[2], declare arrays of separate resource objects and get assigned unique consecutive texture slots. That isn't what you want.
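In HLSL that combination looks like the following (identifiers are illustrative); the z component of the coordinate selects the array slice, and for the second slice it should be 1.0f, as you guessed:

```hlsl
// Assumed names for illustration.
Texture2DArray brdf;          // one slice per material
SamplerState   linearSampler;

float4 SampleSecondSlice(float2 texCoord)
{
    // The z component is the (unnormalized) slice index:
    // 1.0f addresses slice 1, i.e. the second slice.
    return brdf.Sample(linearSampler, float3(texCoord, 1.0f));
}
```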

3) You won't go this route, but sampling a Texture2D brdf[2]; object would mean that you'd only have uniform indexing and that all shaders would have to take the same path - your shader would be coded brdf[i].Sample( ... ); where i would have to be a uniform constant variable.

4) I believe, though I've never done it, that for a Texture3D you can create a render target view spanning a range of W slices (via FirstWSlice/WSize in the view description) and pick the slice with SV_RenderTargetArrayIndex, much like the texture array case. If you instead wrap each 2D slice in its own render target view, you're limited to 8 simultaneously bound render targets.
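Sampling a Texture3D is also close to the array case, except the third coordinate is normalized to [0, 1] and filtered across slices, rather than being an integer slice index. A sketch, with invented names:

```hlsl
// Illustrative sketch: the z coordinate of a Texture3D lookup is
// normalized and trilinearly filtered, unlike an array's slice index.
Texture3D    brdfVolume;
SamplerState linearSampler;

float4 SampleVolume(float2 texCoord, float sliceCoord01)
{
    // sliceCoord01 in [0, 1]; values between slices blend adjacent slices.
    return brdfVolume.Sample(linearSampler, float3(texCoord, sliceCoord01));
}
```

That filtering across slices is the main practical difference: it's useful if your materials form a continuum, and harmful if they are discrete and must not blend into each other.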

5) I don't know how many materials you're rendering, but I expect that these textures may end up quite sparse, which would be a problem for you memory wise.

Thanks, got it working now :)
Got confused with the different kinds of arrays you can have.

5) I don't know how many materials you're rendering, but I expect that these textures may end up quite sparse, which would be a problem for you memory wise.

I've thought about this for some time, but having only a single lighting model is simply insufficient.
I can think of two ways to support multiple materials in a deferred renderer: dynamic branching, or BRDF texture lookups (either a texture array or a 3D texture).
Dynamic branching isn't really an option; it won't be fast enough.
BRDF textures seem like a good idea, and GPU Gems 2 & 3 hint that S.T.A.L.K.E.R. and Tabula Rasa use a similar system (at least I think they do).
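The branch-free lookup described above could be sketched like this (every name here is invented for illustration): a per-pixel material ID stored in the G-buffer selects the BRDF slice, so all pixels take the same shader path and only the slice index differs.

```hlsl
// Hypothetical deferred lighting lookup; all names are illustrative.
Texture2DArray brdfArray;      // one BRDF response texture per material
Texture2D      materialBuffer; // G-buffer channel holding the material ID
SamplerState   pointSampler;   // point sampling: IDs must not be filtered
SamplerState   linearSampler;

float4 ShadePixel(float2 screenUV, float2 brdfUV)
{
    // Material ID stored as a normalized 8-bit value; scale it back
    // to an integer slice index.
    float id    = materialBuffer.Sample(pointSampler, screenUV).r * 255.0f;
    float slice = round(id);

    // No dynamic branching: the slice index alone selects the material.
    return brdfArray.Sample(linearSampler, float3(brdfUV, slice));
}
```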

Right now my BRDF textures are 256x256, but I'm pretty confident I can make them even smaller without quality loss.
At 32 bits per texel that's 256 x 256 x 4 bytes, roughly 262 KB per array slice, which I think is fine.
To be honest I'm more worried about bandwidth than about running out of GPU memory :)

