
Array of samplers vs. texture array


Chris_F

I'm curious as to what the tradeoffs would be for using an array of samplers (via the bindless texture extension) over using texture arrays. It seems like there would likely be more overhead with sampler arrays. On the other hand, with texture arrays you are limited to 2048 slices on a lot of modern GPUs. You could potentially have far more than that with bindless, and each texture can have its own dimensions and sampler properties.

Hodgman

A texture array is a native resource type designed to let you access pixels in a big 3d volume of mip-mapped 2d slices.

 

An array of samplers is syntactic sugar over declaring a large number of individual samplers.

It's the same as having sampler0, sampler1, sampler2, etc... If you're using them in a loop that the compiler can unroll, then they're fine. If you're randomly indexing into the array, then I'd expect waterfalling to occur (i.e. if(i==0) use sampler0 elif(i==1) use sampler1, etc...)
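
For illustration, a minimal GLSL sketch of the two declarations (the names and array size are made up):

#version 430

// A texture array: one resource, the layer is just part of the texture coordinate.
uniform sampler2DArray atlas;

// An array of samplers: sugar over eight separate sampler uniforms. GLSL wants the
// index to be dynamically uniform; a divergent index is where you'd expect the
// if/else "waterfall" on hardware that can't index descriptors directly.
uniform sampler2D separate[8];

in vec2 uv;
flat in int layer;
out vec4 color;

void main()
{
    vec4 a = texture(atlas, vec3(uv, float(layer)));
    vec4 b = texture(separate[layer], uv);
    color = a + b; // contrived; just showing both access patterns side by side
}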

Ohforf sake

I'm not sure, but I think Kepler can genuinely "address" textures.

 

Here is a talk which (amongst other things) covers how arrays and bindless can be used together. Apparently there is a small overhead for bindless textures due to a "texture header cache".

 

https://developer.nvidia.com/content/how-modern-opengl-can-radically-reduce-driver-overhead-0

_the_phantom_
The main issue with 'bindless' is the extra fetch/indirection to get the data needed to look up the texture; when something is bound, that cost is paid up front and the result is shoved somewhere shared.

As to the OP's question:
If you take something like AMD's GCN, then a 'texture' is nothing more than a header block, 128 or 256 bits in size, which contains a pointer to the real texture data. When you sample, instructions are issued to read that block and then go off and fetch the data.

When it comes to a TextureArray, the index of the layer is part of the generated address and you are only going via one resource handle: textureHandle->Fetch(i, x, y); in C++-like code.

When it comes to an array of samplers/textures, you are effectively indexing into an array of multiple resource handles: textureHandles[i]->Fetch(x, y); at which point, as Hodgman mentions, you enter a world of divergent flow, with multiple if-else chaining and the usual issues that brings.

While the above is based on the public AMD GCN Southern Islands data, I would be surprised if NV didn't work in a similar manner.

Bindless, as mentioned, slots into this with an indirection on the texture data handle, as it'll have to come from another source instead of being pre-loaded into a local cache.

Where it really gets fun is when you combine bindless, sparse and array textures to reserve a large uncommitted address space and just page data in and out on demand (16k * 16k * 2048 is a HUGE address range!). With the recently released (but unspec'd!) AMD_sparse_texture_pool extension you can even control where the backing store is pulled from and limit its size.
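
Shader-side, a sparse texture can also report whether the texel you hit is actually backed by a committed page. A rough sketch, assuming ARB_sparse_texture2-style residency built-ins are available (the fallback colour is just for illustration):

#version 450
#extension GL_ARB_sparse_texture2 : require

// Storage for this is allocated sparse on the host side; pages are committed on demand.
uniform sampler2DArray sparse_atlas;

in vec2 uv;
flat in int layer;
out vec4 color;

void main()
{
    vec4 texel;
    int code = sparseTextureARB(sparse_atlas, vec3(uv, float(layer)), texel);
    // If the page behind this texel isn't committed, fall back to something obvious
    // (or record the missing tile in a feedback buffer for the streaming system).
    color = sparseTexelsResidentARB(code) ? texel : vec4(1.0, 0.0, 1.0, 1.0);
}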

Chris_F

Quoting Hodgman: "If you're randomly indexing into the array, then I'd expect waterfalling to occur."

 

I'm sure that is the case on old hardware. Old hardware doesn't even support bindless textures, though, so that is not really an issue anyway.

 

A quick experiment that renders 1024 random sprites per frame from an array of 32 images shows a performance difference of about 0.05 ms when comparing a TEXTURE_2D_ARRAY to an array of bindless handles stored in an SSBO.

 

Edit: I just changed the SSBO to a UBO and now there is no performance difference at all.
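
For what it's worth, the UBO variant is just the handle array declared in a uniform block instead of a buffer block; something like this (the block name and fixed array size are placeholders):

#version 430
#extension GL_ARB_bindless_texture : require

// Fixed-size array of bindless handles in a uniform block instead of an SSBO.
layout(binding = 0, std140) uniform texture_block
{
    sampler2D textures[32];
};

in vec2 tex_coord;
flat in int id;
out vec4 frag_color;

void main()
{
    frag_color = texture(textures[id], tex_coord);
}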


Hodgman

 

That's really cool. What kind of GPU are you using?

I'd be interested to see what your shader code looks like... You've got an array of samplers inside a UBO, and then you just index that array when fetching texture samples?

 

I wonder if these performance characteristics are reliable for all vendors that support the necessary extensions... Which vendors do support bindless; is it just nVidia?

Chris_F

 

 

Basically:

#version 430
#extension GL_ARB_bindless_texture : require

// Unsized array of bindless texture handles, fed from the SSBO bound below.
layout(binding = 0, std430) readonly buffer texture_buffer
{
    sampler2D textures[];
};

in vec2 tex_coord;
flat in int id;   // per-sprite texture index
out vec4 frag_color;

void main()
{
    frag_color = texture(textures[id], tex_coord);
}

// Host side: create the textures, get resident handles, upload them to the SSBO.
GLuint textures[32] = { 0 };
GLuint64 texture_handles[32] = { 0 };
glGenTextures(32, textures);

for (int i = 0; i < 32; ++i) {
    GLuint texture = textures[i];
    glTextureStorage2DEXT(texture, GL_TEXTURE_2D, 8, GL_SRGB8_ALPHA8, 128, 128);
    glTextureSubImage2DEXT(texture, GL_TEXTURE_2D, 0, 0, 0, 128, 128, GL_RGBA, GL_UNSIGNED_BYTE, image_buffer + i * 65536);
    glGenerateTextureMipmapEXT(texture, GL_TEXTURE_2D);
    glTextureParameteriEXT(texture, GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTextureParameteriEXT(texture, GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTextureParameteriEXT(texture, GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTextureParameteriEXT(texture, GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    // Get the handle after the sampling state is set (the handle bakes it in),
    // and make it resident before it is used in a shader.
    GLuint64 handle = glGetTextureHandleARB(texture);
    glMakeTextureHandleResidentARB(handle);
    texture_handles[i] = handle;
}

// 'buffer' is created elsewhere; upload the handles and bind it as the SSBO the shader reads.
glNamedBufferStorageEXT(buffer, sizeof(texture_handles), texture_handles, 0);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, buffer);

Tested this on a GTX 760, which is based on the GK104 version of Kepler. To my knowledge Kepler is the only hardware that currently supports bindless textures, but AMD and Intel could have it soon enough. It probably will never be available on older GPUs, as it is really more of an "OpenGL 5" hardware feature.


Chris_F

http://www.slideshare.net/CassEveritt/approaching-zero-driver-overhead

 

According to this, it seems that the index should remain constant per instance/draw. Just for fun, I tested what would happen if I chose a different index on a per-pixel basis. Surprisingly, it seems to work fine, though at about half the frame rate.


MJP

 


I'm quite sure that the cost will go up if you were to have divergence within a single warp. Try randomizing your index based on the pixel coordinate and see how it fares.

Chris_F

Just tried rendering a full-screen quad that uses a hash of the screen coordinates to pick a random index into an array of 262,144 sampler handles to 1x1 textures. I was only getting 8 fps. I would have liked to try an even larger array, but I was getting GL_INVALID_OPERATION at 521,728 textures. With 256 textures I get 22 fps. My understanding is that Kepler supports this, while AMD's GCN flat out does not.
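
Something along these lines reproduces that kind of test (the hash is just an illustrative integer hash, and texture_count is a placeholder uniform rather than what was actually used):

#version 430
#extension GL_ARB_bindless_texture : require

// Same unsized handle array as before, fed from an SSBO.
layout(binding = 0, std430) readonly buffer texture_buffer
{
    sampler2D textures[];
};

uniform uint texture_count; // e.g. 262144
out vec4 frag_color;

// Wang-style integer hash; any screen-coordinate hash would do here.
uint hash(uint x)
{
    x = (x ^ 61u) ^ (x >> 16);
    x *= 9u;
    x = x ^ (x >> 4);
    x *= 0x27d4eb2du;
    x = x ^ (x >> 15);
    return x;
}

void main()
{
    uvec2 p = uvec2(gl_FragCoord.xy);
    uint i = hash(p.x * 1973u + p.y * 9277u) % texture_count;
    // Per-pixel (non-uniform) index: the ARB spec wants dynamically uniform handles,
    // so this leans on the hardware being lenient, and divergence costs show up here.
    frag_color = texture(textures[i], vec2(0.5));
}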


