Texture Arrays


I got texture arrays working, and they seem like they could really cut down on gl* calls when drawing: I upload one uniform for the sampler and only need a single texture unit active via glActiveTexture(). The only problem is: what if I have different-sized textures? Would I have to pad them? For example, what if I had a 1024x1024 diffuse map with a 512x512 normal map? What if I wanted to store my normal map in a different format, such as a floating-point texture? Are there any workarounds for this with texture arrays? Also, are texture arrays considered pretty important to use for models nowadays? They seem like a good way to cut down on API calls to the GPU per draw call.
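In case it helps to see the pattern in code, here is a minimal, untested sketch of that setup (the helper and variable names are mine, not from any particular engine): one array texture, one texture unit, one sampler uniform.

```cpp
#include <GL/glew.h>   // or whichever GL loader you use
#include <algorithm>
#include <cmath>
#include <vector>

// Minimal sketch: one GL_TEXTURE_2D_ARRAY holding several same-sized RGBA8 images,
// all reachable through a single sampler uniform and a single texture unit.
// Assumes a current GL 4.2+ context; layerPixels holds tightly packed RGBA8 data.
GLuint CreateDiffuseArray(int width, int height,
                          const std::vector<const unsigned char*>& layerPixels)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glActiveTexture(GL_TEXTURE0);              // one unit for the whole array
    glBindTexture(GL_TEXTURE_2D_ARRAY, tex);

    // Immutable storage: every layer shares the same size, format and mip chain.
    GLsizei mips = 1 + (GLsizei)std::floor(std::log2((float)std::max(width, height)));
    glTexStorage3D(GL_TEXTURE_2D_ARRAY, mips, GL_RGBA8,
                   width, height, (GLsizei)layerPixels.size());

    for (size_t layer = 0; layer < layerPixels.size(); ++layer)
    {
        glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0,          // mip level 0
                        0, 0, (GLint)layer,              // x, y, layer
                        width, height, 1,                // one layer at a time
                        GL_RGBA, GL_UNSIGNED_BYTE, layerPixels[layer]);
    }
    glGenerateMipmap(GL_TEXTURE_2D_ARRAY);

    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}

// Fragment-shader side: a single sampler2DArray uniform; the layer index is
// passed in per draw/instance (variable names here are just placeholders).
const char* kFragSnippet = R"(
    uniform sampler2DArray uDiffuseArray;
    in vec2 vUV;
    flat in float vLayer;
    out vec4 oColor;
    void main() { oColor = texture(uDiffuseArray, vec3(vUV, vLayer)); }
)";
```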


Basic texture arrays require the same texture size and format for every layer. Texture arrays help you increase batch sizes by avoiding texture switches. You will still usually have shader switches, large uniform updates (animated models), etc., which are not as easily batched.

To get around the size and format restrictions, just use multiple texture arrays. For the size issue you could also create texture atlases, which help keep the number of used texture units low.
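One way to read "use multiple texture arrays" in code, purely as a sketch (the bucket struct and helper below are hypothetical): group textures by their (width, height, internal format) and give each group its own array.

```cpp
#include <GL/glew.h>   // or whichever GL loader you use
#include <map>
#include <tuple>

// Hypothetical bucketing: one texture array per (width, height, internalFormat) combination.
struct ArrayBucket
{
    GLuint texture = 0;
    int    nextFreeLayer = 0;
};

using BucketKey = std::tuple<int, int, GLenum>;   // width, height, internal format
static std::map<BucketKey, ArrayBucket> gBuckets;

// Find (or lazily create) the array that images of this shape/format should go into.
ArrayBucket& AcquireBucket(int width, int height, GLenum internalFormat, int maxLayers)
{
    BucketKey key{ width, height, internalFormat };
    auto it = gBuckets.find(key);
    if (it != gBuckets.end())
        return it->second;

    ArrayBucket bucket;
    glGenTextures(1, &bucket.texture);
    glBindTexture(GL_TEXTURE_2D_ARRAY, bucket.texture);
    glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, internalFormat, width, height, maxLayers);
    return gBuckets.emplace(key, bucket).first->second;
}
```

Meshes whose textures land in the same bucket can then be batched together; a 1024x1024 RGBA8 diffuse map and a 512x512 RGBA8 normal map would simply live in two different buckets (or share one array via the atlas and mip-level tricks discussed in this thread).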

I believe that the sparse texture extension would allow you to create a 1024x1024xN texture array and then for the slice with the 512x512 texture you would simply leave the unused mip level unallocated. If you need for example both RGBA8 and RGBA32F then create two different texture arrays.
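A rough sketch of how that could look with ARB_sparse_texture, assuming the extension is supported and that each whole mip level of a layer is either committed or skipped in one go (names are placeholders, not a tested implementation):

```cpp
#include <GL/glew.h>   // needs ARB_sparse_texture

// Allocate a sparse 1024x1024xN RGBA8 array: storage is virtual until committed.
GLuint CreateSparseArray(int layerCount)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D_ARRAY, tex);

    // Must be flagged as sparse before the storage call.
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_SPARSE_ARB, GL_TRUE);
    glTexStorage3D(GL_TEXTURE_2D_ARRAY, 11, GL_RGBA8, 1024, 1024, layerCount);
    return tex;
}

// Commit physical pages only for the mips a layer actually uses. A full 1024x1024
// layer starts at mip 0; a 512x512 layer starts at mip 1, leaving mip 0 unbacked.
void CommitLayerFromMip(GLuint tex, int layer, int firstMip)
{
    glBindTexture(GL_TEXTURE_2D_ARRAY, tex);

    // Only the first NUM_SPARSE_LEVELS_ARB levels can have their commitment changed;
    // the tiny "mip tail" levels below the page size are always resident.
    GLint sparseLevels = 0;
    glGetTexParameteriv(GL_TEXTURE_2D_ARRAY, GL_NUM_SPARSE_LEVELS_ARB, &sparseLevels);

    for (int mip = firstMip; mip < sparseLevels; ++mip)
    {
        GLsizei size = 1024 >> mip;
        glTexPageCommitmentARB(GL_TEXTURE_2D_ARRAY, mip,
                               0, 0, layer,     // region origin (x, y, layer)
                               size, size, 1,   // whole mip level of this one layer
                               GL_TRUE);        // commit
    }
}
```

After committing, the 512x512 image would be uploaded to mip 1 of its layer with an ordinary glTexSubImage3D call.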

You can upload your 512x512 texture into mip level 1 (assuming 0-index) of another layer, and in your shader specify an lod bias so you effectively use mip level 1 as your "base level".
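Roughly what that suggestion looks like in code, as an untested sketch (uLodBias and the other names are assumptions): upload the half-size image to mip 1 of its layer, then add a +1.0 bias when sampling that layer so filtering starts from mip 1.

```cpp
#include <GL/glew.h>   // or whichever GL loader you use

// Upload the smaller map one mip level down: a 512x512 image becomes mip 1 of a
// layer in a 1024x1024 array, so the rest of its mip chain still lines up.
void UploadHalfSizeLayer(GLuint arrayTex, int layer, const void* pixels512)
{
    glBindTexture(GL_TEXTURE_2D_ARRAY, arrayTex);
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 1,          // mip level 1, not 0
                    0, 0, layer,
                    512, 512, 1,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels512);
}

// Shader side: the optional bias argument of texture() shifts the lod so mip 1
// behaves as that layer's base level. uLodBias would be 0.0 for full-size layers
// and 1.0 for half-size ones.
const char* kFragSnippet = R"(
    uniform sampler2DArray uMaps;
    uniform float uLodBias;
    in vec2 vUV;
    flat in float vLayer;
    out vec4 oColor;
    void main() { oColor = texture(uMaps, vec3(vUV, vLayer), uLodBias); }
)";
```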

I believe that the sparse texture extension would allow you to create a 1024x1024xN texture array and then for the slice with the 512x512 texture you would simply leave the unused mip level unallocated. If you need for example both RGBA8 and RGBA32F then create two different texture arrays.

I've read about sparse textures, but I never knew the context in which they'd be helpful. I always thought they'd be useful for things like procedural terrain rendering and John Carmack's megatextures. That's the thing with learning modern OpenGL from The OpenGL Programming Guide (8th Edition): it goes into detail on how each API function works, but doesn't provide much context on how they could be used in real-world techniques. Providing some examples would really help reinforce what the API's actually doing. Then, once I'm familiar with how something works, and why it's used that way, I can develop my own techniques from there. Requiring all textures in an array to be the same format makes sense to me too. I could have multiple texture arrays for the different formats required by whichever rendering technique I'm using for my meshes.

I'm finding the OpenGL SuperBible (6th Edition) to be more helpful there!

You can upload your 512x512 texture into mip level 1 (assuming 0-index) of another layer, and in your shader specify an lod bias so you effectively use mip level 1 as your "base level".

I hadn't thought of that, and it sounds like a good idea. Since I'm only expecting power-of-two textures, I could size the array for the largest texture and let each smaller texture start at whichever mip level matches its resolution. I've read about mipmap bias controls, but I'm not too familiar with how to actually use them yet. Your solution also wastes less memory than mine: my idea was to upscale the smaller textures to match, but that quadruples the texture memory for every level you scale up. For example, going from 256x256 to 1024x1024 would bloat the smaller texture 16x in memory. I could down-sample instead, which would probably yield better results, since it follows the mipmap methodology and reduces the memory footprint. Of course, for optimal quality and performance, same-sized textures should probably be provided in the first place.
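For power-of-two sizes the numbers above fall out of two small formulas; a purely illustrative sketch to make the arithmetic explicit: the per-layer bias is log2(arraySize / textureSize), and the upscaling cost is the square of that ratio.

```cpp
#include <cmath>

// How many mip levels down a smaller texture sits in an array of larger layers,
// which is also the lod bias to sample it with (power-of-two sizes assumed).
float LodBiasForLayer(int arraySize, int textureSize)
{
    return std::log2(float(arraySize) / float(textureSize));   // 1024 vs 512 -> 1.0
}

// Memory bloat factor if the smaller texture were upscaled to fit instead:
// (1024 / 256)^2 == 16x, matching the example above.
float UpscaleBloatFactor(int arraySize, int textureSize)
{
    float ratio = float(arraySize) / float(textureSize);
    return ratio * ratio;
}
```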

