Selecting texture formats

When creating a new dynamic texture, there are so many texture formats to choose from! Obviously you pick what is right for the job, as textures are used for a lot more than just texturing, and different image sources have different types of image data.

I'm assuming that data on the GPU is converted on the fly to whatever the GPU wants to use, meaning you can use different texture formats and everything still "works", but I could do with an expert opinion or two on this subject.

Is there anything I should keep in mind with regards to texture format selection and consistency across an application? I just read here that BGRA is a bit more efficient on the GPU than RGBA, but then I looked back at my existing code and I see I have some texture initialization code that uses RGBA and also that my swap chain uses BGRA. Am I potentially creating headaches for myself, or efficiency issues, if I don't try to use the same format everywhere? Is the chosen format abstracted away from me on the GPU? Anything else I should know?

I'm assuming that data on the GPU is converted on the fly to whatever the GPU wants to use, meaning you can use different texture formats and everything still "works",

The texture-fetch hardware converts from the in-memory representation into floats, on demand as the shader executes sampling instructions.
No matter whether it's 8bit, 8bit+sRGB, DXT/BC, 16bit, half-float... they all end up the same in the shader. Different formats may have different costs on different hardware though -- e.g. a GPU might be able to fetch a 32bpp pixel in one cycle, but have to spend 2 cycles to fetch a 64bpp pixel, etc...
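To put rough numbers on the bandwidth point, here's a little sketch -- the per-texel storage sizes are real, but the cycle counts in the quote above are purely illustrative, since actual fetch cost is entirely GPU-dependent:

// Sketch: approximate storage cost per texel for a few common DXGI formats.
// All of these are read back as float4 in the shader; only the memory
// footprint (and therefore the fetch/bandwidth cost) differs.
#include <dxgiformat.h>
#include <cstdio>

static unsigned BitsPerTexel(DXGI_FORMAT format)
{
    switch (format)
    {
    case DXGI_FORMAT_BC1_UNORM:            return 4;   // block-compressed: 8 bytes per 4x4 block
    case DXGI_FORMAT_BC3_UNORM:            return 8;   // block-compressed: 16 bytes per 4x4 block
    case DXGI_FORMAT_R8G8B8A8_UNORM:
    case DXGI_FORMAT_R8G8B8A8_UNORM_SRGB:
    case DXGI_FORMAT_B8G8R8A8_UNORM:       return 32;
    case DXGI_FORMAT_R16G16B16A16_FLOAT:   return 64;  // half-float, still sampled as float4
    case DXGI_FORMAT_R32G32B32A32_FLOAT:   return 128;
    default:                               return 0;   // not covered by this sketch
    }
}

int main()
{
    std::printf("RGBA8:   %u bits/texel\n", BitsPerTexel(DXGI_FORMAT_R8G8B8A8_UNORM));
    std::printf("RGBA16F: %u bits/texel\n", BitsPerTexel(DXGI_FORMAT_R16G16B16A16_FLOAT));
    std::printf("BC1:     %u bits/texel\n", BitsPerTexel(DXGI_FORMAT_BC1_UNORM));
}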

I just read here that BGRA is a bit more efficient on the GPU than RGBA, but then I looked back at my existing code and I see I have some texture initialization code that uses RGBA and also that my swap chain uses BGRA. Am I potentially creating headaches for myself, or efficiency issues, if I don't try to use the same format everywhere?

Windows is a pain in the arse when it comes to RGBA vs BGRA... (yes, Windows, not the GPU).
IIRC, D3D9 required you to use a BGRA swap chain, but then D3D10 required you to use RGBA... and now with D3D11 I think you can choose.
As far as I can tell, this must've been due to interactions with the Windows software-rendered compositor -- presumably it was hardcoded to use BGRA pixels under WinXP?

On the GPU side of things, most GPUs have a fixed-function configurable "crossbar" between the shader and the texture memory, which allows it to swizzle the components for free. e.g. the driver can tell the GPU to convert from RGBA memory to BGRA if it wanted to... or GARB, or any other combination it needs.
So at the lowest level, there'd be no difference between RGBA and BGRA format textures, except for this swizzle state. Both of them are the INT_8_8_8_8_UNORM format, but with different swizzle masks.
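To make the swizzle point concrete, here's a tiny CPU-side sketch -- the same four bytes, just named in a different order:

// Sketch: the same 32-bit texel viewed as RGBA8 vs BGRA8 -- only the component
// order (the "swizzle") differs; the bits themselves are identical.
#include <cstdint>
#include <cstdio>

int main()
{
    const uint8_t rgba[4] = { 0x10, 0x20, 0x30, 0x40 };   // R, G, B, A in memory order

    // Reordering to BGRA is just a swap of bytes 0 and 2 -- exactly the kind of
    // per-component remapping the GPU's crossbar can do for free at fetch time.
    const uint8_t bgra[4] = { rgba[2], rgba[1], rgba[0], rgba[3] };

    std::printf("RGBA bytes: %02X %02X %02X %02X\n", rgba[0], rgba[1], rgba[2], rgba[3]);
    std::printf("BGRA bytes: %02X %02X %02X %02X\n", bgra[0], bgra[1], bgra[2], bgra[3]);
}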

Anything else I should know?

Be careful about format naming conventions, as these change depending on the API.
D3D9 names the channel occupying the most-significant-bits first (big endian) -- so 32bit RGBA means that A is masked by 0xFF and R is masked by 0xFF000000.
D3D11 names the channel occupying the least-significant-bits first (little endian) -- so 32bit RGBA means that A is masked by 0xFF000000 and R is masked by 0xFF.

So D3D11 RGBA == D3D9 ABGR, and D3D11 BGRA == D3D9 ARGB!
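To make that concrete, here's a little sketch of the channel masks for a D3D11 R8G8B8A8 texel (on the little-endian CPUs that D3D targets):

// Sketch: channel masks for a 32-bit texel in DXGI_FORMAT_R8G8B8A8_UNORM.
// The same memory layout is what D3D9 would have called D3DFMT_A8B8G8R8.
#include <cstdint>
#include <cstdio>

int main()
{
    const uint32_t texel = 0x80604020;  // a packed pixel, read as one 32-bit word

    const uint8_t r = (texel >>  0) & 0xFF;  // 0x20 -- R in the least-significant bits
    const uint8_t g = (texel >>  8) & 0xFF;  // 0x40
    const uint8_t b = (texel >> 16) & 0xFF;  // 0x60
    const uint8_t a = (texel >> 24) & 0xFF;  // 0x80 -- A in the most-significant bits

    std::printf("R=%02X G=%02X B=%02X A=%02X\n", r, g, b, a);
}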

Thanks for the detailed write-up. So I guess for me, the moral of the story, given that I'm only supporting D3D11, is to just be consistent in my code, and to use the format that works best for me when writing dynamic texture data with the CPU.
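For reference, here's roughly what that CPU-write path looks like for a D3D11 dynamic texture -- just a sketch, with placeholder helper names and no error handling:

// Sketch of the CPU-write path for a D3D11 dynamic texture.
#include <d3d11.h>
#include <cstdint>
#include <cstring>

ID3D11Texture2D* CreateDynamicRGBA8(ID3D11Device* device, UINT width, UINT height)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width            = width;
    desc.Height           = height;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;  // pick one format and stick with it
    desc.SampleDesc.Count = 1;
    desc.Usage            = D3D11_USAGE_DYNAMIC;          // CPU-writable, GPU-readable
    desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;
    desc.CPUAccessFlags   = D3D11_CPU_ACCESS_WRITE;

    ID3D11Texture2D* texture = nullptr;
    device->CreateTexture2D(&desc, nullptr, &texture);
    return texture;
}

void UploadPixels(ID3D11DeviceContext* context, ID3D11Texture2D* texture,
                  const uint8_t* srcPixels, UINT width, UINT height)
{
    // Map with WRITE_DISCARD and copy row by row, respecting RowPitch,
    // which may be wider than width * 4 bytes.
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    if (SUCCEEDED(context->Map(texture, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
    {
        for (UINT y = 0; y < height; ++y)
        {
            std::memcpy(static_cast<uint8_t*>(mapped.pData) + y * mapped.RowPitch,
                        srcPixels + y * width * 4,
                        width * 4);
        }
        context->Unmap(texture, 0);
    }
}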

The texture-fetch hardware converts from the in-memory representation into floats, on demand as the shader executes sampling instructions.
No matter whether it's 8bit, 8bit+sRGB, DXT/BC, 16bit, half-float... they all end up the same in the shader.


Small correction here: the hardware will convert UNORM, SNORM, FLOAT, and SRGB formats to single-precision floats when sampled by a shader. UINT and SINT formats are always interpreted as integers, and will be read as a 32-bit int or uint in the shader. Since they're always interpreted as integers, POINT is the only valid filtering mode for the integer formats.
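For example, here's a sketch of what an integer-format texture looks like on the C++ side (the helper name is just for illustration); on the HLSL side it would be declared as Texture2D<uint4>:

// Sketch: creating an integer-format texture -- the shader sees uint4, never float4.
#include <d3d11.h>

ID3D11Texture2D* CreateUintTexture(ID3D11Device* device, UINT width, UINT height)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width            = width;
    desc.Height           = height;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_R8G8B8A8_UINT;  // UINT: read as integers in the shader
    desc.SampleDesc.Count = 1;
    desc.Usage            = D3D11_USAGE_DEFAULT;
    desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;

    ID3D11Texture2D* texture = nullptr;
    device->CreateTexture2D(&desc, nullptr, &texture);

    // In HLSL this is declared as Texture2D<uint4> and usually read with Load();
    // if it is sampled, only POINT filtering is legal for integer formats.
    return texture;
}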

