OpenGL Performance and Optimization

12 comments, last by 21st Century Moose 12 years, 11 months ago

See: http://www.opengl.or...and_pixel_reads

And if you are interested, most GPUs like chunks of 4 bytes. In other words, RGBA or BGRA is preferred. RGB and BGR are considered bizarre since most GPUs, CPUs, and other chips don't handle 24-bit values natively. This means the driver converts your RGB or BGR to what the GPU prefers, which is typically BGRA.

Also read the section following it on "Image Precision" - http://www.opengl.or...Image_precision

I can produce a small test app that confirms this 100% - GL_BGRA is up to 6 times faster than GL_RGB, even on NVIDIA hardware. Your hardware stores textures internally in BGRA order (more or less regardless of what you specify for internalformat), so sending data in any other format will cause a slowdown by requiring it to be converted. Formats like GL_RGB are nothing more than cruddy old leftovers from the days of SGI workstations and 3DFX cards.
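For illustration, here is a minimal sketch of an upload that matches the layout described above. It assumes a current GL context, and that width, height, and pixels are placeholders for your own data (already swizzled into BGRA byte order):

/* Sketch: upload pixels in the layout most desktop GPUs store natively.
   Assumes `pixels` points to width*height*4 bytes in BGRA order. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);

/* internalformat stays RGBA8; the *external* format/type tell the driver
   the data is already in its preferred order, so no CPU-side conversion. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixels);

GL_BGRA with GL_UNSIGNED_INT_8_8_8_8_REV is the combination commonly cited as the driver's fast path on desktop hardware; the same data with GL_RGB forces a row-by-row repack.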

It does make sense, although I suppose the primary concern for some people (even if it isn't necessarily a problem on today's hardware, depending on the game) is the wasted space. Most of the image data used in any game will be texture data, which doesn't always require an alpha channel, so saving it in a 24-bit format will save quite a bit of space.
It's not wasted space. OpenGL is going to expand it to 4 components anyway ("the driver converts your RGB or BGR to what the GPU prefers, which is typically BGRA"), so not only have you not saved space, you've also potentially slowed your program down. A good definition of "waste" is unnecessary use. Those extra 8 bits may not be used for rendering (although you could do something clever like packing extra data into the alpha channel for use in a shader), but they will be used to make your program faster: textures will load faster, and glTexSubImage2D updates will be faster. By that definition it's not "waste"; it's putting the extra storage to use.
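As a rough sketch of the kind of update being described (tex, x, y, w, h, and updatedPixels are placeholder names, not anything from the posts above), a glTexSubImage2D call in the matching format lets the driver copy the data straight through:

/* Sketch: updating a sub-rectangle of an existing RGBA8 texture.
   Because the client data is BGRA / 8_8_8_8_REV, the driver can usually
   copy it directly instead of converting 24-bit RGB rows first. */
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0,
                x, y, w, h,                       /* region to update      */
                GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV,
                updatedPixels);                   /* w*h*4 bytes, BGRA     */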

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.


It does make sense, although I suppose the primary concern for some people (even if it isn't necessarily a problem on today's hardware, depending on the game) is the wasted space. Most of the image data used in any game will be texture data, which doesn't always require an alpha channel, so saving it in a 24-bit format will save quite a bit of space.
Most hardware won't actually support 24-bit RGB textures natively, though -- a lot of drivers will automatically add an alpha channel to pad your texture out to 32 bits.

Ideally, your RGB/BGR textures should be in DXT format, which provides a 6:1 compression ratio on 24-bit RGB inputs, saving RAM and improving the performance of texture fetches.
DXT sucks if your textures are sufficiently low-res, but otherwise yeah: if you're so concerned about memory usage then it's a far more practical (as in it actually works) approach.
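To make that concrete, here is a hedged sketch of uploading DXT1 data that was compressed offline. It assumes the GL_EXT_texture_compression_s3tc extension is available, and dxtData, width, and height are placeholders:

/* Sketch: upload pre-compressed DXT1 blocks directly.
   DXT1 stores each 4x4 texel block in 8 bytes (4 bits per texel),
   i.e. 6:1 versus 24-bit RGB, and the GPU samples it natively. */
GLsizei dxtSize = ((width + 3) / 4) * ((height + 3) / 4) * 8;
glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                       GL_COMPRESSED_RGB_S3TC_DXT1_EXT,
                       width, height, 0,
                       dxtSize, dxtData);

Alternatively, you can pass GL_COMPRESSED_RGB_S3TC_DXT1_EXT as the internalformat to a normal glTexImage2D call and let the driver compress at load time, though offline compression generally gives better quality and faster loads.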

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

