Texture formats

GL_INTERNALFORMAT_PREFERRED is supposed to give you the internal format the driver is actually going to use. So if GL_INTERNALFORMAT_PREFERRED returns GL_RGB, the driver is saying that is how it plans on storing the data internally. So either newer Radeon cards have hardware support for 24-bit texture formats, or AMD's implementation of ARB_internalformat_query2 has its pants on fire.
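For reference, a minimal sketch of the query being discussed; it assumes a current GL 4.3 context (or ARB_internalformat_query2) and a function loader that exposes glGetInternalformativ, and the choice of GL_RGB8 as the queried format is just illustrative:

    #include <stdio.h>
    #include <glad/glad.h> /* any loader exposing glGetInternalformativ will do */

    /* Ask the driver which internal format it claims it would actually use
     * for a requested internal format. Requires a current GL context. */
    static void print_preferred(GLenum requested)
    {
        GLint preferred = 0;
        glGetInternalformativ(GL_TEXTURE_2D, requested,
                              GL_INTERNALFORMAT_PREFERRED, 1, &preferred);
        printf("requested 0x%04X, driver prefers 0x%04X\n",
               (unsigned)requested, (unsigned)preferred);
    }

    /* Usage: print_preferred(GL_RGB8); or print_preferred(GL_COMPRESSED_RGB8_ETC2); */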

The specification is actually a lot looser than that, and internal formats just specify the minimum that the driver will give you; the driver is allowed to give you more. Section 8.5.1 of the core GL 4.3 spec clarifies that this is even true of the required sized internal formats.

It's not surprising that you get GL_RGB8 as the preferred internal format here: the behaviour of these internal formats is to read the r, g and b components of a texture during sampling but always return 1 for alpha, and the preferred internal format must preserve that behaviour. If the driver gave you an RGBA internal format instead, and you actually used it, the behaviour during sampling could change (if, say, your source data had anything other than 255 in the alpha byte).

I think you're viewing all of this as if it were a lower-level description of what actually happens in hardware, whereas it's not. We're still at a quite high-level abstraction here, and GL isn't specifying anything to do with hardware; it's specifying what the implementation does.

Think of this as being somewhat similar to malloc in C; if you need - say - 64 bytes allocated, malloc can satisfy that request by allocating exactly 64 bytes. Or it can also satisfy it by allocating 128 bytes if that's friendlier for your hardware (perhaps by aligning the allocation to a hypothetical 128 byte wide cache line).


Well, that's a crying shame, because having a way to query the implementation about the real details would be invaluable. I would love for the driver to tell me what format it really wants through the API; that way I can convert my assets to that format during program installation and rest easy at night knowing that the driver isn't doing anything absurd behind my back every time I load a texture.

OK, so I'm still uncertain of something. I tried GL_INTERNALFORMAT_PREFERRED again, this time with GL_COMPRESSED_RGB8_ETC2. I am 100% certain that this card doesn't have hardware support for ETC2 textures, which means the driver has to be converting them to uncompressed or maybe re-compressing them as something else like S3TC/BPTC. Despite that, I'm getting back GL_COMPRESSED_RGB8_ETC2 as the preferred internal format.

Surely this isn't correct behavior, otherwise what good is GL_INTERNALFORMAT_PREFERRED at all?

Without ARB_internalformat_query2, drivers had no way to give you useful information. Even if they do not always provide useful hints to you, the fact that sometimes they can is incredibly useful. It's especially beneficial for integrated graphics solutions where you can still take huge performance hits by using a suboptimal format.

When the driver has to convert a compressed format on a more recent GPU, it may be using the same computational resources you use for shaders to do so rather than using dedicated hardware or using the CPU before sending it across the bus. I have absolutely no idea what your 5850 is doing with ETC2 specifically, but it is a DX11 card and so it does have pretty good computational abilities. Did you test ETC2 and find it is being decompressed by the x86 CPU before being sent to the video card? If that's happening, I agree it is weird for AMD to consider ETC2 a preferred format.

I have not yet tested it, but I don't see why it should be considered a preferred format even if it is using GPU compute to do the conversion. Either way it is suboptimal, and absolutely pointless. Compressing to ETC2 only to convert to uncompressed means you save no space in graphics memory and take an unnecessary hit in visual quality. Re-compressing to another compressed format means you lose even more quality.

Compression that is decoded on the card still provides a ton of benefits. It reduces the size of data transferred across the bus (a massive performance bottleneck). It reduces the size of your distributables.

Your query is basically saying "I have ETC2 assets, how do you want me to send them to you Mr. video card driver?" The driver doesn't know if you're going to be bottlenecked by bus bandwidth or not, so it wouldn't be right to tell you to send the data uncompressed. The driver also doesn't know if you're willing to trade even more loss of quality for performance, so it can't recommend a different compressed format. A new compressed format might even result in larger data sizes, going back to the bus bandwidth issue. The query most definitely cannot assume you still have access to the original art assets and can convert to a different compressed format from the original source, then change all necessary code and recompile to use a new format. So it tells you to use the compressed format you're already using.

What problem are you trying to solve, anyway? From what you've said, I can't really tell what help to provide. It's obvious you're upset that the query isn't giving you what you wanted, but people here can't help you figure out how to get the information you want if you won't say what that information is.

Well, basically what I'd like to do is compress all of my assets using a lossy codec similar to JPEG for distribution. During installation I would like to find out what texture formats are actually supported by the hardware (the ones that will give the best performance and quality) and then do a one time conversion (e.g. It discovers support for ETC2, so it encodes JPEG -> ETC2. If the card doesn't support ETC2, then maybe it discovers support for RGTC and does JPEG -> RGTC instead.) What I don't want to do is JPEG -> ETC2 thinking the GPU supports it natively and then end up having the driver do ETC2 -> RGTC silently. If that is the case then it would be better for me to do JPEG -> RGTC from the start.
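A rough sketch of what that install-time check could look like. The priority order, the specific formats, and the use of extension strings as a stand-in for "real" hardware support are my own assumptions, since (as discussed above) the API itself won't tell you whether a format is emulated:

    #include <stdbool.h>
    #include <string.h>

    /* Hypothetical install-time pick of a transcode target. Extension strings
     * are only a rough proxy for native support; desktop drivers expose ETC2
     * through GL 4.3 / ARB_ES3_compatibility even when they decompress it in
     * software, which is exactly the problem described in this thread. */
    static bool has_extension(const char *name)
    {
        GLint i, n = 0;
        glGetIntegerv(GL_NUM_EXTENSIONS, &n);
        for (i = 0; i < n; ++i)
            if (strcmp((const char *)glGetStringi(GL_EXTENSIONS, (GLuint)i), name) == 0)
                return true;
        return false;
    }

    static GLenum pick_transcode_target(void)
    {
        if (has_extension("GL_ARB_texture_compression_bptc"))
            return GL_COMPRESSED_SRGB_ALPHA_BPTC_UNORM;
        if (has_extension("GL_EXT_texture_compression_s3tc"))
            return GL_COMPRESSED_RGBA_S3TC_DXT5_EXT;
        return GL_SRGB8_ALPHA8; /* last resort: store uncompressed */
    }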

You can use glHint on GL_TEXTURE_COMPRESSION_HINT, let the GPU pick the texture compression by using GL_COMPRESSED_RGB/A, then check what format was used.

If there's a more straightforward way, hopefully someone else can chime in.
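Something along these lines, as a sketch; width, height and pixels are assumed to hold your uncompressed RGBA8 source image:

    /* Upload with a generic compressed format and ask what the driver chose. */
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glHint(GL_TEXTURE_COMPRESSION_HINT, GL_NICEST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    GLint chosen = 0, compressed = GL_FALSE;
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &chosen);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_COMPRESSED, &compressed);
    /* Per the spec, 'chosen' should come back as a specific compressed format
     * (e.g. an S3TC enum), not the generic GL_COMPRESSED_RGBA constant. */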

Ok, that didn't do much good. I used GL_COMPRESSED_SRGB_ALPHA with glTexImage2D and glGetTexLevelParameteriv is telling me the internalformat is GL_COMPRESSED_SRGB_ALPHA.

Is this yet another bug in AMD's drivers? I quote the GL 4.3 spec:


Generic compressed internal formats are never used directly as the internal formats of texture images. If internalformat is one of the six generic compressed internal formats, its value is replaced by the symbolic constant for a specific compressed internal format of the GL's choosing with the same base internal format. If no specific compressed format is available, internalformat is instead replaced by the corresponding base internal format. If internalformat is given as or mapped to a specific compressed internal format, but the GL can not support images compressed in the chosen internal format for any reason (e.g., the compression format might not support 3D textures), internalformat is replaced by the corresponding base internal format and the texture image will not be compressed by the GL.
