Chris_F

Am I using TexStorage correctly? (DRIVER BUGS)


I'm trying to compress a texture to BPTC using this code, but I am getting a corrupted image.

glTextureStorage2DEXT(texture, GL_TEXTURE_2D, 10, GL_COMPRESSED_SRGB_ALPHA_BPTC_UNORM, 512, 512);
glTextureSubImage2DEXT(texture, GL_TEXTURE_2D, 0, 0, 0, 512, 512, GL_RGBA, GL_UNSIGNED_BYTE, image_buffer);

On the other hand, if I use this, it works.

glTextureImage2DEXT(texture, GL_TEXTURE_2D, 0, GL_COMPRESSED_SRGB_ALPHA_BPTC_UNORM, 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, image_buffer);


glTexSubImage2D has some limitations when working with compressed formats.

Did you check that the command succeeded? (glGetError() returns GL_NO_ERROR)
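For example, something along these lines would narrow down which call is at fault (a minimal sketch, assuming the texture is bound and image_buffer is your 512x512 RGBA source):

glTexStorage2D(GL_TEXTURE_2D, 10, GL_COMPRESSED_SRGB_ALPHA_BPTC_UNORM, 512, 512);
GLenum err = glGetError();  /* GL_INVALID_ENUM here would point at the internal format */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 512, GL_RGBA, GL_UNSIGNED_BYTE, image_buffer);
err = glGetError();         /* GL_INVALID_OPERATION here would point at the upload itself */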


glTexSubImage2D has some limitations when working with compressed formats.

Did you check that the command succeeded? (glGetError() returns GL_NO_ERROR)

 

glGetError returns GL_NO_ERROR after both functions and there is nothing from KHR_debug either.
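(For reference, the KHR_debug output here comes through a standard debug callback along these lines — a minimal sketch, assuming a debug context; it stays silent for both calls:)

static void APIENTRY debug_callback(GLenum source, GLenum type, GLuint id, GLenum severity,
                                    GLsizei length, const GLchar *message, const void *user)
{
    fprintf(stderr, "GL debug: %s\n", message);  /* nothing is reported for either call */
}

/* registered once after context creation */
glEnable(GL_DEBUG_OUTPUT);
glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
glDebugMessageCallback(debug_callback, NULL);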

 

Also, if I do this:

 

glTextureStorage2DEXT(texture, GL_TEXTURE_2D, 10, GL_SRGB8_ALPHA8, 512, 512);
glTextureSubImage2DEXT(texture, GL_TEXTURE_2D, 0, 0, 0, 512, 512, GL_RGBA, GL_UNSIGNED_BYTE, image_buffer);

 

My texture appears overexposed, but if I do this:

 

glTextureImage2DEXT(texture, GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, image_buffer);

 

It appears correctly.

 

Am I using TextureStorage incorrectly or have I uncovered yet another bug in AMD's drivers?


What happens when you use the non-DSA versions glTexSubImage2D() and glTexStorage2D()? Remember that you need to bind the texture for non-DSA functions.


What happens when you use the non-DSA versions glTexSubImage2D() and glTexStorage2D()? Remember that you need to bind the texture for non-DSA functions.

 

glBindTexture(GL_TEXTURE_2D, texture);
glTexStorage2D(GL_TEXTURE_2D, 10, GL_COMPRESSED_SRGB_ALPHA_BPTC_UNORM_ARB, 512, 512);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 512, GL_RGBA, GL_UNSIGNED_BYTE, image_buffer);

 

Produces a garbled image, same as the DSA version.

 

glBindTexture(GL_TEXTURE_2D, texture);
glTexStorage2D(GL_TEXTURE_2D, 10, GL_SRGB8_ALPHA8, 512, 512);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 512, GL_RGBA, GL_UNSIGNED_BYTE, image_buffer);

 

Produces an overexposed image, same as the DSA version.


If I recall correctly, there is no conversion from linear space to sRGB when updating textures, meaning that your source data must already be in sRGB space when you create sRGB textures. That probably explains your over-exposure.
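If your source data were linear, you would have to apply the sRGB encoding yourself before the upload; the driver won't do it for you. Roughly (a minimal sketch of the standard sRGB transfer function):

#include <math.h>

/* Encode one linear-light channel value in [0,1] to sRGB in [0,1]. */
static float linear_to_srgb(float c)
{
    return (c <= 0.0031308f) ? 12.92f * c
                             : 1.055f * powf(c, 1.0f / 2.4f) - 0.055f;
}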

 

Not sure why the BPTC doesn't work.


If I recall correctly, there is no conversion from linear space to sRGB when updating textures, meaning that your source data must already be in sRGB space when you create sRGB textures. That probably explains your over-exposure.

 

Not sure why the BPTC doesn't work.

 

I don't understand. The source data is in sRGB space. Why would my texture look right when using glTexImage2D but not glTexStorage2D?


Welp, must be bugged then. Time to report this to the AMD developers and start looking for an Nvidia graphics card.


AMD is supposed to return GL_INVALID_ENUM when you don't use a valid internal format, so I guess it's a bug.

 

glTextureStorage2D requires a sized internal format. In the spec, you can find a complete list of valid options. You're passing in a compressed format, and compressed formats generally lack the size information necessary for the allocation. I can't find anything suggesting GL_COMPRESSED_SRGB_ALPHA_BPTC_UNORM_ARB is a valid input for glTextureStorage2D either, so I'm pretty sure it's not valid.
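In other words, the distinction the spec draws is between sized and unsized enums (a minimal sketch; the second call is the kind that gets rejected):

glTexStorage2D(GL_TEXTURE_2D, 10, GL_RGBA8, 512, 512);  /* sized format: valid */
glTexStorage2D(GL_TEXTURE_2D, 10, GL_RGBA, 512, 512);   /* unsized format: GL_INVALID_ENUM */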


AMD is supposed to return GL_INVALID_ENUM when you don't use a valid internal format, so I guess it's a bug.

 

glTextureStorage2D requires a sized internal format. In the spec, you can find a complete list of valid options. You're passing in a compressed format, and compressed formats generally lack the size information necessary for the allocation. I can't find anything suggesting GL_COMPRESSED_SRGB_ALPHA_BPTC_UNORM_ARB is a valid input for glTextureStorage2D either, so I'm pretty sure it's not valid.

 

Does that mean that there is no way to create a compressed immutable texture in OpenGL? That would seem like a bit of an oversight. The size of a compressed texture is always constant (128 bits per 4x4 block in the case of BPTC). That also doesn't explain why an sRGB texture created with glTexStorage2D doesn't behave like an sRGB texture.
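For example, uploading pre-compressed blocks into that immutable storage has a perfectly well-defined size (a sketch, where bptc_blocks is assumed to hold data compressed offline):

/* 512x512 BPTC: (512/4) x (512/4) blocks at 16 bytes each = 262144 bytes */
GLsizei image_size = (512 / 4) * (512 / 4) * 16;
glCompressedTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 512,
                          GL_COMPRESSED_SRGB_ALPHA_BPTC_UNORM, image_size, bptc_blocks);  /* bptc_blocks: hypothetical buffer */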

 

EDIT: As stated here: http://www.khronos.org/opengles/sdk/docs/man3/xhtml/glTexStorage2D.xml

 

 


internalformat must be one of the sized internal formats given in Table 1, or one of the compressed internal formats given in Table 2 below.

 

I don't see any BPTC formats listed there, only ETC2 and EAC, which I think is really odd considering that BPTC is part of the core spec, same as ETC2 and EAC. So I guess the fact that GL_INVALID_ENUM isn't being generated is a bug. And there is still the matter of the sRGB texture.

 

EDIT2: I just noticed that page is for OpenGL ES. Looking at the opengl.org reference, none of those compressed formats are listed. So IDK, maybe ES supports some compressed formats and non-ES supports no compressed formats. How that would even be possible in light of ARB_ES2_compatibility and ARB_ES3_compatibility is beyond me. It wouldn't be the first time I spotted a mistake in the OpenGL registry (which is kind of annoying btw.)

 

EDIT3: As stated here: https://www.opengl.org/wiki/Texture_Storage

 

 


The internalformat parameter defines the Image Format to use for the texture. For the most part, any texture type can use any image format. Including the compressed formats. Note that these functions explicitly require the use of sized image formats. So GL_RGBA is not sufficient; you have to ask for a size, like GL_RGBA8.

 

Nothing but contradictions. This is absolutely bonkers!

Edited by Chris_F
