kpirolt

OpenGL: AMD bitmask not uploaded into texture


Hi, 

I have the source code and documentation of the problem available at https://bitbucket.org/justkarli/lookuptextureexample/src/master/

The problem

When uploading bitmask information into a floating-point texture, the bitmask value is either rounded to zero, clamped, or not written at all.

The case

We want to encode additional material information into a dedicated lookup texture. The values we want to encode are bitmasks or float values. For each material, its properties are stored in two 32-bit float pixels.

In the example provided, we encode the bitmask values either via reinterpret_cast or with glm::intBitsToFloat.
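
Condensed, the failing upload path looks roughly like this (a sketch only, not the exact repository code; width/height and the GL loader header are placeholders):

#include <vector>
#include <glm/glm.hpp>   // glm::intBitsToFloat
// ... plus whichever GL loader header the example uses (an assumption, e.g. GLEW or glad)

void uploadFloatLut(int width, int height)
{
    std::vector<float> lutInfos;
    lutInfos.push_back(glm::intBitsToFloat(0b1001)); // bitmask, stored bit-for-bit in a float
    lutInfos.push_back(3.8f);                        // ordinary float material property
    // ... filled up to width * height * 4 floats (two RGBA32F pixels per material)

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0,
                 GL_RGBA, GL_FLOAT, lutInfos.data());
}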

When we debug the provided example with RenderDoc and view the uploaded texture as a raw buffer, the bitmask values that were set on the CPU show up as 0.

(screenshot: amd_rawbuffer_view.png — RenderDoc raw buffer view on AMD)

Expected values for those 0 entries are:

main:252 lutInfos.push_back(reinterpret_cast<float&>(3));
main:258 lutInfos.push_back(glm::intBitsToFloat(11));
main:268 lutInfos.push_back(reinterpret_cast<float&>(0x1001));
main:273 lutInfos.push_back(glm::intBitsToFloat(7)); 
 

When viewing the example on an NVIDIA card (GTX 1060, GTX 1080), we cannot reproduce the issue: (screenshot: nvidia_rawbuffer_view.png)

The case when reading the bitmask

To avoid being deceived by RenderDoc's raw buffer and texture visualization, we added a branch in the shader that reads the bitmask via floatBitsToUint(sampledTexture.x) and chooses the color output depending on the value. Unfortunately, this confirms what RenderDoc shows, so it appears to be an AMD driver issue.
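
Roughly, the added check looks like this (shown here as a GLSL fragment embedded in a C++ string literal; apart from floatBitsToUint and sampledTexture, the identifiers are placeholders and not the exact shader from the repository):

// rough sketch of the debug branch in the fragment shader, as a C++ string literal;
// lutTexture, vTexCoord and outColor are placeholder names
const char* debugBranchGlsl = R"GLSL(
    vec4 sampledTexture = texture(lutTexture, vTexCoord);
    uint mask = floatBitsToUint(sampledTexture.x);
    // green if the uploaded bits survive, red if they arrive as zero
    outColor = (mask == 0u) ? vec4(1.0, 0.0, 0.0, 1.0)
                            : vec4(0.0, 1.0, 0.0, 1.0);
)GLSL";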

Tests

We tested & reproduced the error with the following AMD cards:

  • AMD Radeon RX Vega 64: Radeon Software Adrenalin 2019 Edition 19.3
  • AMD Radeon RX Vega 64: Radeon Software Adrenalin 2019 Edition 19.1.1 (25.20.15011.1004)
  • AMD Radeon RX Vega 64: default Windows driver
  • AMD Radeon RX 480: 25.20.15027.9004

Am I missing something? Are you able to reproduce the issue?

Cheers, karli


RGBA32F seems like a poor choice. If it's really a driver-dependent issue, then you should consider changing the texture's internal format and decoding the values in the shader; not to mention that it seems you've gone about this all wrong...

Define what you mean when you call it a lookup texture.

What do the additional bits represent? The position of a vertex? (That would seem pointless, since it should be a material definition.)

The way I see it, for each pixel you set 32 true/false flags that you cannot read back.

Or can you not even set these values and upload them to the GPU?

To me, "bitmask" is being misinterpreted here too; define what you mean by bitmask...

8 hours ago, kpirolt said:

When uploading bitmask information into a floating-point texture, the bitmask value is either rounded to zero, clamped, or not written at all.

The numbers in your example (3, 7, 11, 0x1001) are all denormalized float values when reinterpreted (smaller than the smallest normal floating-point value!). These kinds of floating-point values often trigger special-case fallback paths in hardware and cause extremely bad performance problems (CPUs becoming hundreds of times slower when fed denormal values is not unheard of!). So... it's also very common for hardware to have a "flush denormal to zero" mode that detects any of these values and replaces them with 0.0f instead.
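
You can confirm that on the CPU side with a few lines of standalone C++ (not code from your example):

// prints each bit pattern, its value when reinterpreted as a float, and its classification
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
    const std::uint32_t bits[] = { 3u, 7u, 11u, 0x1001u };
    for (std::uint32_t b : bits)
    {
        float f;
        std::memcpy(&f, &b, sizeof f); // bit-for-bit reinterpretation, no undefined behaviour
        std::printf("0x%08x -> %g (%s)\n", static_cast<unsigned>(b), f,
                    std::fpclassify(f) == FP_SUBNORMAL ? "subnormal" : "not subnormal");
    }
}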

It could be that the AMD driver has enabled the "flush denormal to zero" flag on your CPU, so that this happens to you automatically when writing those floats... or perhaps their GPUs have "flush denormal to zero" behavior internally.

To avoid the issue, I'd use a 32-bit integer texture to store your mixture of bitmasks and floats, instead of a 32-bit float texture.


@Hodgman Thank you, that makes sense. I'll give the 32-bit integer texture a try.

@_WeirdCat_ I'll provide more context then. The LUT contains additional material information for deferred material processing/evaluation. Our G-buffer stores the topmost material ID applied to a given pixel. With this ID we can do a lookup into the LUT without having to upload each and every material's texture data for a later pass. This information includes boolean values, such as whether direct diffuse/specular light should be applied to the material, or whether the environment map should be applied to the material when the indirect light pass is being rendered. These boolean values can be stored in a bitmask to save space. But we also store floating-point values, such as a material-dependent exposure value for SSR, and many others.
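
To illustrate what I mean by bitmask: the boolean properties end up packed roughly like this (the struct and flag names here are made up for the sketch, not our actual material definition):

// hypothetical flag packing, for illustration only
struct MaterialDesc { bool directDiffuse; bool directSpecular; bool useEnvMap; };

int packMaterialFlags(const MaterialDesc& m)
{
    int flags = 0;
    if (m.directDiffuse)  flags |= 1 << 0; // apply direct diffuse lighting
    if (m.directSpecular) flags |= 1 << 1; // apply direct specular lighting
    if (m.useEnvMap)      flags |= 1 << 2; // sample the environment map in the indirect pass
    return flags;
}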

This LUT gives us variable-length information about the materials (dependent on the number of materials used/applied in the scene). What other possibilities are there to encode/push this information to the GPU in OpenGL? UBOs need to be fixed in length, and recompiling the shader for every added/deleted material is not an option. SSBOs?
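
For reference, what I have in mind with an SSBO would be roughly this (an untested sketch; the binding point and names are arbitrary, and it assumes a GL 4.3 context plus the usual loader header):

#include <vector>

// untested sketch: upload the per-material data as an SSBO so its size can change
// whenever materials are added or removed
void uploadLutSsbo(const std::vector<float>& lutInfos, GLuint& lutSsbo)
{
    if (lutSsbo == 0)
        glGenBuffers(1, &lutSsbo);
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, lutSsbo);
    glBufferData(GL_SHADER_STORAGE_BUFFER,
                 lutInfos.size() * sizeof(float), // re-specified whenever the material count changes
                 lutInfos.data(), GL_DYNAMIC_DRAW);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, lutSsbo); // binding point 0
}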


Changing the texture format from RGBA32F to RGBA32I fixed the problem, thank you guys!

I'll update the code in the repository accordingly, but the problem with uploading bitmasks into floating-point textures on AMD still applies, and IMHO it's a bit strange, since this works on NVIDIA cards.

So, using an RGBA32I texture, I had to do the following:

std::vector<int> lutInfos;

// when generating the data, I just had to use the glm conversion functions
// to store the bits of a float in an int (glm::floatBitsToInt)
lutInfos.push_back(0b1001);                    // for bitmasks
lutInfos.push_back(glm::floatBitsToInt(3.8f)); // for floating point values

// generate the rgba32i texture
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32I, width, height, 0, GL_RGBA_INTEGER, GL_INT, lutInfos.data());

// in the shader I had to change the sampler type to isampler2D (usampler2D, if you have generated a uint32 rgba buffer)
uniform isampler2D lutTexture;

// and when we want to read a floating point value back, we have to decode the integer value accordingly
float fvalue = intBitsToFloat(lutTextureSample.w);

 

