OpenGL Black screen when sampling OpenGL texture on AMD graphics cards


My program uses QtOpenGL to draw a sphere and color it by sampling a texture in a single draw call. It works on Nvidia cards, but when I switch to AMD cards (my laptop and other laptops), it shows a black screen. (Note: it only fails with the AMD Catalyst Software Suite driver, but works with the AMD Radeon Software Crimson Edition Beta driver at this link.)
Here is the normal picture on Nvidia cards, and the buggy black picture on AMD cards.

 

Nvidia: Mcs77.png

AMD: utMHM.png
 
It seems to be a texture sampling bug (not a framebuffer bug), because OpenGL draws normally when I use a simple shading method (color = dot(vertexPixelNormal, lightDirection)), as in the following picture.

nvJcN.png
 
I use CodeXL from AMD for debugging, and when I click on the texture ID in the CodeXL property view, it shows exactly my image (does that mean my image was uploaded to the GPU successfully?). Here is the OpenGL call log.
Note: you can't see the function glTextureStorage2DEXT before glTextureSubImage2D in the log because CodeXL doesn't log glTextureStorage2DEXT, which is used by QtOpenGL. I debugged step by step and made sure this function is called.

kdzPK.jpg
Here are the texture properties from the CodeXL property view:

6evHf.jpg 
Here is the fragment shader

#version 150 core

uniform sampler2D matcapTexture;

vec3 matcapColor(vec3 eye, vec3 normal) 
{
  vec3 reflected = reflect(eye, normal);

  float m = 2.0 * sqrt(
    pow(reflected.x, 2.0) +
    pow(reflected.y, 2.0) +
    pow(reflected.z + 1.0, 2.0)
  );
  vec2 uv =  reflected.xy / m + 0.5;
  uv.y = 1.0 - uv.y;
  return texture(matcapTexture, uv).xyz;
}

in vec4 fragInput;  //vec4(eyePosition, depth)

void main()
{
  vec3 n = normalize(cross(dFdx(fragInput.xyz), dFdy(fragInput.xyz)));  // per-pixel normal from screen-space derivatives
  const vec3 LightDirection = vec3(0.0f, 0.0f, -1.0f);
  vec3 fragColor = matcapColor(LightDirection, n);
  gl_FragData[0] = vec4(fragColor.x, fragColor.y, fragColor.z, 1.0f);
}

I've spent several days on this bug but can't find any clues. I hope you guys can help me see what's incorrect here. Did I do something that AMD didn't expect?


Haven't looked in detail, but have you checked whether it is generating mipmaps? If you upload the texture and no mipmaps are generated, the texture will show as black if it tries to use them.
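Something along these lines might be worth trying right after the upload (just a sketch, with textureId standing in for whatever id QOpenGLTexture created):

glBindTexture(GL_TEXTURE_2D, textureId);
// Option A: generate the full mip pyramid from level 0
glGenerateMipmap(GL_TEXTURE_2D);
// Option B: use a minification filter that doesn't need mips at all
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);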


@lawnjelly: thanks for your answer. The flag GL_GENERATE_MIPMAP is false when I use CodeXL to debug, and I just use the basic GLSL function texture() for retrieving texels.


Have you run CodeXL / gDEBugger or similar on the laptop where the sphere is coming out black? That might help pin down the problem. I am assuming here that your CodeXL screenshot is from your development machine, where it is working...

 

If it isn't the mipmaps, then as you suggest it may be an issue with the texture getting created at all. I would maybe try calling glTexImage2D on first create instead of glTextureStorage2DEXT, just in case the latter isn't working. But really, getting CodeXL onto the laptop is the better first step, otherwise we are just guessing...
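Roughly like this (a sketch, assuming pixels points to your RGBA8 image data and width/height are its dimensions):

GLuint textureId;
glGenTextures(1, &textureId);
glBindTexture(GL_TEXTURE_2D, textureId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// allocate and upload level 0 in one call, instead of glTextureStorage2DEXT + glTextureSubImage2D
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);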

 

Also, the warning from CodeXL about the requested texture pixel format is notable. Maybe that is the problem too: your development machine could just be substituting another format, and the test machine is borking.

 

Maybe someone with more OpenGL knowledge can answer; I find these things difficult to debug too!  :lol:

Edited by lawnjelly


khanhhh89, what are the settings for GL_TEXTURE_MIN_FILTER?

Do you call glTexParameteri() for this parameter?

Because by default the minification filter is set to GL_NEAREST_MIPMAP_LINEAR, which means mipmapping is enabled, so the texture must have a complete mip pyramid, or GL_TEXTURE_MAX_LEVEL must be set according to the number of mip levels actually available.
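For a texture with only level 0 uploaded, that would look something like this (a sketch, assuming the texture is currently bound to GL_TEXTURE_2D):

// either switch to a filter that doesn't use mips...
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// ...or tell GL that level 0 is the only level that exists
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);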


lawnjelly:

- I ran CodeXL on the laptop which causes the problem, and I see the texture in the CodeXL explorer window. It's exactly the image I expect.

 

- For glTextureStorage2DEXT, I don't know how to switch to glTexImage2D, because I use QOpenGLTexture for binding and loading the texture. But do you think it is related to the problem? glTexImage2D is used to upload the image to the GPU, and I can already see the image on the GPU.

 

- For the warning from CodeXL about the requested format, I did try to investigate it by stepping into QOpenGLTexture. I see that the function glTextureStorage2DEXT uses the internal format GL_RGBA8, and the function glTextureSubImage2D uses the format GL_RGBA and the type GL_UNSIGNED_BYTE. So I don't see any conflict between these two functions, and I don't know why CodeXL shows this warning. Could you give me some other clues?


vstrakh: thanks for your suggestion. I checked GL_TEXTURE_MIN_FILTER again, and it is set to GL_LINEAR, like GL_TEXTURE_MAG_FILTER. You can see that in the OpenGL call log and the texture properties from CodeXL attached above. Do I need to set GL_TEXTURE_MAX_LEVEL? Could you help with some more clues?

Edited by khanhhh89

"You can see that in the OpenGL call log and the texture properties from CodeXL attached above"

 

I see you're calling glTextureParameteri(), while passing GL_TEXTURE_2D as first argument. That's totally wrong.

First of all, glTextureParameteri() is an extension for anything below GL 4.5, so that call might even crash on some systems.

Second, if you really need glTextureParameteri(), then pass the texture id as the first argument, not the texture target GL_TEXTURE_2D.

If you don't care about the glTextureParameteri() details, it's better to call glTexParameteri() while the texture is bound to GL_TEXTURE_2D. That function has been in core since the very beginning.
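Roughly the difference (a sketch, with textureId standing in for the actual texture object):

// DSA-style call: the first argument is the texture *id*, not a target
glTextureParameteri(textureId, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

// classic call: bind first, then pass the *target*
glBindTexture(GL_TEXTURE_2D, textureId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);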

Edited by vstrakh


If it doesn't turn out to be glTextureParameteri as vstrakh suggests ...

If the texture is fine on the laptop and it is not using black mipmaps...

 

Then personally I'd next double-check that it really is the texture sampling that returns black, and that there isn't a problem elsewhere in your shaders:

gl_FragColor = vec4(texture2D (matcapTexture, uv).rgb, 1.0);

Or possibly hard-coding the UVs as well, since you are calculating them:

gl_FragColor = vec4(texture2D (matcapTexture, vec2(0.5, 0.5)).rgb, 1.0);

If that is returning black and the texture is white, there must be something else going on (some other test maybe?)

 

I am also assuming that you are checking OpenGL for errors regularly (Qt probably does this for you) with glGetError(). Aside from this I am running out of ideas lol..  :D
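e.g. something quick and dirty you could sprinkle around the texture setup and the draw call (just a sketch):

// drain and print any pending GL errors
for (GLenum err; (err = glGetError()) != GL_NO_ERROR; )
    printf("GL error: 0x%x\n", err);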


Try changing:

float m = 2.0 * sqrt(
pow(reflected.x, 2.0) +
pow(reflected.y, 2.0) +
pow(reflected.z + 1.0, 2.0)
);

to:

float m = 2.0 * sqrt(reflected.x * reflected.x + reflected.y * reflected.y + ((reflected.z + 1.0) *  (reflected.z + 1.0)));

The pow function is undefined when the first parameter is negative. Compilers don't always convert pow(x, 2.0) to (x * x).
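As an aside, the same quantity can be written without pow at all, which sidesteps the issue entirely, since sqrt(x^2 + y^2 + (z+1)^2) is just the length of reflected + (0, 0, 1):

// equivalent, pow-free form
float m = 2.0 * length(reflected + vec3(0.0, 0.0, 1.0));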

 


vstrakh: "I see you're calling glTextureParameteri(), while passing GL_TEXTURE_2D as first argument. That's totally wrong."

Is it wrong? I do bind the texture before calling glTextureParameteri. I use the function QOpenGLTexture::setMinMagFilters, which automatically binds the texture ID to GL_TEXTURE_2D.

 


lawnjelly: I did the test using gl_FragColor = vec4(texture2D(matcapTexture, vec2(0.5, 0.5)).rgb, 1.0); and it still shows a black screen. I also set CodeXL to break when there are OpenGL errors, and it doesn't break anywhere. Could you suggest some other tests?
JohnnyCode: I set up the texture binding using three functions: glActiveTexture(GL_TEXTURE0) ==> glBindTexture(GL_TEXTURE_2D) ==> glUniform1i(3, 0), where 3 is the location of the texture sampler in the shader program. You can see more details in the attached image.
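In plain GL calls that sequence looks roughly like this (a sketch, with programId/textureId standing in for the real ids; the hard-coded location 3 is from my program):

glUseProgram(programId);
glActiveTexture(GL_TEXTURE0);               // select texture unit 0
glBindTexture(GL_TEXTURE_2D, textureId);    // bind the matcap texture to unit 0
glUniform1i(3, 0);                          // sampler uniform at location 3 <- unit 0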

 

C0lumbo: thanks for your advice. I didn't know that pow needs the base parameter to be positive. But when I changed the code as you said, it still shows a black screen. I don't think that's the problem, because I already tried using fixed texture coordinates for sampling the texture earlier.

Edited by khanhhh89

"Is it wrong? I do bind the texture before calling glTextureParameteri. I use the function QOpenGLTexture::setMinMagFilters"

 

Mmm... sorry, I was probably misled by something. Why did I even think it wasn't a texture id?..

Edited by vstrakh


JohnnyCode: "How do you bind your texture before the draw call? How do you set up the sampler's uniform variable?"

I think JohnnyCode is likely in the right area.
To summarise: the texture is created OK and looks right, but it comes up black when rendered with a simple shader, and hard-coding the colour in the shader instead of accessing the texture works...

 

This does seem to suggest it is likely something simple with the shader setup (is the sampler location hard-coded, for example? AFAIK you should use glGetUniformLocation, as the location may change). Maybe try some 'hello world' texture-rendering example code on the test machine. If it works, then go through the example code and your code, make sure you are doing all the necessary stages, and perhaps turn off everything else in your code apart from the minimum necessary.
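For example, rather than assuming location 3 (a sketch, with programId as the shader program and "matcapTexture" as the sampler name from your fragment shader):

GLint loc = glGetUniformLocation(programId, "matcapTexture");
// loc == -1 means the name is wrong or the uniform was optimised out
glUniform1i(loc, 0);   // point the sampler at texture unit 0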

 

It does sound like one of those difficult problems to debug, but it is good practice in this process of deduction and binary-searching to narrow down the possibilities. It is kind of fun!  :D And you will kick yourself when you find the answer... lol
