OpenGL Black screen when sampling OpenGL texture on AMD graphics cards


Recommended Posts

My program uses QtOpenGL to draw a sphere and color it by sampling a texture in a single draw call. It works on Nvidia cards, but when I switch to AMD cards (my laptop and other laptops), it shows a black screen. (Note: it only fails with the AMD Catalyst Software Suite driver, but works with the AMD Radeon Software Crimson Edition Beta driver at this link.)
Here is the normal picture on Nvidia cards, and the black bug picture on AMD cards.

 

Nvidia

Mcs77.png

 

AMD

utMHM.png
 
It seems to be a texture sampling bug (not a framebuffer bug), because OpenGL draws normally when I use a simple shading method (color = dot(vertexPixelNormal, lightDirection)), as in the following picture.

nvJcN.png
 
I use CodeXL from AMD for debugging, and when I click on the texture ID in the CodeXL property view, it shows exactly my image (does that mean my image was uploaded to the GPU successfully?). Here is the OpenGL call log.
Note: You can't see the function glTextureStorage2DEXT before glTextureSubImage2D in the log because CodeXL doesn't log glTextureStorage2DEXT, which is used by QtOpenGL. I stepped through the code and confirmed that this function is called.

kdzPK.jpg
 Here is the texture property from CodeXL property view

6evHf.jpg 
Here is the fragment shader

#version 150 core

uniform sampler2D matcapTexture;

vec3 matcapColor(vec3 eye, vec3 normal) 
{
  vec3 reflected = reflect(eye, normal);

  float m = 2.0 * sqrt(
    pow(reflected.x, 2.0) +
    pow(reflected.y, 2.0) +
    pow(reflected.z + 1.0, 2.0)
  );
  vec2 uv = reflected.xy / m + 0.5;
  uv.y = 1.0 - uv.y;
  return texture(matcapTexture, uv).xyz;
}

in vec4 fragInput;  //vec4(eyePosition, depth)

void main()
{
  vec3 n = normalize(cross(dFdx(fragInput.xyz), dFdy(fragInput.xyz)));  // calculate per-pixel normal via dFdx/dFdy
  const vec3 LightDirection = vec3(0.0f, 0.0f, -1.0f);
  vec3 fragColor = matcapColor(LightDirection, n);
  gl_FragData[0] = vec4(fragColor.x, fragColor.y, fragColor.z, 1.0f);
}

I have spent several days on this bug but can't find any clues. I hope you guys can help me spot what's incorrect here. Did I do something that AMD didn't expect?
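For what it's worth, the UV math in matcapColor can be checked on the CPU independently of any driver. Below is a small C++ port of the shader's UV computation (the `Vec2`/`Vec3` structs and the `matcapUV` helper are hypothetical names introduced just for this sketch), useful for ruling the math itself in or out:

```cpp
#include <cmath>

// Minimal stand-ins for the GLSL vec types used by the shader above.
struct Vec3 { float x, y, z; };
struct Vec2 { float x, y; };

// Mirrors matcapColor's UV computation: reflect the eye vector about the
// normal, then map reflected.xy into [0,1] exactly as the shader does,
// including the final y-flip.
Vec2 matcapUV(Vec3 eye, Vec3 normal) {
    float d = eye.x * normal.x + eye.y * normal.y + eye.z * normal.z;
    Vec3 r = { eye.x - 2.0f * d * normal.x,
               eye.y - 2.0f * d * normal.y,
               eye.z - 2.0f * d * normal.z };
    float m = 2.0f * std::sqrt(r.x * r.x + r.y * r.y + (r.z + 1.0f) * (r.z + 1.0f));
    Vec2 uv = { r.x / m + 0.5f, r.y / m + 0.5f };
    uv.y = 1.0f - uv.y;
    return uv;
}
```

For example, with eye = (0, 0, -1) and normal = (0, 0, 1) the reflected vector points straight back and the UV lands at the texture center (0.5, 0.5), which is what a matcap lookup should do for a surface facing the camera.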


Haven't looked in detail, but have you checked whether it is generating mipmaps? If you upload the texture and no mipmaps are generated, the texture will show as black whenever sampling tries to use them.
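Two common ways to rule this out, as a minimal sketch (assuming a core-profile context and that the texture in question is currently bound to GL_TEXTURE_2D):

```cpp
// Option 1: build the full mip chain from the level-0 image that was
// already uploaded.
glGenerateMipmap(GL_TEXTURE_2D);

// Option 2: turn mipmapped minification off entirely, so sampling
// never touches the (possibly missing) smaller levels.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
```

Either one makes the texture "complete" from the sampler's point of view; if the black screen disappears, missing mip levels were the cause.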


Have you run CodeXL / gDEBugger or similar on the laptop where the sphere is coming out black? That might help pin down the problem. I am assuming here that your CodeXL screenshot is from your development machine, where it is working...

 

If it isn't the mipmaps, then as you suggest it may be an issue with the texture not getting created at all. I would try calling glTexImage2D on first create instead of glTextureStorage2DEXT, just in case the latter isn't working. But really, getting CodeXL onto the laptop is the better first step; otherwise we are just guessing...

 

Also, the warning about the requested texture pixel format is notable. Maybe that is the problem: your development machine could be silently substituting another format, and the test machine is borking.

 

Maybe someone with more OpenGL knowledge can answer, I find these things difficult to debug too!  :lol: 

Edited by lawnjelly


khanhhh89, what are the settings for GL_TEXTURE_MIN_FILTER?

Do you call glTexParameteri() for this parameter?

By default the minifying filter is set to GL_NEAREST_MIPMAP_LINEAR, which means mipmapping is enabled, so the texture must have a complete mip pyramid, or GL_TEXTURE_MAX_LEVEL must be set according to the number of mip levels actually available.
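For reference, a complete pyramid for a W×H texture has 1 + floor(log2(max(W, H))) levels. The helper below (`mipLevelCount` is a hypothetical name for this sketch) computes that count, with the GL_TEXTURE_MAX_LEVEL clamp shown as a comment:

```cpp
#include <algorithm>

// Levels in a complete mip pyramid: level 0 is the full image and each
// successive level halves the larger dimension until it reaches 1x1.
int mipLevelCount(int width, int height) {
    int maxDim = std::max(width, height);
    int levels = 1;
    while (maxDim > 1) {
        maxDim /= 2;
        ++levels;
    }
    return levels;
}

// If only `n` levels were actually uploaded, cap the sampler so it never
// reads past them:
//   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, n - 1);
```

So a 256x256 texture needs 9 levels (256 down to 1); if only level 0 was uploaded and the min filter still references mips, the texture is incomplete and samples as black on conformant drivers.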


lawnjelly:

- I ran CodeXL on the laptop that causes the problem, and I see the texture shown in the CodeXL explorer window; it's exactly the image I expect.

 

- For glTextureStorage2DEXT: I don't know how to switch to glTexImage2D, because I use QOpenGLTexture for binding and loading the texture. But do you think it relates to the problem? glTexImage2D is used to upload the image to the GPU, and I do see the image on the GPU.

 

- For the warning from CodeXL about the requested format: I did try to investigate it by stepping into QOpenGLTexture. I see that glTextureStorage2DEXT uses the internal format GL_RGBA8, and glTextureSubImage2D uses the format GL_RGBA with the type GL_UNSIGNED_BYTE, so I don't see any conflict between these two calls. I don't know why CodeXL shows this warning. Could you give me some other clues?


vstrakh: thanks for your suggestion. I checked GL_TEXTURE_MIN_FILTER again, and it is set to GL_LINEAR, like GL_TEXTURE_MAG_FILTER. You can see that in the OpenGL call log and the texture properties from CodeXL I attached above. Do I need to set GL_TEXTURE_MAX_LEVEL? Could you help with some more clues?

Edited by khanhhh89

You can see that in the OpenGL call log and the texture property from CodeXL I attached above

 

I see you're calling glTextureParameteri() while passing GL_TEXTURE_2D as the first argument. That's totally wrong.

First of all, glTextureParameteri() is an extension on anything below GL 4.5, so that call might even crash on some systems.

Second, if you really need glTextureParameteri(), then pass the texture id as the first argument, not the texture target GL_TEXTURE_2D.

If you don't care about the glTextureParameteri() details, it's better to call glTexParameteri() while the texture is bound to GL_TEXTURE_2D. That function has been in core since the very beginning.
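To make the two call styles concrete, here is a short sketch (assuming `tex` holds a valid texture id created elsewhere, e.g. via glGenTextures):

```cpp
// Classic style (core since OpenGL 1.0): bind the texture, then pass
// the *target* (GL_TEXTURE_2D) to glTexParameteri.
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

// Direct State Access style (GL 4.5, or the EXT_direct_state_access
// extension): pass the *texture id*, not the target; no bind needed.
glTextureParameteri(tex, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
```

Passing GL_TEXTURE_2D (the enum value 0x0DE1) where a texture id is expected silently targets whatever texture happens to have that id, which is exactly the kind of bug that "works" on one driver and fails on another.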

Edited by vstrakh


If it doesn't turn out to be glTextureParameteri as vstrakh suggests ...

If the texture is fine on the laptop and it is not using black mipmaps...

 

Then personally I'd next double-check whether the texture really is returning black and there isn't a problem in your shaders:

gl_FragColor = vec4(texture2D (matcapTexture, uv).rgb, 1.0);

Or possibly hard-code the UVs, since you are calculating them too:

gl_FragColor = vec4(texture2D (matcapTexture, vec2(0.5, 0.5)).rgb, 1.0);

If that is returning black and the texture is white, there must be something else going on (some other test, maybe?).

 

I am also assuming that you are checking OpenGL for errors regularly with glGetError() (Qt probably does this for you). Aside from this I am running out of ideas lol..  :D

Share this post


Link to post
Share on other sites

Try changing:

float m = 2.0 * sqrt(
  pow(reflected.x, 2.0) +
  pow(reflected.y, 2.0) +
  pow(reflected.z + 1.0, 2.0)
);

to:

float m = 2.0 * sqrt(reflected.x * reflected.x + reflected.y * reflected.y + ((reflected.z + 1.0) *  (reflected.z + 1.0)));

The pow function is undefined when the first parameter is negative. Compilers don't always convert pow(x, 2.0) to (x * x).
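The rewrite can be sanity-checked on the CPU with a small stand-in for the shader expression (`matcapM` is a hypothetical helper name for this sketch); only multiplications are used, so negative components are safe:

```cpp
#include <cmath>

// GLSL-safe matcap denominator: m = 2 * sqrt(x^2 + y^2 + (z+1)^2),
// with the squares written as multiplications instead of pow(), since
// GLSL's pow() is undefined for a negative base.
float matcapM(float x, float y, float z) {
    return 2.0f * std::sqrt(x * x + y * y + (z + 1.0f) * (z + 1.0f));
}
```

In GLSL, pow(x, y) is typically implemented as exp2(y * log2(x)), and log2 of a negative number is undefined; a driver that returns NaN there would propagate NaN through the UV and could well sample black, which fits the AMD-only symptom.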

 
