OpenGL ComputeShader Performance / Crashes

Made a (looong) GLSL ComputeShader for Tiled-Deferred rendering. On my laptop with a 2013 nVidia graphics card it works fine. But now I'm sending the program to some other guys, and as you know, that's always where the headaches start :) I can't debug on their machines or anything, only guess. I need your experience or guessing-powers to give me some directions!

 

 

Guy1 had an nVidia card, not too old, but certainly not new either. The video card driver hung / crashed when running this particular ComputeShader. Disabling the loops "fixed" it:

#version 430
#define TILE_SIZE 32
layout (local_size_x = TILE_SIZE, local_size_y = TILE_SIZE) in;
 
shared uint _indxLightsPoint[ MAXLIST_LIGHTS_POINT ];   // Found PointLights, indexes to UBO lightArray
 
...
// 1. Let each pixel inside a tile check ONE pointlight, see if it intersects the tile-frustum. If so, add it to a shared list
uint thrID = gl_LocalInvocationID.x + gl_LocalInvocationID.y * TILE_SIZE;  // Each task gets a number (0,1,2, ... 1023)
 
if ( thrID < counts1.x ) { // "counts1" comes from a UBO parameter. Would be "2" if there were 2 active lights in the scene
   if ( pointLightIntersects( tileFrustum, lightPoint[ thrID ].posRange ) ) {
      // Add lightIndex to list
      uint index = atomicAdd( _cntLightsPoint, 1 );
      if ( index < MAXLIST_LIGHTS_POINT ) 
           _indxLightsPoint[ index ] = thrID;
   }
}
 
...
barrier();
...
 
// 2. Loop through the lights we found
 
for (uint i=0; i < _cntLightsPoint; i++) {
    uint index = _indxLightsPoint[i];
    addPointLight( brdf, surf, lightPoint[index] );
} // for

Compiles, starts, hangs the video driver. If I simplify all this code to a fixed "addPointLight( ... lightPoint[ 0 ] )", it works, and a damn lot faster as well (even though I only had 1 or 2 lights in the scene anyway). If I re-enable "barrier" or some of the atomic operations, the FPS crumbles again. My first thought was that the FOR loop went crazy, counting to an extremely high number. But even if I put a hard-coded number there, it still crashes. The other suspect might be an out-of-range array read, but I can't see how.

 

Could it be that "older" cards (2010..2012) have issues with (GLSL) barriers or atomic operations? Or maybe the hard-coded tile size (32x32) is too big? Although I would expect a compiler crash in that case.

 

Guy1 now has a new AMD card, but it seems it doesn't support some OpenGL 4.5.0 features (even though all shaders use #version 430). Got stranded after that.

 

 

 

Guy2 had a 2011 nVidia card, I don't know which one exactly. Everything works, but the graphics seem more blurry (anisotropic / mipmapping settings?). Moreover, the framerate is horrible. Mine is ~50..60 FPS at a larger resolution, his is 5. I expected a drop, but not that much. As usual there could be a billion things wrong, but my main suspects are:

 

- ComputeShader setup (tilesize 32x32 too big)

- ComputeShader operations (atomicAdd / atomicMin / atomicMax / FOR LOOP / Barrier )

- I assume 24+ texture units are available (i.e. "layout(binding=20) uniform sampler2D gBufferXYZ;"). I know older cards only have 16 or so, but again I would expect a crash then. (A small query sketch to verify these limits follows this list.)

- Not using glMemoryBarrier( GL_ALL_BARRIER_BITS ) (properly), prior to or after dispatching the CS
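Something along these lines could be run on their machines to print the limits the first and third suspects depend on (a rough sketch; the helper name and logging are invented, only the glGetIntegerv calls matter):

// Rough sanity-check sketch (hypothetical helper, not engine code): print the compute
// limits that the 32x32 tile size and the "binding = 20" samplers rely on.
#include <cstdio>
#include <GL/glew.h>   // or whichever GL loader the engine already uses

static void printComputeLimits()
{
    GLint maxInvocations = 0, maxSharedBytes = 0, maxTexUnits = 0;
    GLint maxLocal[3]    = { 0, 0, 0 };

    glGetIntegerv( GL_MAX_COMPUTE_WORK_GROUP_INVOCATIONS, &maxInvocations ); // 32*32 = 1024 must fit in here
    glGetIntegerv( GL_MAX_COMPUTE_SHARED_MEMORY_SIZE,      &maxSharedBytes ); // the shared light lists must fit in here
    glGetIntegerv( GL_MAX_COMPUTE_TEXTURE_IMAGE_UNITS,     &maxTexUnits );    // "binding = 20" needs at least 21 of these
    for ( int i = 0; i < 3; ++i )
        glGetIntegeri_v( GL_MAX_COMPUTE_WORK_GROUP_SIZE, i, &maxLocal[ i ] ); // per-axis local_size limits

    printf( "max invocations per workgroup : %d\n", maxInvocations );
    printf( "max shared memory (bytes)     : %d\n", maxSharedBytes );
    printf( "max compute texture units     : %d\n", maxTexUnits );
    printf( "max local size                : %d x %d x %d\n", maxLocal[ 0 ], maxLocal[ 1 ], maxLocal[ 2 ] );
}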

 

 

My gut says to replace the ComputeShader with good old fragment shaders and such. Then again, it works fine on my own computer, and since it's quite a job to change, it would suck if something completely different turns out to be the party-crasher.

 

Ciao!


You modify shared memory (_indxLightsPoint[ index ] = thrID),
you do a barrier, but you forget to do a memory barrier on shared memory as well.
You read shared memory (index = _indxLightsPoint[i]), but it is not guaranteed that all threads see the expected thrID.

Maybe that's it. I'd not give up so soon, because you have no shared memory in fragment shaders.
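Roughly the pattern I mean, as a tiny stand-alone compute shader (just a sketch, every name is made up and the dummy visibility test stands in for your frustum check; it's written as a C++ string so it can go straight into glShaderSource):

// Minimal sketch of the fill-then-read pattern on a shared list (made-up names, not
// your actual shader). The important part: memoryBarrierShared() next to barrier(),
// both after the counter reset and after the atomicAdd phase, before anyone reads.
static const char* kLightListSketch = R"GLSL(
#version 430
layout( local_size_x = 16, local_size_y = 16 ) in;

#define MAX_LIGHTS 64
shared uint sCount;                // number of lights that passed the test
shared uint sIndex[ MAX_LIGHTS ];  // their indices

uniform uint uLightCount;

void main()
{
    uint thrID = gl_LocalInvocationIndex;

    // reset the shared counter once, then fence + sync before anyone touches it
    if ( thrID == 0u ) sCount = 0u;
    memoryBarrierShared();
    barrier();

    // phase 1: each invocation tests one light and may append its index
    if ( thrID < uLightCount )
    {
        bool visible = ( thrID & 1u ) == 0u;    // stand-in for pointLightIntersects()
        if ( visible )
        {
            uint slot = atomicAdd( sCount, 1u );
            if ( slot < MAX_LIGHTS )
                sIndex[ slot ] = thrID;
        }
    }

    // make the shared writes visible AND wait until every invocation got here
    memoryBarrierShared();
    barrier();

    // phase 2: now every invocation can safely walk the list
    uint n = min( sCount, uint( MAX_LIGHTS ) );
    for ( uint i = 0u; i < n; ++i )
    {
        uint lightIndex = sIndex[ i ];
        // ... accumulate lighting for lightIndex ...
    }
}
)GLSL";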
Personally I gave up on OpenGL compute shaders because OpenCL was two times faster on Nvidia and slightly faster on AMD 1-2 years ago.

Edit:
For me it was absolutely necessary to stop the compiler from unrolling loops (forgot the command).
The compiler had no problem unrolling loops with > 1000 iterations :)

>> ComputeShader setup (tilesize 32x32 too big)


It's always worth trying out; different hardware, different results. I'd assume 8*8 or 16*16 is better than 32*32.
On OpenCL the maximum workgroup size for ATI is 512, but the OpenGL spec requires a minimum of 1024, so I guess it's a slowdown for ATI to sync 1024 threads.
The hardware minimum (wavefront / warp size) is 64 for ATI and 32 for NV. So in practice choose 64, 128 or 256, depending mostly on register usage.
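To make that painless to try, the tile size can be injected into the GLSL source before compiling (rough sketch, the helper below is invented):

// Rough sketch (invented helper, not from your engine): prepend TILE_SIZE to the GLSL
// source so each machine can be tested with 8, 16 or 32 without editing the shader.
#include <string>
#include <GL/glew.h>

static GLuint compileTiledCS( const std::string& bodyAfterVersion, int tileSize )
{
    // bodyAfterVersion = the shader text minus its #version and #define TILE_SIZE lines
    std::string src = "#version 430\n";
    src += "#define TILE_SIZE " + std::to_string( tileSize ) + "\n";
    src += bodyAfterVersion;    // uses layout(local_size_x = TILE_SIZE, local_size_y = TILE_SIZE) in;

    GLuint cs = glCreateShader( GL_COMPUTE_SHADER );
    const char* ptr = src.c_str();
    glShaderSource( cs, 1, &ptr, nullptr );
    glCompileShader( cs );

    GLint ok = GL_FALSE;
    glGetShaderiv( cs, GL_COMPILE_STATUS, &ok );
    if ( !ok ) { /* dump glGetShaderInfoLog here */ glDeleteShader( cs ); return 0; }
    return cs;
}

The dispatch then becomes glDispatchCompute( (width + tileSize - 1) / tileSize, (height + tileSize - 1) / tileSize, 1 ).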


Thanks for taking the time to wrestle through my code pieces, Joe!

 

>> you forget to do a memory barrier on shared memory as well.

All right. Adding "memoryBarrierShared()" in addition to "barrier()" would do the job (to ensure the index-array is done filling before starting the second half)?

 

Btw, besides crashes, is it possible that bad/lacking usage of the barrier as suggested can cause such a huge slowdown? Like I said, on my computer all seems fine, and on another machine it also runs as expected, just very slowly.

 

 

>> because OpenCL was two times faster on Nvidia and slightly faster on AMD 1-2 years ago

Now that concerns me. Especially because I used OpenCL before, removed it completely from the engine, and swapped it for OpenGL compute shaders (easier integration, more consistency)... Doh!

 

Is it safe to assume that modern/future cards will overcome these performance issues? Otherwise I can turn my Deferred Rendering approach back to an "old" additive style. Does anyone have experience on whether Tiled Deferred Rendering is that much of a win? And then I'm talking about indoor scenes which have relatively many lights, but certainly not hundreds or thousands.

 

The crappy part is that I'm adapting code to support older cards now, even though I'm far away from a release. So maybe I shouldn't put too much energy into that and just bet on future hardware.

 

 

>> Unroll

I suppose that can't happen if the size isn't hardcoded (counts.x comes from an outside (CPU) variable)?

 

 

Well, let's try the shared-barrier, different workgroup size, and avoiding unrolling. And see if these video-cards start smiling... But I'm afraid not hehe.


>> you forget to do a memory barrier on shared memory as well.
>> All right. Adding "memoryBarrierShared()" in addition to "barrier()" would do the job (to ensure the index-array is done filling before starting the second half)?
>> Btw, besides crashes, is it possible that bad/lacking usage of the barrier as suggested can cause such a huge slowdown? Like I said, on my computer all seems fine, another one works as expected as well, but just very slow.


Yes, barrier() syncs only the code flow, so you need the memory barriers to ensure all writes are visible as well.
This could e.g. produce a random, huge number of lights and slow down / lock up the driver (but that seems not possible in your code).

On the CPU side, when you need to be sure a shader is done, the only thing that worked for me was using a fence.
glMemoryBarrier() or similar alone was not enough. I've had the feeling this was an Nvidia driver bug.
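Roughly what that looked like on my side (a sketch from memory, the program handle and tile counts are placeholders):

// Sketch of the dispatch + fence pattern described above (not exact engine code).
#include <GL/glew.h>

static void dispatchAndWait( GLuint computeProgram, GLuint tilesX, GLuint tilesY )
{
    glUseProgram( computeProgram );
    glDispatchCompute( tilesX, tilesY, 1 );

    // make the shader's writes visible to later GL commands ...
    glMemoryBarrier( GL_ALL_BARRIER_BITS );

    // ... and when the CPU really must know the shader has finished, block on a fence
    GLsync fence = glFenceSync( GL_SYNC_GPU_COMMANDS_COMPLETE, 0 );
    glClientWaitSync( fence, GL_SYNC_FLUSH_COMMANDS_BIT, 5000000000ull );   // timeout in nanoseconds (5 s)
    glDeleteSync( fence );
}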

On AMD there was the problem that I had to remove all deprecated GL functions (like glVertex).
Otherwise the compute shader produced wrong results. Checking GL errors helped to find those functions.



>> Is it safe to assume that modern/future cards will overcome these performance issues? Otherwise I can turn my Deferred Rendering approach back to an "old" additive style. Does anyone have experience on whether Tiled Deferred Rendering is that much of a win?


I don't know if NV has improved their drivers. But they do support CL 1.2 now, with better OpenGL sharing.
I'd give it a try again to compare performance and troubles.
And looking at what OpenCL 2.0 can do: the GPU can launch its own work, without those costly CPU <-> GPU round trips just to read back a number and start another kernel... that's exactly what we need.
(I don't know how DX12 or Vulkan can / will compete here)

Compute is worth it if you have an algorithm that can be made to profit from shared memory.
If you read the same stuff from global memory more than once, you can cache that data in shared memory.
You can build acceleration structures in shared memory to avoid the typical fragment shader brute force crap, etc...
Just bang your head against the wall long enough until you get an idea how to make use of it :)
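The classic example of that (a generic sketch, made-up names, nothing engine-specific): every invocation loads one light from an SSBO into shared memory once, then the whole group loops over the cached copy instead of re-reading global memory.

// Generic shared-memory caching sketch, again as a GLSL string with invented names.
static const char* kSharedCacheSketch = R"GLSL(
#version 430
layout( local_size_x = 64 ) in;

struct Light { vec4 posRange; vec4 color; };

layout( std430, binding = 0 ) buffer Lights { Light lights[]; };
uniform uint uLightCount;

shared Light sLights[ 64 ];   // one batch of lights, cached per workgroup

void main()
{
    uint thrID = gl_LocalInvocationIndex;

    // cooperative load: one global read per invocation
    if ( thrID < uLightCount )
        sLights[ thrID ] = lights[ thrID ];

    memoryBarrierShared();
    barrier();

    // every invocation now loops over the cached copy instead of hitting the SSBO again
    uint n = min( uLightCount, 64u );
    for ( uint i = 0u; i < n; ++i )
    {
        // ... shade with sLights[ i ] ...
    }
}
)GLSL";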

For the unroll you're right. (Even for small loops, disabling unrolling is sometimes a win.)

A GPU profiler is extremely helpful ("Nsight"?).
