
I'm confused about how to use the OpenGL glMemoryBarrier API: I don't know when it is needed, or which barrier bit to pass. I've listed some cases below; please check whether each one uses glMemoryBarrier correctly. Thank you.

case 1: A compute shader writes to "texture" with imageStore, then "texture" is attached to an FBO and read back with glReadPixels. Since imageStore is an incoherent memory access, a glMemoryBarrier is needed; and because glReadPixels reads from a framebuffer attachment, GL_FRAMEBUFFER_BARRIER_BIT should make the imageStore writes visible. Is that right?
...
glBindImageTexture(0, texture, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_R32UI);
glDispatchCompute(1, 1, 1);
glMemoryBarrier(GL_FRAMEBUFFER_BARRIER_BIT);
...
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
glReadPixels(0, 0, kWidth, kHeight, GL_RED_INTEGER, GL_UNSIGNED_INT, outputValues);
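For reference, the compute shader in this case looks roughly like this (a sketch; the shader source wasn't in my snippet, so the uniform name and the value written are illustrative):

```glsl
#version 310 es
layout(local_size_x = 1, local_size_y = 1) in;
layout(r32ui, binding = 0) writeonly uniform highp uimage2D uImage;

void main()
{
    // Incoherent image write that glReadPixels later needs to observe.
    imageStore(uImage, ivec2(gl_GlobalInvocationID.xy), uvec4(42u, 0u, 0u, 0u));
}
```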

case 2: The first glDispatchCompute reads "texture[0]" and writes "texture[1]"; the second glDispatchCompute reads "texture[1]" and writes "texture[2]". Is a glMemoryBarrier necessary between the two glDispatchCompute calls, and is GL_SHADER_IMAGE_ACCESS_BARRIER_BIT the correct bit here?
...
glBindImageTexture(0, texture[0], 0, GL_FALSE, 0, GL_READ_ONLY, GL_R32UI);
glBindImageTexture(1, texture[1], 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_R32UI);
glDispatchCompute(1, 1, 1);
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
glBindImageTexture(0, texture[1], 0, GL_FALSE, 0, GL_READ_ONLY, GL_R32UI);
glBindImageTexture(1, texture[2], 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_R32UI);
glDispatchCompute(1, 1, 1);
...
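The shader for each pass is essentially a copy between the two bound image units, something like this (a sketch; names and the transform applied are illustrative):

```glsl
#version 310 es
layout(local_size_x = 1) in;
layout(r32ui, binding = 0) readonly uniform highp uimage2D uSrc;
layout(r32ui, binding = 1) writeonly uniform highp uimage2D uDst;

void main()
{
    ivec2 coord = ivec2(gl_GlobalInvocationID.xy);
    // Read the source texel and write it (transformed) to the destination.
    uvec4 value = imageLoad(uSrc, coord);
    imageStore(uDst, coord, value + uvec4(1u, 0u, 0u, 0u));
}
```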

case 3: Mixing the compute and graphics pipelines. A compute pass first writes to "texture", then the graphics pipeline samples "texture" and draws to the framebuffer. Is a glMemoryBarrier necessary between glDispatchCompute and glDrawArrays, and is GL_TEXTURE_FETCH_BARRIER_BIT the correct bit here?
...
glBindTexture(GL_TEXTURE_2D, texture);
...
glUseProgram(csProgram);
glBindImageTexture(0, texture, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA32F);
glDispatchCompute(1, 1, 1);
glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT);
...
glUseProgram(program);
...
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
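The consuming side in this case is a fragment shader doing a plain texture fetch, roughly like this (a sketch; varying and uniform names are illustrative):

```glsl
#version 310 es
precision highp float;
uniform sampler2D uTexture; // the texture the compute pass wrote via imageStore
in vec2 vTexCoord;
out vec4 fragColor;

void main()
{
    // This sample is the "texture fetch" that GL_TEXTURE_FETCH_BARRIER_BIT guards.
    fragColor = texture(uTexture, vTexCoord);
}
```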

case 4: A compute shader increments atomic counter variables, and the buffer is then mapped with glMapBufferRange and read back on the CPU. Is an atomic counter access an incoherent memory access? Is a glMemoryBarrier necessary between glDispatchCompute and glMapBufferRange, and is GL_BUFFER_UPDATE_BARRIER_BIT the correct bit here?
GLuint atomicCounterBuffer;
glGenBuffers(1, &atomicCounterBuffer);
glBindBuffer(GL_ATOMIC_COUNTER_BUFFER, atomicCounterBuffer);
...

glBindBufferBase(GL_ATOMIC_COUNTER_BUFFER, 0, atomicCounterBuffer);

glDispatchCompute(1, 1, 1);

glMemoryBarrier(GL_BUFFER_UPDATE_BARRIER_BIT);

glBindBuffer(GL_ATOMIC_COUNTER_BUFFER, atomicCounterBuffer);
void *mappedBuffer =
    glMapBufferRange(GL_ATOMIC_COUNTER_BUFFER, 0, sizeof(GLuint) * 3, GL_MAP_READ_BIT);
GLuint bufferData[3];
memcpy(bufferData, mappedBuffer, sizeof(bufferData));
glUnmapBuffer(GL_ATOMIC_COUNTER_BUFFER);
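The compute shader here just bumps three counters backed by that buffer, along these lines (a sketch; the counter names are illustrative, and the binding/offsets match the glBindBufferBase call above):

```glsl
#version 310 es
layout(local_size_x = 1) in;
// Three counters at offsets 0, 4, 8 in the buffer bound at binding point 0.
layout(binding = 0, offset = 0) uniform atomic_uint uCounters[3];

void main()
{
    // Each invocation increments all three counters.
    atomicCounterIncrement(uCounters[0]);
    atomicCounterIncrement(uCounters[1]);
    atomicCounterIncrement(uCounters[2]);
}
```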
