glMapBufferRange problem

Hi,
I'm having a problem mapping a uniform buffer (mPerObjectUbo) twice per frame in my scene. When I update it the second time, the update doesn't take effect. I found a partial fix using glFlush, but it forces a synchronization.

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
		
glm::mat4 projMatrix = mCam.GetProjMatrix();
glm::mat4 viewMatrix = mCam.GetViewMatrix();

// Update PerFrame UBO
glBindBuffer(GL_UNIFORM_BUFFER, mPerFrameUbo);
void * pData = glMapBufferRange(GL_UNIFORM_BUFFER,
								0, sizeof(UbPerFrame),
								GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);

auto * perFrame = reinterpret_cast<UbPerFrame*>(pData);
perFrame->gProjection = projMatrix;
perFrame->gDirLight = mDirLight;
perFrame->gPointLight = mPointLight;
perFrame->gSpotLight = mSpotLight;

glUnmapBuffer(GL_UNIFORM_BUFFER);
glBindBuffer(GL_UNIFORM_BUFFER, 0);

glUseProgram(mShader.GetProgram());

// Draw the Land.
glBindBuffer(GL_UNIFORM_BUFFER, mPerObjectUbo);
pData = glMapBufferRange(GL_UNIFORM_BUFFER, 0, sizeof(UbPerObject),
						GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
	
auto * perLand = reinterpret_cast<UbPerObject*>(pData);
perLand->gModelView = viewMatrix;
perLand->gModelViewInvTranspose = glm::inverseTranspose(viewMatrix);
perLand->gMaterial = mLandMat;

glUnmapBuffer(GL_UNIFORM_BUFFER);
glBindBuffer(GL_UNIFORM_BUFFER, 0);

glBindVertexArray(mLandVao);
glDrawElements(GL_TRIANGLES, mGridIndexCount, GL_UNSIGNED_INT, nullptr);
// glFlush(); // partial fix, but it forces a synchronization

// Draw the Waves.
glBindBuffer(GL_UNIFORM_BUFFER, mPerObjectUbo);
pData = glMapBufferRange(GL_UNIFORM_BUFFER, 0, sizeof(UbPerObject),
						GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);

// These writes do not take effect without glFlush.
auto * perWaves = reinterpret_cast<UbPerObject*>(pData);
perWaves->gModelView = viewMatrix * mWavesWorld;
perWaves->gModelViewInvTranspose = glm::inverseTranspose(viewMatrix * mWavesWorld);
perWaves->gMaterial = mWavesMat;

glUnmapBuffer(GL_UNIFORM_BUFFER);
glBindBuffer(GL_UNIFORM_BUFFER, 0);

glBindVertexArray(mWavesVao);
glDrawElements(GL_TRIANGLES, 3 * mWaves.TriangleCount(), GL_UNSIGNED_INT, nullptr);

glBindVertexArray(0);
glUseProgram(0);

Without glFlush:

Untitled.png

With glFlush:

Untitled1.png

Any suggestions would be appreciated, thanks.


Did you check glGetError?

 

Is there a reason you can't just make the buffer twice as big and map the second half with the second glMapBufferRange? We're only talking about a couple of matrices, right? So maybe 200 bytes? I think you want the performance, but you're unlikely to get it if you write->use->write->use the same block of memory in a short period of time. You're going to force a sync on that block of memory.


Did you check glGetError?

 

Yes, glGetError reports no error.

 

 

I think you want the performance, but you're unlikely to get it if you want to write->use->write->use on the same block of memory in a short period of time. You're going to force a synch on that block of memory.

 

Well, technically it's a different block of memory, since I'm orphaning the buffer with GL_MAP_INVALIDATE_BUFFER_BIT, so I don't know why it doesn't work.
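For reference, the explicit form of orphaning re-specifies the buffer's data store with glBufferData before mapping, which some drivers may handle differently from GL_MAP_INVALIDATE_BUFFER_BIT; a minimal sketch using the same mPerObjectUbo and UbPerObject from above:

```cpp
// Explicit orphaning: re-specifying the data store lets the driver hand back
// fresh memory while the GPU may still be reading the previous contents.
glBindBuffer(GL_UNIFORM_BUFFER, mPerObjectUbo);
glBufferData(GL_UNIFORM_BUFFER, sizeof(UbPerObject), nullptr, GL_STREAM_DRAW);

void * pData = glMapBufferRange(GL_UNIFORM_BUFFER, 0, sizeof(UbPerObject),
                                GL_MAP_WRITE_BIT);
// ... write the per-object data ...
glUnmapBuffer(GL_UNIFORM_BUFFER);
glBindBuffer(GL_UNIFORM_BUFFER, 0);
```

This is only a sketch; it needs a live GL context and the surrounding setup from the post.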

 

I tried your solution and it worked, but what if I have 200 objects, for example? Would it still be a good solution?

GLint uniformBufferAlignSize = 0;
glGetIntegerv(GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT, &uniformBufferAlignSize);
mUbPerObjectSize = sizeof(UbPerObject);
if (mUbPerObjectSize % uniformBufferAlignSize != 0)
	mUbPerObjectSize += uniformBufferAlignSize - (mUbPerObjectSize % uniformBufferAlignSize);

glGenBuffers(1, &mPerObjectUbo);
glBindBuffer(GL_UNIFORM_BUFFER, mPerObjectUbo);
glBufferData(GL_UNIFORM_BUFFER, 2 * mUbPerObjectSize, nullptr, GL_STREAM_DRAW);
glBindBuffer(GL_UNIFORM_BUFFER, 0);
----------------------------------------------------------------------------------------------------------------------------
// Draw the Land.
glBindBuffer(GL_UNIFORM_BUFFER, mPerObjectUbo);
glBindBufferRange(GL_UNIFORM_BUFFER, 1, mPerObjectUbo,
					0 * mUbPerObjectSize, sizeof(UbPerObject));
pData = glMapBufferRange(GL_UNIFORM_BUFFER, 0, sizeof(UbPerObject),
							GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
	
//set data

glUnmapBuffer(GL_UNIFORM_BUFFER);
glBindBuffer(GL_UNIFORM_BUFFER, 0);

glBindVertexArray(mLandVao);
glDrawElements(GL_TRIANGLES, mGridIndexCount, GL_UNSIGNED_INT, nullptr);
		
// Draw the Waves.
glBindBuffer(GL_UNIFORM_BUFFER, mPerObjectUbo);
glBindBufferRange(GL_UNIFORM_BUFFER, 1, mPerObjectUbo,
					1 * mUbPerObjectSize, sizeof(UbPerObject));
pData = glMapBufferRange(GL_UNIFORM_BUFFER, mUbPerObjectSize, sizeof(UbPerObject),
							GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
	
// set data

glUnmapBuffer(GL_UNIFORM_BUFFER);
glBindBuffer(GL_UNIFORM_BUFFER, 0);

Furthermore, here I'm using GL_MAP_INVALIDATE_BUFFER_BIT too, which discards the entire buffer, so it's almost the same, yet this version actually works.
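Scaling this from two objects to 200 is just the same offset math repeated per object; a minimal sketch of the arithmetic (AlignUp and ObjectOffset are illustrative helper names, not GL calls):

```cpp
#include <cstddef>

// Round a size up to the next multiple of align. For UBOs, align would come
// from glGetIntegerv(GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT, ...), a power of two.
constexpr std::size_t AlignUp(std::size_t size, std::size_t align)
{
    return (size + align - 1) / align * align;
}

// Byte offset of object i inside one big per-frame UBO, given the aligned stride.
constexpr std::size_t ObjectOffset(std::size_t i, std::size_t stride)
{
    return i * stride;
}
```

With these, you would allocate `objectCount * AlignUp(sizeof(UbPerObject), uniformBufferAlignSize)` bytes once, then call glBindBufferRange and glMapBufferRange at `ObjectOffset(i, stride)` for each object, exactly as in the two-object code above.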


It's great to hear the approach worked for you. It will still work for 200 objects; it will probably keep working long after you'd bottleneck on PCIe bandwidth. This is basically the same technique used with GL_MAP_UNSYNCHRONIZED_BIT to avoid spending forever in glClientWaitSync, so it's a pretty common thing.
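The GL_MAP_UNSYNCHRONIZED_BIT variant mentioned here typically pairs the flag with fence objects so you only wait when a region is genuinely still in use; a sketch under the assumption of a two-region ring in the same UBO (mFences and mFrameIndex are illustrative names, kept as persistent state):

```cpp
// Two regions in one UBO, protected by fences instead of orphaning.
// GLsync mFences[2] = { nullptr, nullptr }; would live as a class member.
const int region = mFrameIndex % 2;

if (mFences[region]) {
    // Block only if the GPU is still reading this region from two frames ago.
    glClientWaitSync(mFences[region], GL_SYNC_FLUSH_COMMANDS_BIT, GLuint64(-1));
    glDeleteSync(mFences[region]);
    mFences[region] = nullptr;
}

glBindBuffer(GL_UNIFORM_BUFFER, mPerObjectUbo);
glBindBufferRange(GL_UNIFORM_BUFFER, 1, mPerObjectUbo,
                  region * mUbPerObjectSize, sizeof(UbPerObject));
void * pData = glMapBufferRange(GL_UNIFORM_BUFFER,
                                region * mUbPerObjectSize, sizeof(UbPerObject),
                                GL_MAP_WRITE_BIT | GL_MAP_UNSYNCHRONIZED_BIT);
// ... write the per-object data, glUnmapBuffer, issue the draw call ...

// After the draw that reads this region, fence it for reuse two frames later:
mFences[region] = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
```

This is a sketch, not a drop-in implementation; it needs a live GL context and the rest of the frame loop around it.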

 

If you happen to be using a beta driver, you might want to avoid that while developing, since minor timing issues are more common in beta drivers. I don't know enough about this specific area to say definitively whether it's a driver bug, but it feels like a timing issue in the driver.



If you happen to be using a beta driver, you might want to avoid that while developing, since minor timing issues are more common in beta drivers. I don't know enough about this specific area to say definitively whether it's a driver bug, but it feels like a timing issue in the driver.

I'm not using beta drivers; I'm using the integrated GPU of my Intel CPU (Intel HD 4000). From what I've heard, Intel doesn't have good OpenGL drivers, so maybe that's the problem. I'll try to test it on another machine.

Thanks for your help.
