OpenGL - layout(shared) ?


Hello all,

I have a quick question for you guys, as I have been toying with OpenGL recently and trying to figure out how layout(shared) works. My understanding was that OpenGL would assign the same block offsets for all shader programs using the same uniform block when it is declared with layout(shared).

For instance, I have a shared uniform block:


layout(shared) uniform ubContext 
{
	mat4 	projectionViewMatrix;	
	mat4	projectionMatrix;
	mat4	viewMatrix;
	float	time;
	float   cosTime;
} Context;

What I am trying to do is share it across multiple programs. Essentially, I tried creating a single UBO from the block, filling it with data once at the very beginning of a frame, binding it, and expecting all the shader programs using it to be able to read from it.

This does not seem to work and I am currently having to create & bind one UBO per shader program.

Shouldn't the data be persistent across several draw calls? Do I need to explicitly assign the block indices myself (I thought shared would do it "automatically")?

Am I missing something there or is this not how the shared layout works?

Thanks in advance,

Try to use the std140 layout instead and see if it works or not.

When using the shared layout, you have to query OpenGL for the offsets of each element.

For example, there could be hidden padding for alignment reasons. This padding depends on each GL driver, and it could even change between driver versions.
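For reference, the offset query looks roughly like this with the shared layout (a sketch of mine; programId is assumed to be a linked program using the ubContext block from above):


// Query the block's index and total size for this program
GLuint blockIndex = glGetUniformBlockIndex(programId, "ubContext");
GLint blockSize = 0;
glGetActiveUniformBlockiv(programId, blockIndex, GL_UNIFORM_BLOCK_DATA_SIZE, &blockSize);

// Query each member's offset within the block. Note that members of a
// block declared with an instance name are queried with the block name
// as prefix ("ubContext.time"), not the instance name ("Context.time").
const GLchar* names[] = {
	"ubContext.projectionViewMatrix", "ubContext.projectionMatrix",
	"ubContext.viewMatrix", "ubContext.time", "ubContext.cosTime"
};
GLuint indices[5];
GLint offsets[5];
glGetUniformIndices(programId, 5, names, indices);
glGetActiveUniformsiv(programId, 5, indices, GL_UNIFORM_OFFSET, offsets);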

std140 layout avoids the need to query the offsets, as there are rules on how the memory is laid out.

But it has very conservative memory layout rules so that it works on every possible piece of hardware, and these rules are insane. Most importantly, they're so hard to follow that many driver implementations get them wrong (e.g. a vec3 always gets promoted to a vec4, and four aligned consecutive floats should be contiguous, but I've seen drivers wrongly promote each float to a vec4).

I prefer std140 because it avoids querying and the offsets are known beforehand. My GLSL declarations are all of type vec4 or mat4, and I use #define macros when the name of a variable matters for readability. Example:


layout(std140) uniform ubContext 
{
	mat4 	projectionViewMatrix;	
	mat4	projectionMatrix;
	mat4	viewMatrix;
        vec4    times; //times.z and times.w are not used (padding)
} Context;

#define currentTime Context.times.x
#define cosTime Context.times.y
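
The matching CPU-side data can then be a plain struct (a sketch of mine; with only mat4 and vec4 members, the std140 offsets coincide with tight packing):


// CPU-side mirror of the std140 block above. mat4 = 4 x vec4 = 64 bytes,
// vec4 = 16 bytes, so the struct is 208 bytes with no hidden padding.
struct ContextBlock
{
	float projectionViewMatrix[16];
	float projectionMatrix[16];
	float viewMatrix[16];
	float times[4]; // x = time, y = cosTime, z/w unused
};

// Upload in one shot; sizeof(ContextBlock) matches the std140 layout
glBufferData(GL_UNIFORM_BUFFER, sizeof(ContextBlock), &contextData, GL_DYNAMIC_DRAW);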

This way you don't need to query, the offsets are known beforehand, and you're safe from most driver implementation problems.

Cheers

Matias

Thanks a lot for your replies guys,

I have done a bit of debugging/troubleshooting, and from the looks of it, std140 and shared behave in similar ways in my case: i.e. std140 ensures a consistent layout across programs, so it can be used to share a block between shader programs (which is supported by the various websites and references I have read).

I believe the issue is that for some reason my binding points (glBindBufferBase) seem to be reset every frame. I am using SFML for a bunch of reasons (networking, window management, input & audio) and have had issues in the past with VAOs, whose compiled state was being reset as well.

My UBO creation code is the following:


GLuint bindingPoint = 0;

// Fixup binding point
glUniformBlockBinding( gl_shaderProgram->getProgramId(), uniformBlockIndex, bindingPoint );
		
GLuint uniformBufferId = 0;

// Generate the OpenGL uniform buffer
glGenBuffers(1, &uniformBufferId);
glBindBuffer(GL_UNIFORM_BUFFER, uniformBufferId);		

// Copy data over (if any)
glBufferData(GL_UNIFORM_BUFFER, blockSize, data, GL_DYNAMIC_DRAW);
glBindBufferBase(GL_UNIFORM_BUFFER, bindingPoint, uniformBufferId );

And my binding call is:


// Bind the OpenGL uniform buffer
glBindBuffer( GL_UNIFORM_BUFFER, gl_uniformBuffer->getBufferId() );

I believe the binding point is meant to be compiled into the UBO, so I should only need to call glBindBuffer - what I am finding currently is that I have to call glBindBufferBase again for the contents of the UBO to be propagated to both shader programs.

Are you querying the 'uniformBlockIndex' for all the programs your UBO is going to be used on? What happens if you call glUniformBlockBinding after glBindBufferBase? I.e., first attach the UBO to the binding slot, then bind it to the program's index for that UBO.

To be honest, I'm fuzzy on the specifics since I moved early on to fixed binding slots in the shader with arb_shading_language_420pack, which is widely supported.
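For reference, that looks like this in GLSL (a sketch using the block from this thread; with the extension, or GL 4.2+, the binding point is fixed in the shader and no glUniformBlockBinding call is needed):


#version 330
#extension GL_ARB_shading_language_420pack : require

// Every program declaring the block this way reads from binding point 0
layout(std140, binding = 0) uniform ubContext
{
	mat4 projectionViewMatrix;
	mat4 projectionMatrix;
	mat4 viewMatrix;
	vec4 times;
} Context;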

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator

Thanks for the reply,

The uniformBlockIndex is queried for each shader program that uses the uniform block - as a result, glUniformBlockBinding should be binding the correct uniform block index to the desired binding point (binding point 0 in my case). I tried changing the order of things, but it did not help.

From what I have seen, the uniform block binding is correct and remains unchanged: from my understanding, glUniformBlockBinding does not depend on any UBO being bound at the time anyway.
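In other words, the setup amounts to something like this per program (a sketch; shaderPrograms is a hypothetical list of my linked program ids):


// Done once at startup for every program that uses the block
for (GLuint programId : shaderPrograms)
{
	GLuint blockIndex = glGetUniformBlockIndex(programId, "ubContext");
	if (blockIndex != GL_INVALID_INDEX)
		glUniformBlockBinding(programId, blockIndex, 0);
}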

The issue seems to be that the UBO does not retain its binding point (glBindBufferBase) which I assigned at creation time. I would expect it to "memorize" it for the next time round when I go about binding the UBO with glBindBuffer.

This is not the case and I am having to explicitly rebind it to the desired binding point every frame. I read about people having issues with AMD cards which exhibited a similar behaviour to mine; unfortunately, I have an NVidia card, so it is unlikely to be the same problem.

In the meantime, I have tried posting on the OpenGL forums to see if anyone had an idea of how this is all meant to work.

To be honest, I'm fuzzy on the specifics since I moved early on to fixed binding slots in the shader with arb_shading_language_420pack, which is widely supported.

This is my plan long-term (it is already what I am doing with samplers), but it would not help with my issue, as it is the UBO which is not pointing at the correct binding point - as far as I am aware, the shader program is already correctly mapped to it.

Ok,

Finally got to the bottom of this thanks to a guy on the OpenGL forums. So for anyone else like me scratching their heads trying to figure out how UBO binding works, here it goes.

Both glBindBuffer & glBindBufferBase 'bind' a UBO - the difference lies in the fact that:

  • glBindBuffer: has no binding point - it is used mainly for mapping and unmapping the buffer and/or copying data across.
  • glBindBufferBase / glBindBufferRange: takes an explicit binding point, which is required in order to link with the shader programs' uniform blocks - this is what you want to use when binding the UBO prior to rendering (see the sketch below).
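
Put together, my per-frame code now looks something like this (a sketch; the names are from my earlier snippets):


// Update the buffer contents (no binding point needed for the upload)
glBindBuffer(GL_UNIFORM_BUFFER, uniformBufferId);
glBufferData(GL_UNIFORM_BUFFER, blockSize, data, GL_DYNAMIC_DRAW);

// Attach the UBO to binding point 0 - this is the call that links it
// to every shader program whose ubContext block is mapped to that point
glBindBufferBase(GL_UNIFORM_BUFFER, 0, uniformBufferId);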

