masterbubu

GPU skinning and VBO

Recommended Posts

Hi,

 

I have been using a GPU skinning system for some time that performs well with vertex arrays.

 

I decided to try using VBOs with the same setup I have now, just replacing the vertex arrays with VBOs (vertices, weights, indices).

 

For some reason I get bad deformations.

 

However, if I keep the vertex data in a VBO and the weights/indices in vertex arrays, the render looks fine.

 

I'm asking for some directions to explore. Assuming that the weights and the indices are stored properly in the VBOs, is the data in a vertex array stored differently than in a VBO? I'm just trying to figure out what can cause the problem.

 

Thanks! :)

 

 

 


Assuming that the weights and the indices are stored properly in the VBOs, is the data in a vertex array stored differently than in a VBO?

I really would double-check that assumption first; perhaps a little bug while building or using the VBOs? Can you post some code?
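One cheap way to double-check that assumption is to pin down the source struct's layout at compile time, before building `glBufferSubData` arguments from `sizeof(...)`. A minimal sketch; `vec4` and `sBoneData` here are stand-ins for the OP's types, which I'm assuming are tightly packed:

```cpp
#include <cassert>
#include <cstddef>

// Stand-ins for the OP's math types (assumed tightly packed).
struct vec4 { float x, y, z, w; };

struct sBoneData
{
    vec4 vWeights,
         vIndices;
};

// If any of these fire, the sizes and offsets passed to
// glBufferSubData / glVertexAttribPointer are wrong for this struct.
static_assert(sizeof(vec4) == 4 * sizeof(float), "vec4 must be tightly packed");
static_assert(offsetof(sBoneData, vWeights) == 0, "weights expected first");
static_assert(offsetof(sBoneData, vIndices) == sizeof(vec4), "indices expected right after weights");
static_assert(sizeof(sBoneData) == 2 * sizeof(vec4), "no padding expected");
```

If these hold and the deformation is still wrong, the bug is in how the buffer is filled or bound, not in the struct layout.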


Having done GPU skinning with VBOs, I can tell you it works.

 

Getting the offsets right in the OpenCL kernel (which is what I'm using; you could also reasonably be using transform feedback instead, but you didn't say) was somewhat tricky. As requested above, post some code.
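For illustration, the offset bookkeeping I mean, for three attribute blocks packed back-to-back in one buffer, is pure arithmetic and easy to check on its own. No GL in this sketch, and the layout is just an example:

```cpp
#include <cassert>
#include <cstddef>

// Byte offsets for three blocks packed back-to-back in one buffer:
// [ positions (vec3) ][ weights (vec4) ][ bone indices (vec4) ]
struct BlockOffsets { size_t positions, weights, indices, total; };

BlockOffsets computeOffsets(size_t vertexCount)
{
    const size_t vec3Size = 3 * sizeof(float);
    const size_t vec4Size = 4 * sizeof(float);

    BlockOffsets o;
    o.positions = 0;
    o.weights   = o.positions + vertexCount * vec3Size;
    o.indices   = o.weights   + vertexCount * vec4Size;
    o.total     = o.indices   + vertexCount * vec4Size;
    return o;
}
```

With a VBO bound, each of these byte offsets is what goes into the last parameter of `glVertexAttribPointer` (cast to `void*`); with client-side vertex arrays that same parameter is a real pointer. That is exactly the difference the OP is asking about.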


Hi, I've cut the code down to the parts relevant to the issue.

 

I still haven't managed to spot the problematic area.

 

Please note that the same code and shaders work with vertex arrays, but not with VBOs.

struct sBoneData
{
	vec4 vWeights,
	vIndices;
};

Vertex Arrays

	glVertexAttribPointer( iPositionAttribLoc, 3, GL_FLOAT, GL_FALSE,  sizeof( vec3 ), &pGeo->vVertices[0].x );
    glEnableVertexAttribArray( iPositionAttribLoc );

	glVertexAttribPointer( iBoneWeightsAttribLoc, 4, GL_FLOAT, GL_FALSE, sizeof( sBoneData ), &pGeo->vBonesData[0].vWeights.x );
	glEnableVertexAttribArray( iBoneWeightsAttribLoc );
	           
	glVertexAttribPointer( iBoneIndexesAttribLoc, 4, GL_FLOAT, GL_FALSE, sizeof( sBoneData ), &pGeo->vBonesData[0].vIndices.x );
	glEnableVertexAttribArray( iBoneIndexesAttribLoc );
	

	unsigned int *ptr = &pGeo->vFaces[ m_pObj->iFacesStartPos ].uIndex[ 0 ];
	unsigned int  sz = (unsigned int)( m_pObj->iFacesEndPos - m_pObj->iFacesStartPos ) * 3;
	glDrawElements( GL_TRIANGLES, sz, GL_UNSIGNED_INT, ptr );

VBO

Generate ...
{
	
	glGenBuffers(3, m_Buffers);

	glBindBuffer(GL_ARRAY_BUFFER, m_Buffers[0]);
	
	size_t size = sizeof(vec3) * pGeo->vVertices.size(); 

	glBufferData(GL_ARRAY_BUFFER, size ,0 , GL_STATIC_DRAW);

	size_t startOff = 0,
		   currSize = sizeof(vec3) *pGeo->vVertices.size();
	
	m_VertexOffset = startOff;

	glBufferSubData(GL_ARRAY_BUFFER, startOff, currSize, &pGeo->vVertices[0].x );

	glBindBuffer(GL_ARRAY_BUFFER, m_Buffers[1]);

	// Indices and weights
	size_t size2 = sizeof(vec4) *pGeo->vBonesData.size() * 2; 

	glBufferData(GL_ARRAY_BUFFER, size2 ,0 , GL_STATIC_DRAW);

	startOff = 0;
	currSize = sizeof(vec4) *pGeo->vBonesData.size();
	m_BonesWeightsOffset = startOff;

	glBufferSubData(GL_ARRAY_BUFFER, startOff, currSize, &pGeo->vBonesData[0].vWeights.x );

	startOff += currSize;
	currSize = sizeof(vec4) *pGeo->vBonesData.size();
	m_BonesIndicesOffset = startOff;

	glBufferSubData(GL_ARRAY_BUFFER, startOff,currSize,	&pGeo->vBonesData[0].vIndices.x  );

	glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_Buffers[2]);

	unsigned int *ptr = &pGeo->vFaces[ m_pObj->iFacesStartPos ].uIndex[ 0 ];
	unsigned int  sz = (unsigned int)( m_pObj->iFacesEndPos - m_pObj->iFacesStartPos ) * 3;
	glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(unsigned int) * sz, ptr, GL_STATIC_DRAW);

	glBindBuffer(GL_ARRAY_BUFFER, 0);
	glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
}


Bind ...
{
	glBindBuffer(GL_ARRAY_BUFFER, m_Buffers[0]);
		
	if ( handlers[0] != -1 && m_VertexOffset != -1)
	{
		//glBindBuffer(GL_ARRAY_BUFFER, m_Buffers[0]);
		glEnableVertexAttribArray( handlers[0] );
		glVertexAttribPointer( handlers[0] ,3,GL_FLOAT, GL_FALSE,  sizeof( vec3 ), reinterpret_cast<void*>( m_VertexOffset ));
	}
	glBindBuffer(GL_ARRAY_BUFFER, m_Buffers[1]);

	if ( handlers[5] != -1 &&  m_BonesWeightsOffset != -1 )
	{
		glVertexAttribPointer( handlers[1], 4, GL_FLOAT, GL_FALSE, sizeof( sBoneData ), reinterpret_cast<void*>(m_BonesWeightsOffset) );
		glEnableVertexAttribArray( handlers[1] );
	}
	if ( handlers[6] != -1 &&  m_BonesIndicesOffset != -1 )
	{
		glVertexAttribPointer( handlers[2], 4, GL_FLOAT, GL_FALSE, sizeof( sBoneData ), reinterpret_cast<void*>(m_BonesIndicesOffset) );
		glEnableVertexAttribArray( handlers[2] );
	}
      
	glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_Buffers[2]);
}

Draw ...
{
	glDrawElements( GL_TRIANGLES, m_uNumElements, GL_UNSIGNED_INT, 0 );
}

UnBind ...
{

	for ( int i = 0 ; i < size ; ++i )
		if ( handlers[i] != -1 )
			glDisableVertexAttribArray( handlers[i] );

	glBindBuffer(GL_ARRAY_BUFFER, 0);
	glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
}
	
	

 

 


I got it working on GL ES 2 with VBOs, so it's definitely possible.

 

I'm not really seeing anything wrong with the CPU-side code. I'd make doubly sure all offsets, strides, and sizes are correct, and make sure that inside the shader the float bone indices are cast to int.

 

I'm not 100% sure, since I didn't look at the generating code too carefully, but I'd recommend an interleaved VBO, where each vertex is laid out as (position, normal, texCoord, boneWeights, boneIndices), rather than blocks of (position, position, position, ... texCoord, texCoord, texCoord, ...). Interleaving improves cache locality when the vertex attributes are fetched.
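An interleaved layout like the one I'm describing could look like this (the attribute set is just an example, not the OP's exact format):

```cpp
#include <cassert>
#include <cstddef>

struct vec2 { float x, y; };
struct vec3 { float x, y, z; };
struct vec4 { float x, y, z, w; };

// One record per vertex; the stride for every attribute is sizeof(Vertex),
// and each attribute's byte offset falls out of offsetof, e.g.:
//   glVertexAttribPointer(loc, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
//                         (void*)offsetof(Vertex, normal));
struct Vertex
{
    vec3 position;
    vec3 normal;
    vec2 texCoord;
    vec4 boneWeights;
    vec4 boneIndices;
};
```

One upload of a `Vertex[]` then replaces the separate `glBufferSubData` calls, which also removes a whole class of offset mistakes.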

 

And it looks like you were doing something with ELEMENT_ARRAY_BUFFER but not calling glDrawRangeElements. Maybe you have an index buffer object but aren't using it, and are instead drawing "garbage" by drawing the straight VBO as-is.

Edited by ill


And it looks like you were doing something with ELEMENT_ARRAY_BUFFER but not calling glDrawRangeElements.  

 

The OP is using glDrawElements:

glDrawElements( GL_TRIANGLES, m_uNumElements, GL_UNSIGNED_INT, 0 );


One limitation that comes to mind, particularly on GL ES, is that the hardware may not support 32-bit indices. True, the worst this should do is drop you back to software emulation in the vertex pipeline, but that assumes a good, conformant driver. It may be worth switching to GL_UNSIGNED_SHORT with 16-bit indices all the same.


Oh, that's right. I'm so used to seeing glDrawRangeElements that I automatically assumed he wasn't using an IBO when I saw that call.

 

Yeah, another hardware thing: indices have to be 16-bit. I made a horrible mistake where I did switch the index type to 16-bit (since 32-bit would otherwise result in INVALID_ENUM), but I didn't make the array itself an array of 16-bit indices. The PC version of my engine was using 32-bit indices, I hadn't changed my mesh-loading code to account for that, and I spent about three days wondering why my meshes weren't rendering right. So make sure it's an array of 16-bit indices on the CPU side before uploading as well, or the data will be complete garbage.
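A conversion along those lines might look like this (hypothetical helper, not from my engine):

```cpp
#include <cassert>
#include <cstdint>
#include <stdexcept>
#include <vector>

// Narrow a 32-bit index array to 16 bits so it can be uploaded and
// drawn as GL_UNSIGNED_SHORT; refuses meshes whose indices don't fit.
std::vector<std::uint16_t> narrowIndices(const std::vector<std::uint32_t>& in)
{
    std::vector<std::uint16_t> out;
    out.reserve(in.size());
    for (std::uint32_t i : in)
    {
        if (i > 0xFFFFu)
            throw std::runtime_error("index exceeds 65535; split the mesh");
        out.push_back(static_cast<std::uint16_t>(i));
    }
    return out;
}
```

The range check is the important part: it turns the silent "complete garbage" case into a loud loading error.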

 

Now both my PC and mobile engines use 16 bit indices and it's all good.

