Good design of a VertexBuffer class

7 comments, last by irreversible 11 years, 2 months ago

OK, so I have written my vertex buffer class the way I thought it should be, but it seems very messy. For instance, it needs the shader program ID to know where the attributes are (this can only be a bad thing, right? A VBO should be able to work with multiple shaders easily). Please could someone have a look and tell me what should be there and what shouldn't?

It also doesn't seem right that I'm having to come back in and call disableAttributes() after the draw call; there must be a cleaner way of doing things.

I think it's a case of not having clear responsibilities.

.h


#ifndef KVERTEXBUFFER_H
#define KVERTEXBUFFER_H

#include "KGraphicsLib.h"
#include "KShaderManager.h"

enum EFormatBitMask
{
    FORMAT_POSITION = (1u << 0),
    FORMAT_UV       = (1u << 1),
    FORMAT_NORMAL   = (1u << 2),
    FORMAT_TANGENT  = (1u << 3)
};

class CVertexBuffer
{
public:
    struct CAttrib
    {
        CAttrib(GLuint i, GLint s, int o)
        {
            index = i;
            size = s;
            offset = (GLvoid*)(size_t)o; // cast via size_t to avoid int-to-pointer truncation warnings on 64-bit builds
        }
        GLuint index; // location that it's bound to
        GLint size; // component count (float[2] has size 2)
        GLvoid* offset;
    };

    
                CVertexBuffer();
    
    virtual     ~CVertexBuffer();

    void        init();
    void        destroy();

    void        uploadData(GLint programID, unsigned int format, float* data, int vertexCount);
    void        pointToData();
    void        disableAttributes();
private:
    GLuint identifier;
    GLsizei stride;
    vector<CAttrib*> attribs;
};

#endif  // #ifndef KVERTEXBUFFER_H

.cpp


#include "KVertexBuffer.h"

CVertexBuffer::CVertexBuffer()
{
    init();
}

CVertexBuffer::~CVertexBuffer()
{
    destroy();
}

void CVertexBuffer::init()
{
    glGenBuffers(1, &identifier);
    GetError();
}

void CVertexBuffer::destroy()
{
    glDeleteBuffers(1, &identifier);
    // free the attribute descriptions so they don't leak
    for (vector<CAttrib*>::iterator it = attribs.begin(); it != attribs.end(); ++it) {
        delete *it;
    }
    attribs.clear();
}

void CVertexBuffer::pointToData()
{
    glBindBuffer(GL_ARRAY_BUFFER, identifier); // bind once, not once per attribute
    for (vector<CAttrib*>::iterator it = attribs.begin(); it != attribs.end(); ++it) {
        CAttrib* attrib = *it;
        glEnableVertexAttribArray(attrib->index);
        glVertexAttribPointer(attrib->index,   // location
                              attrib->size,    // component count
                              GL_FLOAT,
                              GL_FALSE,        // normalized?
                              stride,
                              attrib->offset);
        GetError();
    }
}

void CVertexBuffer::disableAttributes()
{
    vector<CAttrib*>::iterator it = attribs.begin();
    for (; it != attribs.end(); ++it) {
        glDisableVertexAttribArray((*it)->index);
    }
}


void CVertexBuffer::uploadData(GLint programID, unsigned int format, float* data, int vertexCount)
{
    stride = 0;
    CAttrib* attr;
    // Note: the attribute names below ("position", "uv", ...) are placeholders and
    // must match the declarations in your GLSL; passing "" makes
    // glGetAttribLocation() return -1.
    if ((format & FORMAT_POSITION) == FORMAT_POSITION) {
        attr = new CAttrib(glGetAttribLocation(programID, "position"), 3, stride);
        stride += attr->size * sizeof(GLfloat);
        attribs.push_back(attr);
    }
    if ((format & FORMAT_UV) == FORMAT_UV) {
        attr = new CAttrib(glGetAttribLocation(programID, "uv"), 2, stride);
        stride += attr->size * sizeof(GLfloat);
        attribs.push_back(attr);
    }
    if ((format & FORMAT_NORMAL) == FORMAT_NORMAL) {
        attr = new CAttrib(glGetAttribLocation(programID, "normal"), 3, stride);
        stride += attr->size * sizeof(GLfloat);
        attribs.push_back(attr);
    }
    if ((format & FORMAT_TANGENT) == FORMAT_TANGENT) {
        attr = new CAttrib(glGetAttribLocation(programID, "tangent"), 4, stride);
        stride += attr->size * sizeof(GLfloat);
        attribs.push_back(attr);
    }

    glBindBuffer(GL_ARRAY_BUFFER, identifier);
    glBufferData(GL_ARRAY_BUFFER, stride * vertexCount, data, GL_STATIC_DRAW);
}

Thank you

Jack


I would separate the vertex buffer itself from the vertex attributes. This will decouple the vertex buffer from a program, and allow you to spread attributes across multiple buffers.

How should pointToData() know the attribute locations needed for glVertexAttribPointer() then? Should I just pass an array of the locations through?

There are two ways I would consider:

1) the shader-aware approach (more advanced, more pointless overall) that fills the buffer with all attributes required by the shader, even if they're dummies

2) the "provoking vertex" approach: the VBO contains only those attributes that are set before the first vertex and after Begin(). After this the format becomes locked and defining other properties has no effect. Consider this:

Begin(TRIANGLES);
Color();
Normal3D();
TexCoord2D();
Vertex3D();   // <-- provoking vertex, locking in only color, UV and normal
TexCoord2D();
Vertex3D();   // <-- uses same color and normal as the first vertex, but has its own UV
Tangent3D();  // <-- ignored: not defined before the provoking vertex
TexCoord3D();
Vertex3D();
End();
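For what it's worth, the locking rule described above could be sketched in plain C++ like this (ImmediateRecorder and every name in it are made up for illustration, not part of any real API):

```cpp
#include <set>
#include <string>

// Sketch of the "provoking vertex" rule: attributes set between Begin() and
// the first Vertex3D() define the locked vertex format; attributes introduced
// after the lock are silently ignored.
class ImmediateRecorder {
public:
    void Begin()      { locked = false; format.clear(); vertexCount = 0; }
    void Color()      { touch("color"); }
    void Normal3D()   { touch("normal"); }
    void TexCoord2D() { touch("uv"); }
    void Tangent3D()  { touch("tangent"); }
    void Vertex3D()   { locked = true; ++vertexCount; } // provoking vertex locks the format
    const std::set<std::string>& Format() const { return format; }
    int vertexCount = 0;
private:
    void touch(const std::string& attr) {
        if (!locked) format.insert(attr); // before the provoking vertex: part of the format
        // after the lock, unknown attributes are ignored
    }
    bool locked = false;
    std::set<std::string> format;
};
```

Replaying the sequence from the post, the format ends up containing only color, normal and UV; the Tangent3D() call after the first Vertex3D() has no effect.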

As was mentioned, what is inside the shader and what is inside the vertex buffer should be decoupled.


I use my own shader language which uses semantics (exactly like in HLSL) to specify to the shader after compilation which attributes are POSITION, TEXCOORD, NORMAL, etc.


Since there is a fixed number of semantic types and a fixed number of indices for each type (POSITION0, POSITION1, POSITION2, …, POSITION15) the shader builds a 2D table of attribute locations for each semantic.

When it is time to draw, the vertex buffer uses this table to activate attributes as follows (called just before the actual render call, which may end up being called on an index buffer rather than a vertex buffer):


	/**
	 * Prepare for rendering.
	 */
	LSVOID LSE_CALL COpenGlVertexBuffer::PrepareToRenderApi() {
		assert( CFnd::GetShader() );
		CCriticalSection::CLockerS lsLockBindBuffer( m_csOglCrit );
		COpenGl::glBindBuffer( GL_ARRAY_BUFFER, m_uiVboId );	// Even if m_uiVboId is 0, that is perfect.
		// Set vertex attribute locations.
		for ( LSUINT32 I = m_vVertexAttribPointers.Length(); I--; ) {
			const LSG_VERTEX_ATTRIB_POINTER * pvapThis = &m_vVertexAttribPointers[I];
			GLint iAttrib = CFnd::GetShader()->GetAttributeLocs()[pvapThis->ui16LocClass].iLoc[pvapThis->ui16LocUsage];
			if ( iAttrib != -1 ) {
				COpenGl::glEnableVertexAttribArray( iAttrib );
				COpenGl::glVertexAttribPointer( iAttrib,
					pvapThis->ui8Elements,
					pvapThis->ui32Type,
					pvapThis->ui8Normalized,
					m_siStride,
					reinterpret_cast<LSUINT8 *>(pvapThis->pvPointer) + m_ui32OffsetApi );
				CFnd::AssertError();
			}
		}
	}
The vertex buffer knows what attributes it will use and stores them in its own linear array called m_vVertexAttribPointers.
A simple for () loop goes over that, gets each attribute type (e.g. POSITION) and usage index (e.g. the 0 in POSITION0), and quickly looks up the location.
The shaders have preprocessed all of their locations in a 2D array and the vertex buffer has preprocessed all of its attribute data (size, type, elements, etc.) making the system very efficient at run-time.


I was able to do this because I wrote a new shader language.
I expect you don't really want to go that far; however, this should give you some inspiration as to how the locations can stay on the shader side and how the vertex buffer can look them up efficiently at run-time. This decoupling between shader and vertex buffer is very important.
You should be able to think of a simpler way to automate the process, where the shader looks up information on all of its uniforms ahead of time. It is fairly easy to add a custom preprocessing step over your shaders that extracts information you have embedded using custom characters.

You don’t have to invent a whole new language. Just change the way you declare uniforms to something like this:
@uniform vec4 g_vPosition : POSITION0

You can write a very simple parser that looks for lines beginning with @uniform, parses the variable name and usage data, and then inserts into the shader a valid declaration such as:
uniform vec4 g_vPosition;
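A minimal sketch of such a parser might look like this (StripSemantics is a made-up name, and a real preprocessor would also record the extracted semantic in a lookup table rather than discard it):

```cpp
#include <sstream>
#include <string>

// Rewrite lines of the form
//   @uniform vec4 g_vPosition : POSITION0
// into plain GLSL declarations:
//   uniform vec4 g_vPosition;
// The "@uniform ... : SEMANTIC" syntax is the one suggested in the post.
std::string StripSemantics(const std::string& source) {
    std::istringstream in(source);
    std::ostringstream out;
    std::string line;
    while (std::getline(in, line)) {
        std::size_t at    = line.find("@uniform");
        std::size_t colon = line.find(':');
        if (at != std::string::npos && colon != std::string::npos && colon > at) {
            // drop the '@', keep everything up to the ':'
            std::string decl = line.substr(at + 1, colon - at - 1);
            while (!decl.empty() && decl.back() == ' ') decl.pop_back();
            out << decl << ";\n";
            // line.substr(colon + 1) holds the semantic ("POSITION0") and would
            // be stored in the shader's semantic table here
        } else {
            out << line << '\n';
        }
    }
    return out.str();
}
```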



L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid

Very interesting thank you everyone.

L. Spiro, would you mind explaining how calling CFnd::GetShader() returns the current shader in this line, it seems a good way of decoupling things?


GLint iAttrib = CFnd::GetShader()->GetAttributeLocs()[pvapThis->ui16LocClass].iLoc[pvapThis->ui16LocUsage];
 

Thanks

Jack

There is a simpler way: just bind the same locations for each shader, or use the layout qualifier: layout(location = ...). This way you don't have to query locations at runtime, so you're decoupled from the shader. No need to invent a special semantic for this; just stick to a naming convention.

A VertexAttr enum that defines locations for our vertex attribute semantics:


enum class VertexAttr
{
    POSITION = 0,
    COLOR,
    NORMAL,
    TEX0,
    TEX1,
    MATERIAL,
    TANGENT,
    BLEND_INDEX,
    BLEND_WEIGHT,
};

Using binding (note that glBindAttribLocation() must be called before glLinkProgram()):


glBindAttribLocation(program, static_cast<GLuint>(VertexAttr::POSITION),        "position");
glBindAttribLocation(program, static_cast<GLuint>(VertexAttr::COLOR),           "color");
glBindAttribLocation(program, static_cast<GLuint>(VertexAttr::NORMAL),          "normal");
glBindAttribLocation(program, static_cast<GLuint>(VertexAttr::TEX0),            "tex0");  

This way every shader will use the same locations. The only drawback is that you're forced to use the specified names in GLSL, so when you want VertexAttr::POSITION you have to use the name "position" in your shaders. This is not really a problem, but if for some reason you want flexibility in naming, another way to do this is to predefine the attributes in your C++ code (same as before: an enum VertexAttr with 0..n attribute locations) and prepend them to each shader you compile:


// GLSL code prepended to each shader source

#define POSITION 0
#define COLOR    1
#define NORMAL   2
#define TEX0     3

This requires a bit more information than just the enum: you need an array that maps enum VertexAttr to its GLSL semantic ("POSITION", "COLOR", ...), but you get the idea:


const char* VertexAttrSemantic[] = {
    "POSITION",    /* VertexAttr::POSITION */
    "COLOR",       /* VertexAttr::COLOR */
    ...
};

Then you can loop over these structures and create the GLSL definitions that you compile with each shader. This allows you to use the #defines in the layout qualifier like this:


// GLSL

layout(location=POSITION) in vec3 myownname;
layout(location=TEX0) in vec2 texcoord;
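For illustration, the prepending step could be sketched like this (BuildAttribProlog and the four-entry table are assumptions for the sketch, not code from the post):

```cpp
#include <sstream>
#include <string>

// Build the GLSL prolog from the semantic table: one "#define NAME index"
// line per attribute, prepended to every shader source before compilation.
std::string BuildAttribProlog() {
    static const char* kVertexAttrSemantic[] = {
        "POSITION", "COLOR", "NORMAL", "TEX0", // same order as enum VertexAttr
    };
    std::ostringstream prolog;
    const unsigned count = sizeof(kVertexAttrSemantic) / sizeof(kVertexAttrSemantic[0]);
    for (unsigned i = 0; i < count; ++i)
        prolog << "#define " << kVertexAttrSemantic[i] << ' ' << i << '\n';
    return prolog.str(); // prepend this string to the source passed to glShaderSource()
}
```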

So these are two ways you can decouple your vertex attribute semantics from the shader and still have a lot of flexibility in how you name/define attributes in GLSL. I like approaches that don't create the need to invent a whole new semantic, a language within a language, etc., and that can be used by people who only know GLSL. I think these two ways are more than enough for the average user; but again, I'm not writing some big, universal engine. If you want to keep it simple, that would be the way to go IMO.


Where are we and when are we and who are we?
How many people in how many places at how many times?

Firstly, I want to say that I am not an advocate of globals, and the reasons against them are quite sound. But I elected to simplify my architecture in this case by representing the graphics state as a single per-application global instead of making my renderer instance-based. It is certainly a valid option to make yours instance-based. I opted for a design that ensures only one rendering device exists and can be accessed by anything from anywhere (while maintaining logical hierarchies such that only modules that actually deal with graphics have access to it), in order to avoid having to pass a graphics object around. It is just a design decision, not provably more correct than going fully instance-based.

So with this in mind, things are fairly simple.

Only one shader can be active at a time. Shaders are instances, certainly, but when you activate one, a global pointer to that shader is stored inside the CFnd class (all of its methods and members are static, so it is entirely a global encapsulation of the render state).

CFnd::SetShader() to set the current shader.

CFnd::GetShader() to get the current shader.

When a shader is destroyed, its destructor checks whether it is set as the current shader, and calls CFnd::SetShader( NULL ) if it is, so you can't have lingering pointers to dead shaders. It is a bit more complex than that to truly guarantee this, but that is the basic idea.

Finally, CFnd::Render() is called to actually render anything. First you set shaders, vertex buffers, index buffers, etc. The order doesn't matter, because all of these setters just store pointers. Once everything is set, call CFnd::Render().

Inside CFnd::Render(), things are made to happen in the proper order. That is, the shader pointer must already be set (which is guaranteed, since it was done before the call to CFnd::Render()) before the vertex buffers can get attribute locations. In other words, CFnd::Render() is what actually calls PrepareToRender() on the set vertex buffers.

This setup is actually very important for optimizations as it allows you to greatly reduce redundant state changes and to allow state changes, including vertex-buffer setting and shader setting, to be done in any order.
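The set-then-render flow described above could be sketched with stub types like this (Fnd, Shader and VertexBuffer here are simplified stand-ins for illustration, not the real CFnd API):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Stub types: the setters only store pointers, and Render() enforces the
// ordering so the shader is known before the buffer resolves its locations.
struct Shader { std::string name; };

struct VertexBuffer {
    void PrepareToRender(const Shader& s, std::vector<std::string>& log) {
        log.push_back("prepare with " + s.name); // attribute locations looked up here
    }
};

class Fnd { // analogous in spirit to CFnd: a fully static render state
public:
    static void SetShader(Shader* s)             { shader = s; }
    static void SetVertexBuffer(VertexBuffer* v) { buffer = v; }
    static void Render(std::vector<std::string>& log) {
        assert(shader && buffer);              // both must be set by now
        buffer->PrepareToRender(*shader, log); // deferred until the shader is known
        log.push_back("draw");
    }
private:
    static Shader* shader;
    static VertexBuffer* buffer;
};
Shader* Fnd::shader = nullptr;
VertexBuffer* Fnd::buffer = nullptr;
```

Because the setters only store pointers, SetShader() and SetVertexBuffer() can be called in any order; the actual attribute lookup always happens inside Render().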

L. Spiro


I love how your data types have the LS prefix for L. Spiro :)

This topic is closed to new replies.
