Organizing mesh and buffer classes

Started by
4 comments, last by Promit 11 years, 1 month ago

Hi. So far I have this thing that can render a heightmap with per-vertex color and per-vertex shading (huzzah for next-gen graphics!).

The thing is that the GL class I wrote is tailored to render that specific heightmap and nothing else, so my next step would be to refactor it so it can render several objects (and maybe use several program objects). But I'm at a loss about how I should set up my mesh class, how to handle the resulting mesh objects, and how to relate them to their VBOs and such.

Currently DataMesh holds an array of floats with the already-processed (interleaved color, normal, position) data, plus a FloatBuffer for passing the data to OpenGL (perils of the Java Native Interface). But I still have to set up something that connects the mesh with the OpenGL-specific things, i.e., Vertex Array Objects, Vertex Buffer Objects, and the shaders that will be used to render that mesh.

What should I store in my mesh class? Is it OK to store API-specific things in it (i.e., the VBO id, maybe the ids of the shaders it uses)? Or should I set up some manager to do that? In which case, how would that manager operate? What data would it contain? When generating VBOs, is there a "generic" way to handle attribute pointers, or should I make a method for each possible VBO configuration (i.e., one method for vertex + color, another for vertex + color + normal, another for vertex + normal + texture coords, etc.)?

On an unrelated note, is it okay to assume that every vertex passed in will always have its w component equal to 1.0f?

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator


For me, a Mesh is essentially a VAO, plus some parameters for topology (triangle lists, strips, etc.) and offsets. It's composed from vertex buffers, an index buffer, and a "vertex declaration" (D3D9 terminology) slash "input layout" (D3D10+ terminology), which is essentially an array of the data that you wind up passing to glVertexAttribPointer. It tends to look like this:


struct VertexElement {
    unsigned int Index;            // attribute slot (e.g. VE_Position)
    int Size;                      // number of components (1-4)
    int Type;                      // component type, e.g. GL_FLOAT
    unsigned int Stride;           // byte distance between consecutive vertices
    unsigned int Offset;           // byte offset of this attribute within a vertex
    class GraphicsBuffer* Buffer;  // the vertex buffer this attribute reads from
    unsigned int Divisor;          // instancing divisor (0 = per-vertex data)
};

// Divisor is omitted below, so aggregate initialization zero-fills it (per-vertex).
VertexElement ve[] = {
    { VE_Position, 3, GL_FLOAT, sizeof(SimpleVertex), offsetof(SimpleVertex, position), rawVb },
    { VE_TexCoord, 2, GL_FLOAT, sizeof(SimpleVertex), offsetof(SimpleVertex, texcoord), rawVb },
    { VE_Diffuse,  4, GL_FLOAT, sizeof(SimpleVertex), offsetof(SimpleVertex, color),    rawVb },
};
 

That plus an index buffer is enough information to recompose the relevant GL calls to create a VAO. Each of these things gets paired with a material and a few other things (bounding volumes for example) and away we go.

SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.


I agree with Promit's approach above. In addition, over the past year or so I have started including the type of draw call with the 'geometry' classes in my engine. That gives you the freedom to use whatever pipeline execution method makes the most sense for that geometry (i.e., indexed or not, instanced or not, etc.). In general, I consider any inputs into the pipeline that are used for vertex assembly to be part of the geometry, along with a method for configuring those inputs and executing the draw call.

That keeps everything in a nice clean package, fully encapsulating the concept of a mesh.

Hm, thanks, so putting the OpenGL-specific stuff in the Mesh class is reasonable after all...

And my other question? (w coordinate value)


That is what I normally do: just expand the R3-based vertex to R4 in the vertex shader when I do the multiply. That coordinate is only there to allow you to do perspective projection, so you can use it as you see fit with your projection matrices. If you don't need to do projection, then you don't even need the w value at all and can just set it to 1 in the output vertex position.

The only time I have heard of not setting it to 1 is to perform a cheap scaling of the vertex position: if you set the w value to something other than 1, then when the rasterizer does the w divide it will scale the position. That isn't really necessary in most cases, so I would say that you can safely assume w = 1 in most cases.

Note that GL automatically sets w to 1 when it's unspecified. You don't have to do anything.

SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.

This topic is closed to new replies.
