[Design] Meshes and Vertex buffers




I'm currently building a stateful, render-queue-based, graphics-API-independent system. Here are the basics:


Given a Scene with some renderable meshes (later with space partitioning). The scene itself is just a "container" for the renderables and other objects (cameras, sounds, etc.). There is also a Renderer base class; DeferredRenderer, ForwardRenderer, etc. inherit from it. Each renderer instance can read the scene data but is not allowed to modify it. The renderer is the one that makes the actual graphics calls: set shader, set parameters, issue draw calls, and so on. These calls are stored in a simple linear list and sent to the graphics API for rendering.


So for each frame:

1) The scene collects all visible meshes into a list (from scratch, but with some pooling to avoid memory allocation/deallocation)

1.1) There are at least 2 lists: 1 for opaque and 1 for transparent meshes.

2) Each renderer has its own list of visible meshes which are acquired from the scene

2.1) The renderer sorts that list based on its own needs

2.2) For example, the transparent renderer sorts the list based on distance only, while the DeferredRenderer sorts based on material, etc.

3) Then the renderer sets global graphics state (like the GBuffer shader and its parameters) <-- this is one reason why it's not stateless (there's a nice article about a stateless renderer)

4) The renderer iterates over the sorted list and inserts the graphics calls into a RenderQueue

5) The render queue is sent to the graphics API.
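As a rough illustration, the per-frame flow above (steps 1-5) might be sketched like this; all names here (VisibleMesh, RenderCommand, the sort functions) are hypothetical, not taken from the actual engine:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// One entry in the scene's per-frame visibility list (step 1).
struct VisibleMesh {
    uint32_t materialId;
    float    distanceToCamera;
};

// One entry in the linear render queue (steps 4-5).
struct RenderCommand {
    uint32_t meshIndex; // index into the sorted visibility list
};

// Step 2.2: the deferred renderer sorts by material to minimize state changes.
inline void sortForDeferred(std::vector<VisibleMesh>& list) {
    std::sort(list.begin(), list.end(),
              [](const VisibleMesh& a, const VisibleMesh& b) {
                  return a.materialId < b.materialId;
              });
}

// Step 2.2: the transparent renderer sorts back-to-front by distance.
inline void sortForTransparency(std::vector<VisibleMesh>& list) {
    std::sort(list.begin(), list.end(),
              [](const VisibleMesh& a, const VisibleMesh& b) {
                  return a.distanceToCamera > b.distanceToCamera;
              });
}

// Step 4: walk the sorted list and emit commands into a linear queue.
inline std::vector<RenderCommand> buildQueue(const std::vector<VisibleMesh>& list) {
    std::vector<RenderCommand> queue;
    queue.reserve(list.size());
    for (std::size_t i = 0; i < list.size(); ++i)
        queue.push_back({static_cast<uint32_t>(i)});
    return queue;
}
```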


This could be done better probably, but I like this approach, and I will see if it's viable or not.


The Actual Question


However, my main problem is with the actual meshes and vertex/index buffers. A long time ago I created one vertex buffer per mesh (where a mesh means a collection of vertices (an array of structs) and indices), and that was it. But static (and dynamic) batching sounds cool, and one buffer per mesh isn't the best solution anyway.


For now I have:

MeshVertex struct which contains every possible vertex attribute (position, normal, texcoord, etc.)

Mesh class with the following members:

- list of vertices (MeshVertex)

- list of indices

- flags for each vertex attribute: each attribute can be marked as NotUsed or Used, and some (normals and tangents) can be calculated
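The Mesh description above could be sketched roughly like this (the names and the exact flag values are my own assumptions, not the poster's code):

```cpp
#include <cstdint>
#include <vector>

// Per-attribute usage flag, as described in the post.
enum class AttributeState : uint8_t { NotUsed, Used, Calculated };

enum VertexAttribute : uint8_t {
    Attr_Position, Attr_Normal, Attr_Tangent, Attr_TexCoord, Attr_Count
};

// "Fat" vertex holding every possible attribute.
struct MeshVertex {
    float position[3];
    float normal[3];
    float tangent[3];
    float texcoord[2];
};

class Mesh {
public:
    std::vector<MeshVertex> vertices;
    std::vector<uint32_t>   indices;

    void setAttributeState(VertexAttribute a, AttributeState s) { states_[a] = s; }
    AttributeState attributeState(VertexAttribute a) const { return states_[a]; }

    bool isUsed(VertexAttribute a) const {
        return states_[a] != AttributeState::NotUsed;
    }

private:
    AttributeState states_[Attr_Count] = {}; // all NotUsed by default
};
```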


So my problems are:

- somehow I have to build vertex and index buffers <-- static and dynamic batching, but also handling multiple instances of the same mesh (without duplicating the vertex buffer) and the removal of a mesh.

- the buffers depend on the actual vertex data and the usage flags (I'm using interleaved arrays for vertex buffers) <-- the actual data stored in GPU memory is filled from the mesh data based on the usage flags. A VertexDeclaration is also created (and cached) which describes the attributes (offset, size, type, etc.)

- however, the shader determines the required vertex attributes <-- I'm not sure what happens when the shader tries to read an attribute that is not currently bound and set properly.

- some renderers (like the ShadowMapRenderer) do not need any attribute except the position, and a 2D renderer does not necessarily need positions as 3D vectors. <-- but if I create only one buffer (batched or not) for the meshes, every renderer has to use the same buffer(s).
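For the interleaving problem, deriving an interleaved layout from the usage flags might look roughly like this sketch (AttributeDesc and buildDeclaration are hypothetical names):

```cpp
#include <cstdint>
#include <vector>

// One attribute inside an interleaved vertex.
struct AttributeDesc {
    uint8_t  index;      // which attribute (position, normal, ...)
    uint16_t offset;     // byte offset inside one interleaved vertex
    uint8_t  components; // number of float components
};

struct VertexDeclaration {
    std::vector<AttributeDesc> attributes;
    uint16_t stride = 0; // size of one interleaved vertex in bytes
};

// componentCounts[i] is the float count of attribute i; used[i] says whether
// the mesh provides it. Only used attributes end up in the declaration, so
// the GPU-side layout follows directly from the usage flags.
inline VertexDeclaration buildDeclaration(const std::vector<uint8_t>& componentCounts,
                                          const std::vector<bool>& used) {
    VertexDeclaration decl;
    uint16_t offset = 0;
    for (std::size_t i = 0; i < componentCounts.size(); ++i) {
        if (!used[i]) continue;
        decl.attributes.push_back({static_cast<uint8_t>(i), offset, componentCounts[i]});
        offset += componentCounts[i] * sizeof(float);
    }
    decl.stride = offset;
    return decl;
}
```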


I know it's a bit of a long story, but I hope you can help me. :)

Edited by csisy


So, you're asking for architecture advice for your mesh implementation, right?


In my engine's model library I have, basically:


CVertexBuffer - topology type, strides, offsets, API specific data (this is an interface basically)

CIndexBuffer - format, index offset, API specific data

CMesh - vertices and faces

SHADER_PART (a simple structure) - vertex and index buffers, textures, colors

CDrawableMesh - shader parts, bounding volume(s)


Then I share a drawable mesh across mesh instances. I probably shouldn't prescribe an ideal actor hierarchy here because it varies from engine to engine, but the basic idea (at least for static meshes) is to cache drawable meshes, not graphics buffers (i.e. don't load them more than once!).
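That "cache drawable meshes, load them only once" idea could be sketched like this (DrawableMeshCache and the stripped-down CDrawableMesh are illustrative stand-ins, not the engine's actual code):

```cpp
#include <memory>
#include <string>
#include <unordered_map>

// Stand-in for the real drawable mesh (shader parts, bounding volumes, ...).
struct CDrawableMesh {
    std::string source; // asset path this mesh was loaded from
};

class DrawableMeshCache {
public:
    std::shared_ptr<CDrawableMesh> acquire(const std::string& path) {
        auto it = cache_.find(path);
        if (it != cache_.end())
            if (auto existing = it->second.lock())
                return existing; // already loaded: share it, don't reload

        auto mesh = std::make_shared<CDrawableMesh>(CDrawableMesh{path});
        cache_[path] = mesh;
        return mesh;
    }

private:
    // weak_ptr so a mesh can be freed once no instance references it
    std::unordered_map<std::string, std::weak_ptr<CDrawableMesh>> cache_;
};
```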


When creating vertex/index buffers for shader parts, just after loading the meshes from a stream at initialization, I pass a structure indicating the usage flags you're talking about.


- however the shader determines the required vertex attributes <-- I'm not sure what happens when the shader tries to read an attribute that is not currently bound and set properly.


Shader management has been discussed here quite a bit before; search the forums. Nevertheless, I don't feel prepared to point out any ultimate solution for a shader system. Sorry! But AFAIK, even in your intermediate graphics framework, you definitely need some kind of shader manager for each object type in your game (e.g. models, terrains, skeletons, etc.).


Hope that helps.


the shader determines the required vertex attributes <-- I'm not sure what happens when the shader tries to read an attribute that is not currently bound and set properly.

Yep. The shader determines a list of attributes that are required. The mesh gives you a list of attributes that exist.
Before you draw something, you need to resolve this by selecting the right VertexDeclaration/InputLayout/VAO config. Yep, each mesh may require more than one VertexDeclaration -- you need one for each pairing of shader attributes and buffer attributes.

If you can't find a valid VertexDeclaration (because the shader requires an attribute that doesn't exist), then announce loudly that there's an error in the data so that your content creators fix the data.
On the other hand, if an attribute exists, but isn't required by the shader, then it simply should not be present in the VertexDeclaration -- pick one that contains only the required attributes and leaves all others out.
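The resolve step can be sketched with attribute sets as bitmasks (the bit assignments are illustrative):

```cpp
#include <cstdint>

enum : uint32_t {
    ATTR_POSITION = 1u << 0,
    ATTR_NORMAL   = 1u << 1,
    ATTR_TEXCOORD = 1u << 2,
    ATTR_TANGENT  = 1u << 3,
};

// Returns the attribute set the chosen VertexDeclaration should contain, or 0
// to signal a content error (the shader needs something the mesh lacks).
inline uint32_t resolveDeclarationAttributes(uint32_t shaderRequires,
                                             uint32_t meshProvides) {
    if ((shaderRequires & meshProvides) != shaderRequires)
        return 0; // missing attribute: report it so content creators fix the data
    // Attributes the mesh has but the shader doesn't need are simply left out.
    return shaderRequires;
}
```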


some renderers (like ShadowMapRenderer) do not need any attribute except the position.

This is fine - the mesh will just use a different VertexDeclaration when it's used with this different shader.
You need a shader management system that bundles up many programs into one "effect". e.g. the Microsoft FX system allows you to create one "effect" file, which contains a forward-rendering technique and a shadow-mapping technique. Your material chooses which effect to use, and then your renderer chooses which technique to pick out of the effects -- which determines the attributes that are required, which determines the appropriate VertexDeclaration to pick for each mesh.
The optimal vertex layout will depend on which shaders it's used with. In this example, instead of interleaving the position and normal of every vertex in a single stream, you may want to lay the buffer out with all the positions packed contiguously and the normals in a separate stream.
That way the position-only shader reads a tightly packed stream and will be more optimal, and the normal shader will still perform fairly well.
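For instance, with a position and a normal attribute, the two layouts being contrasted might look like this (names assumed):

```cpp
#include <cstddef>

// Fully interleaved: position and normal alternate in one stream, so a
// position-only pass (e.g. shadow mapping) must skip over normals it never
// reads, wasting bandwidth and cache.
struct InterleavedVertex {
    float position[3];
    float normal[3];
};

// Split streams: all positions packed together, normals in a second stream.
// The position-only pass now reads a dense 12-byte-stride array, while a
// shader that needs both attributes simply binds both streams.
struct SplitStreams {
    float* positions; // N * 3 floats, stride 12 bytes
    float* normals;   // N * 3 floats, stride 12 bytes
};
```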

Thanks for your comments, they were really helpful! :)
At first I thought I would have to hard-code this, but it's actually trivial to solve. Here is the plan:
The Mesh class remains a description of the mesh data (i.e. vertices and indices), nothing more.
The VertexDeclaration is a simple "container" class which holds information about the actual vertex data layout: offset, size, type, and stride for each attribute.
I will create a new class which describes the required vertex attributes for a shader. Note that this is similar to the VertexDeclaration, but the actual layout is not important here, so it contains only the size (or number of components) and the type of each attribute.
The MeshRenderer component (similar to Unity's MeshRenderer) is responsible for creating the vertex and index buffers based on the Mesh data and the attributes required by the shader assigned to the mesh.
I will probably create a new DrawableMesh class (similar to Irlan Robson's solution) which holds references (pointers) to the Mesh and the buffers. This way, when I load a mesh from storage (i.e. as an asset), I create a single shareable DrawableMesh object --> no duplicates for assets. If a user creates a mesh programmatically, the user is responsible for avoiding duplication.
- A vertex buffer can always be created with the required attributes, so the driver won't crash on this. (The shader won't work properly because of the invalid data, but hey... it's working!)
- If more than 2-3 types of vertex input exist, the memory requirement grows. But is this a real problem? I will probably have 2 "declarations" per renderer (one for static and one for animated meshes).
- The mesh contains all of the vertex information whether it's used or not (another memory drawback).
Any other thoughts?
This is a bit out-of-topic, but...
I've read the presentation you shared (Designing a Modern GPU Interface) and I understand the basics; however, a complete fx-system seems like a big task. For me (at least for now...) only one vertex and one fragment/pixel shader are linked together.
When the user defines a new material (like in UE4), it actually creates a new shader pair (vertex and fragment shaders). These shaders are loaded and linked together into an effect twice: once for static and once for animated meshes.
I would have questions about this, but it's really out of the scope of this topic. If I reach that problem and can't find a solution, I'll create a new topic.
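That "one material, two linked variants" idea might be sketched like this (the class names and the skinning define are assumptions, not the actual implementation):

```cpp
#include <string>
#include <unordered_map>

enum class MeshType { Static, Animated };

// One compiled+linked variant of the material's shader pair.
struct LinkedEffect {
    std::string defines; // preprocessor defines the variant was built with
};

struct Material {
    std::string vertexSource;
    std::string fragmentSource;
    std::unordered_map<MeshType, LinkedEffect> variants;

    // Link the same vertex/fragment pair twice: the animated variant would
    // typically add skinning inputs via a define.
    void link() {
        variants[MeshType::Static]   = {""};
        variants[MeshType::Animated] = {"#define SKINNED 1"};
    }
};
```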
Edited by csisy
