I'm currently building a stateful, render-queue-based, graphics-API-independent rendering system. Here are the basics:
Given a Scene with some renderable meshes (later with space partitioning). The scene itself is just a "container" for the renderables and other types (cameras, sounds, etc.). Also given a Renderer base class; DeferredRenderer, ForwardRenderer, etc. inherit from it. Each renderer instance can read the scene data but is not allowed to modify it. The renderer is the one that makes the actual graphics calls, like set shader, set parameters, draw call, etc. These calls are stored in a simple linear list and sent to the graphics API for rendering.
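To make the Scene/Renderer split concrete, here is a minimal sketch of what I mean. All class and member names beyond Scene, Renderer, ForwardRenderer, and DeferredRenderer are my own placeholders; the "read but not modify" rule is enforced simply by giving the renderer a const reference:

```cpp
#include <string>
#include <vector>

// Placeholder renderable; a real one would hold mesh/material handles.
struct Renderable {
    std::string name;
    bool transparent = false;
};

// The scene is just a container for renderables (and cameras, sounds, ...).
class Scene {
public:
    void add(const Renderable& r) { renderables_.push_back(r); }
    const std::vector<Renderable>& renderables() const { return renderables_; }
private:
    std::vector<Renderable> renderables_;
};

// Base renderer: holds a const reference, so it can read the scene
// data but the compiler rejects any attempt to modify it.
class Renderer {
public:
    explicit Renderer(const Scene& scene) : scene_(scene) {}
    virtual ~Renderer() = default;
    virtual void render() = 0;
protected:
    const Scene& scene_;
};

class ForwardRenderer : public Renderer {
public:
    using Renderer::Renderer;
    void render() override { /* issue graphics calls here */ }
};
```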
So for each frame:
1) The scene collects all visible meshes into a list (rebuilt from scratch each frame, but with some pooling to avoid memory allocation/deallocation)
1.1) There are at least 2 lists: 1 for opaque and 1 for transparent meshes.
2) Each renderer has its own list of visible meshes, acquired from the scene
2.1) The renderer sorts that list based on its own needs
2.2) For example, the transparent renderer sorts the list based on distance only, while the DeferredRenderer sorts based on material, etc.
3) Then the renderer sets global graphics states (like the GBuffer shader and its parameters) <-- this is one reason the system is not stateless (nice article here for a stateless renderer)
4) The renderer iterates over the sorted list and inserts the graphics calls into a RenderQueue
5) The render queue is sent to the graphics API.
This could probably be done better, but I like this approach, and I will see whether it's viable or not.
The Actual Question
However, my main problem is with the actual meshes and vertex/index buffers. A long time ago I created one vertex buffer per mesh (where a mesh means a collection of vertices (an array of structs) and indices), and that was it. But static (and dynamic) batching sounds appealing, and one buffer per mesh is not the best solution anyway.
For now I have:
A MeshVertex struct which contains every possible vertex attribute (position, normal, texcoord, etc.)
A Mesh class with the following members:
- list of vertices (MeshVertex)
- list of indices
- flags for each vertex attribute: an attribute can be marked as NotUsed or Used, and some (the normals and tangents) can be Calculated
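In code, the current setup looks roughly like this (a sketch; the enum and member names are my shorthand, and the "fat" MeshVertex is trimmed to three attributes):

```cpp
#include <cstdint>
#include <vector>

// One struct holding every possible attribute; most meshes use a subset.
struct MeshVertex {
    float position[3];
    float normal[3];
    float texcoord[2];
    // ... tangent, color, etc.
};

// Per-attribute state: unused, supplied by the asset, or derivable
// (normals/tangents can be calculated from the geometry).
enum class AttribState : uint8_t { NotUsed, Used, Calculated };

enum Attrib { Position = 0, Normal, TexCoord, AttribCount };

class Mesh {
public:
    std::vector<MeshVertex> vertices;
    std::vector<uint32_t>   indices;
    AttribState flags[AttribCount] = {};  // all NotUsed by default

    bool uses(Attrib a) const { return flags[a] != AttribState::NotUsed; }
};
```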
So my problems are:
- somehow I have to build vertex and index buffers <-- static and dynamic batching, but also handling multiple instances of the same mesh (without duplicating the vertex buffer) and the removal of a mesh.
- the buffers depend on the actual vertex data and the usage flags (I'm using interleaved arrays for vertex buffers) <-- the actual data stored in GPU memory is filled from the mesh data based on the usage flags. A VertexDeclaration is also created (and cached) which describes the attributes (offset, size, type, etc.)
- however, the shader determines the required vertex attributes <-- I'm not sure what happens when the shader tries to read an attribute which is not currently bound and set properly.
- some renderers (like a ShadowMapRenderer) do not need any attribute except the position, and a 2D renderer does not necessarily need positions as 3D vectors. <-- but if I create only 1 buffer (batched or not) for the meshes, every renderer has to use the same buffer(s).
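To show what I mean by deriving the interleaved layout from the usage flags, here is a sketch of building a VertexDeclaration. The function name and the size/flag vectors are my placeholders; the point is that skipping unused attributes naturally yields e.g. a position-only declaration for a shadow-map pass:

```cpp
#include <cstdint>
#include <vector>

// One attribute inside an interleaved vertex.
struct VertexElement {
    int      attrib;  // which attribute (position, normal, ...)
    uint32_t offset;  // byte offset inside one interleaved vertex
    uint32_t size;    // byte size of the attribute
};

// Describes the interleaved layout; this is what gets cached.
struct VertexDeclaration {
    std::vector<VertexElement> elements;
    uint32_t stride = 0;  // total bytes per interleaved vertex
};

// attribSizes[i] is the byte size of attribute i; used[i] says whether
// the mesh/shader combination needs it. Unused attributes are skipped,
// so offsets and stride only cover what is actually uploaded.
VertexDeclaration buildDeclaration(const std::vector<uint32_t>& attribSizes,
                                   const std::vector<bool>& used) {
    VertexDeclaration decl;
    for (std::size_t i = 0; i < attribSizes.size(); ++i) {
        if (!used[i]) continue;
        decl.elements.push_back({static_cast<int>(i), decl.stride, attribSizes[i]});
        decl.stride += attribSizes[i];
    }
    return decl;
}
```

A shadow-map renderer would pass `used = {true, false, false, ...}` and get a tightly packed position-only layout, which is exactly where the "one buffer for everyone" question bites.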
I know it's a bit of a long story, but I hope you can help me.