#1
To address the elephant in the room, “modern rendering” excludes the possibility of using OpenGL. Step 1 is to use Metal, Vulkan, or Direct3D 12.
Although you can design a modern workflow around almost any API, it certainly helps to use a modern one as a guideline so that you properly build your renderer around the use of command buffers.
#2
The next issue to address is your use of pure virtual interfaces in certain places (here I am talking about index buffers, the render device, textures, etc., not about models/meshes/other high-level cases, which I address in #4). They achieve nothing unless you need to change the API at run-time, which you never will. On any given platform you either must use a single API or you will settle on one anyway, because OpenGL is simply a bad idea on Windows®.
A better way is to have a base class, an API-specific class in the middle, and the actual class on top.
CIndexBufferBase
    ↳ CDirect3D12IndexBuffer  CVulkanIndexBuffer  CMetalIndexBuffer  COpenGlEs2IndexBuffer
        ↳ CIndexBuffer
CIndexBufferBase contains all data common to all forms of index buffers (such as how many indices there are, how many bytes per index, and optionally a CPU-side copy of the indices).
Each API class inherits from CIndexBufferBase and handles API-specific functionality, such as creating the index buffer and drawing with it.
CIndexBuffer inherits from one of the API classes depending on which macro is set.
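The three-level sandwich can be sketched as follows. The class names mirror those above, while the configuration macros (LSG_DX12, etc.) and the method bodies are illustrative placeholders:

```cpp
#include <cstddef>

// Lowest level: data common to every API.
class CIndexBufferBase {
public:
    std::size_t IndexSize() const { return m_sIndexSize; }
    std::size_t TotalIndices() const { return m_sTotalIndices; }
protected:
    std::size_t m_sIndexSize = 0;       // Bytes per index (2 or 4).
    std::size_t m_sTotalIndices = 0;    // Number of indices in the buffer.
};

// Middle level: one class per API, each providing CreateIndexBufferApi().
// The bodies here are stubs standing in for the real API calls.
class CVulkanIndexBuffer : public CIndexBufferBase {
protected:
    bool CreateIndexBufferApi() { /* vkCreateBuffer(), etc. */ return true; }
};
class CDirect3D12IndexBuffer : public CIndexBufferBase {
protected:
    bool CreateIndexBufferApi() { /* ID3D12Device::CreateCommittedResource(), etc. */ return true; }
};

// Top level: CIndexBuffer derives from exactly one API class, selected at
// compile time.  The macro names are hypothetical.
#if defined( LSG_DX12 )
class CIndexBuffer : public CDirect3D12IndexBuffer {};
#else   // Defaulting to Vulkan purely for this sketch.
class CIndexBuffer : public CVulkanIndexBuffer {};
#endif
```

Because `CreateIndexBufferApi()` is resolved at compile time through the inheritance chain rather than through a vtable, the call costs nothing over a direct function call.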
bool CIndexBuffer::CreateIndexBuffer( const void * _pvIndices, size_t _sSizeOfIndices, size_t _sTotalIndices ) {
    // Error-checking (bad pointers, bad index sizes, etc.)

    // Copy data into members provided by CIndexBufferBase.
    m_sIndexSize = _sSizeOfIndices;
    m_sTotalIndices = _sTotalIndices;
    // Etc.

    // Call the API-specific creation function (no need to pass data; it can
    //  access m_sIndexSize, m_sTotalIndices, etc.)
    if ( !CreateIndexBufferApi() ) { return false; }

    // Anything else.  Clean-up, etc.
    return true;
}
Each of the API-specific classes implements CreateIndexBufferApi(), and there is no need for virtual interfaces at all.
#3
As for what the rendering module does, it provides these types of classes (index buffers, textures, vertex buffers, shaders, samplers, render-queues, etc.) and a wrapper interface for performing draw commands (set culling, set render targets, draw, etc.).
The last thing you want to do is make your renderer aware of models, terrain, etc. Models, terrain, foliage, water, procedural clouds, 2D sprites, etc. all use the renderer to draw themselves.
That means they create the index buffers, vertex buffers, shaders, textures, any resources they need by themselves. They manage, update, and destroy these resources by themselves. They activate textures where they know they are needed.
This only makes sense. Having a centralized location (a renderer module) trying to manage how all of these types of objects render is a gross violation of the single-responsibility principle and invariably leads to monolithic spaghetti code.
The renderer module is low-level. Everything can access it and do what it wants. Its only job is to provide a universal interface so that models, terrain, etc. don’t have to worry about which API is being used.
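To illustrate the ownership model, here is a minimal sketch of a high-level object creating and owning its own GPU resources. The `CTerrainPatch` class and the trivial stand-in for the renderer’s `CIndexBuffer` are hypothetical, reduced to just enough to show the pattern:

```cpp
#include <cstddef>
#include <cstdint>

// Trivial stand-in for the renderer module's index-buffer type.
class CIndexBuffer {
public:
    bool CreateIndexBuffer( const void * _pvIndices, std::size_t _sSizeOfIndices, std::size_t _sTotalIndices ) {
        // Error-checking: valid pointer, 16- or 32-bit indices, non-zero count.
        if ( !_pvIndices || (_sSizeOfIndices != 2 && _sSizeOfIndices != 4) || !_sTotalIndices ) { return false; }
        m_sTotalIndices = _sTotalIndices;
        return true;    // Real version would call CreateIndexBufferApi() here.
    }
    std::size_t TotalIndices() const { return m_sTotalIndices; }
private:
    std::size_t m_sTotalIndices = 0;
};

// The terrain object, not the renderer, creates, owns, and destroys its buffers.
class CTerrainPatch {
public:
    bool CreateRenderResources() {
        // Two triangles of a patch quad; the terrain knows its own geometry.
        static const uint16_t ui16Indices[] = { 0, 1, 2, 2, 1, 3 };
        return m_ibIndices.CreateIndexBuffer( ui16Indices, sizeof( uint16_t ), 6 );
    }
    const CIndexBuffer & IndexBuffer() const { return m_ibIndices; }
private:
    CIndexBuffer m_ibIndices;   // Owned by the terrain, destroyed with it.
};
```

The renderer module never knows a “terrain patch” exists; it only hands out buffer objects and executes draw commands.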
#4
Finally, there is the high-level flow of the engine, which necessarily involves other modules besides the rendering module.
All objects in the scene are in an array inside the scene manager. The scene manager does exactly what it says. It manages objects and passes data around to different modules so that physics can run, rendering can happen, etc. It lives inside the engine module itself (the highest-level module in an engine).
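A minimal sketch of that arrangement follows; `CEntity` and `CSceneManager` are illustrative stand-ins, not a prescribed API:

```cpp
#include <cstddef>
#include <memory>
#include <utility>
#include <vector>

// Stand-in for anything that can live in the scene (models, terrain, lights…).
class CEntity {
public:
    virtual ~CEntity() {}
    virtual void Tick( double /*_dTime*/ ) {}   // Per-frame logic/physics hook.
};

// The scene manager owns the scene's objects and passes them to other
//  modules (physics, rendering, etc.) each frame.
class CSceneManager {
public:
    void AddObject( std::shared_ptr<CEntity> _pEnt ) {
        m_vObjects.push_back( std::move( _pEnt ) );
    }
    void Update( double _dTime ) {
        for ( auto &pObj : m_vObjects ) { pObj->Tick( _dTime ); }
    }
    std::size_t TotalObjects() const { return m_vObjects.size(); }
private:
    std::vector<std::shared_ptr<CEntity>> m_vObjects;   // All objects in the scene.
};
```

Note that this is the one place where run-time polymorphism earns its keep: the scene manager genuinely does not know what kind of object it is ticking.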
When it is time to draw, the scene manager may do different things for different types of objects (terrain culling and rendering is vastly different from models and foliage, for example), but for now we will only focus on rendering models.
It gathers a list of meshes (a model is composed of multiple meshes) by traversing your world’s spatial partitioning scheme (typically an octree) with the camera frustum. The objects in this list may get a “pre-draw” command to allow them to prepare for rendering. This could be executed on a separate thread while the scene manager continues preparing.
Each mesh may require multiple draw calls to render (different materials on a single mesh, multiple layers, etc.). The scene manager goes over each mesh and passes it 2 render-queues (one for opaque, one for translucent). Each mesh knows how many passes it takes to render itself, so it adds as many render-queue items to the queues as needed. Each item has a shader ID, base texture ID, distance from the camera, etc. (anything useful for sorting). There are many topics on this site about how to use render-queues.
The render-queues are sorted, and then the scene manager goes over the opaque queue first and then the translucent one. The meshes are then told to render each submission they made to the render-queue in the now-sorted order. This means the meshes set their own vertex/index buffers, textures, shaders, etc. This ensures that you can have objects rendering in wildly different ways (water, terrain, clouds, meshes, impostors, foliage, volumetric fog, etc.).
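As a sketch of what a queue item and the two sort orders might look like. The field names and sort keys are illustrative; many engines pack these keys into a single integer for speed:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// One submission per pass; the fields are examples of useful sort keys.
struct LSG_RENDER_QUEUE_ITEM {
    uint32_t    ui32ShaderId;   // Shader used by this pass.
    uint32_t    ui32TextureId;  // Base texture used by this pass.
    float       fDistSq;        // Squared distance from the camera.
    void *      pvMesh;         // The mesh that will issue the actual draw.
    uint32_t    ui32Pass;       // Which of the mesh's passes this item is.
};

// Opaque: group by shader, then texture, to minimize state changes;
//  front-to-back within a group helps early-Z rejection.
inline bool OpaqueLess( const LSG_RENDER_QUEUE_ITEM &_riA, const LSG_RENDER_QUEUE_ITEM &_riB ) {
    if ( _riA.ui32ShaderId != _riB.ui32ShaderId ) { return _riA.ui32ShaderId < _riB.ui32ShaderId; }
    if ( _riA.ui32TextureId != _riB.ui32TextureId ) { return _riA.ui32TextureId < _riB.ui32TextureId; }
    return _riA.fDistSq < _riB.fDistSq;
}

// Translucent: strictly back-to-front so that blending composites correctly.
inline bool TranslucentLess( const LSG_RENDER_QUEUE_ITEM &_riA, const LSG_RENDER_QUEUE_ITEM &_riB ) {
    return _riA.fDistSq > _riB.fDistSq;
}
```

After `std::sort()` with the appropriate comparator, the scene manager simply walks each queue in order and asks the mesh stored on each item to render that pass; the mesh binds its own resources and draws.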
In this case, meshes etc. are using virtual functions (to address your specific usage of virtual functions).
Don’t break your design by focusing too heavily on cache locality and the like. The mesh structure you proposed, even at its small size, already has a flaw: the world matrix of an object is not related to the mesh. A mesh is for rendering. The world matrix is only borrowed for rendering; it is also used for physics, etc. Objects can have world matrices and not be renderable items. Your proposal implies that in order to exist in the world an object must also have a vertex and index buffer, which is simply not the case.
It is much more important for objects to have good logical design and connections with each other than to have better cache utilization.
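A minimal sketch of that separation, with illustrative names:

```cpp
// The world transform lives on the scene object, not on the mesh.
struct CMatrix4x4 { float m[16]; };

class CMesh;    // Render-only data: vertex/index buffers, materials.  No world matrix.

// Anything that exists in the world has a transform, renderable or not.
class CEntity {
public:
    const CMatrix4x4 & WorldMatrix() const { return m_mWorld; }
protected:
    CMatrix4x4 m_mWorld {};             // Used by physics, audio, AI, and rendering alike.
};

// A renderable entity references mesh data and lends it the matrix at draw time…
class CModelInstance : public CEntity {
public:
    CMesh * m_pmMesh = nullptr;         // Shared render data; not owned per-instance.
};

// …but an entity need not be renderable at all.
class CTriggerVolume : public CEntity {};
```

The mesh stays a pure rendering resource, and a `CTriggerVolume` can exist in the world with a transform but no buffers, which the proposed flat structure could not express.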
L. Spiro