People tend to understand the limits of physics libraries very well because there are so many examples. Bullet Physics, for example, is used by many and we use it at work.
The graphics library sits at exactly the same level as the physics library. If the physics library required knowledge of what a model was, Bullet Physics would not be very popular, would it?
It doesn’t make sense for a graphics library to have any knowledge whatsoever of models.
A basic search would have revealed that I answered a similar question in very thorough detail recently enough that it is still on the first page of this section as of this writing.
I answered in such detail precisely so that it could be a fully citable source when similar questions arose, so here it is: Game Engine Layout
“I have a render manager that will draw all my objects. That way I can sort them by shader, texture etc so that there are less state switches on the graphics card.”
Why don’t you sort them anyway, without having the “render manager”?
Just because you sort by shader, textures, and depth does not mean your graphics library needs to know anything about what a model is. At best you’ve just described a scenario in which it needs a u32 for a shader ID, a u32 for texture IDs, and an f32 for depth.
Why know about a whole model?
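To make that concrete, here is a minimal sketch (all names hypothetical, not from any particular engine) of those three plain values being packed into a single 64-bit sort key, with the shader ID in the highest bits because shader switches are assumed to be the most expensive state change:

```cpp
#include <cstdint>

// Hypothetical sketch: everything the sort needs is a u32, a u32, and an f32.
// Packing them into one 64-bit key makes sorting a single integer comparison.
struct RenderKey {
    // Depth is assumed normalized to [0,1] and is quantized to 16 bits.
    static std::uint64_t Pack( std::uint32_t shaderId, std::uint32_t textureId, float depth ) {
        std::uint64_t d = static_cast<std::uint64_t>( depth * 65535.0f ) & 0xFFFFu;
        return (static_cast<std::uint64_t>( shaderId & 0xFFFFFFu ) << 40)    // bits 40–63
             | (static_cast<std::uint64_t>( textureId & 0xFFFFFFu ) << 16)   // bits 16–39
             | d;                                                            // bits 0–15
    }
};
```

Nothing in this key knows, or needs to know, what a model is.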
This is a hefty violation of the Single Responsibility Principle.
The SceneManager is the only thing that has such a high-level view of the…scene.
It has a list of every object in the scene and is the only place where culling can take place.
That does not mean the function to perform culling belongs to the scene manager. Think about it. Culling can be done for other reasons, and is also utilized by physics.
It is possible for the scene manager to pull the strings overall but still delegate certain tasks to other sub-systems lower down.
It is exactly the same with render queues.
There is no render manager. The graphics library facilitates rendering, not manages it.
The scene manager gathers the required information on each object in the scene and sends that off to a render-queue object. This is a simple utility class that may be provided by the graphics library, but that does not mean the graphics library has to pull the strings to make it work. You don’t need a render manager to make a simple RenderQueue class work. All it does is sort. Why would you need a RenderManager class for that?
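A minimal sketch of such a RenderQueue (names are illustrative, not an actual engine API): it holds items, it sorts them by shader, then texture, then depth, and that is all it does. No manager anywhere.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical item the scene manager fills out for each visible object.
struct RenderItem {
    std::uint32_t shaderId;
    std::uint32_t textureId;
    float         depth;      // eye-space depth for front-to-back ordering
    void *        pvUserData; // opaque handle back to whatever submitted the item
};

class RenderQueue {
public:
    void Add( const RenderItem &item ) { m_items.push_back( item ); }

    // All it does is sort: by shader, then texture, then depth.
    void Sort() {
        std::sort( m_items.begin(), m_items.end(),
            []( const RenderItem &a, const RenderItem &b ) {
                if ( a.shaderId != b.shaderId )   { return a.shaderId < b.shaderId; }
                if ( a.textureId != b.textureId ) { return a.textureId < b.textureId; }
                return a.depth < b.depth;
            } );
    }

    const std::vector<RenderItem> & Items() const { return m_items; }

private:
    std::vector<RenderItem> m_items;
};
```

The scene manager adds items and calls Sort(); the queue never asks what kind of object an item came from.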
I won’t pick anyone else out of the crowd because this applies to most of the other replies (not all of them, and even where I disagree with a reply on this point it does not mean I disagree with that reply entirely).
Once again: Graphics libraries are at the same level as the physics library and have no business knowing what a model is.
I pick option #3: Let them work together to create a final render.
Think sternly and resolutely about the idea that the graphics library is at exactly the same level as the physics library.
How do we make the physics library work without knowledge of models, terrain, etc.?
By letting a model store the things the physics engine needs to know and then feeding only
those things to the physics engine.
That means positions, velocities, collision geometry, mass, etc.
How does that information get into the physics engine?
By having a higher-level “scene manager” run over each object and gather that information into structures, not by sending actual models to the physics engine.
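A minimal sketch of that gathering step, assuming a plain descriptor structure (all names here are hypothetical): the model copies out only what the physics engine understands, and no model pointer ever crosses the boundary.

```cpp
#include <vector>

// Only plain values the physics library understands: positions, velocities,
// mass, and a handle to collision geometry registered earlier.
struct PhysicsBodyDesc {
    float position[3];
    float velocity[3];
    float mass;
    int   collisionShapeId;
};

// The model knows its own physics-relevant data and fills out the structure;
// it never hands itself to the physics engine.
struct Model {
    float x, y, z;
    float mass;
    int   shapeId;
    PhysicsBodyDesc FillPhysicsDesc() const {
        return PhysicsBodyDesc{ { x, y, z }, { 0.0f, 0.0f, 0.0f }, mass, shapeId };
    }
};

// Scene-manager side: run over each object and gather the descriptors.
std::vector<PhysicsBodyDesc> GatherPhysicsData( const std::vector<Model> &scene ) {
    std::vector<PhysicsBodyDesc> descs;
    descs.reserve( scene.size() );
    for ( const Model &m : scene ) { descs.push_back( m.FillPhysicsDesc() ); }
    return descs;
}
```

The vector of descriptors is what gets fed to the physics engine; the engine stays as model-ignorant as Bullet Physics is.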
A graphics library may not know what a mesh is, but it understands what render states, vertex buffers, textures, shaders, and index buffers are.
So having a mesh fill out a structure full of render states, texture pointers, vertex-buffer pointers, etc., to be fed into the graphics engine, which then sets all of those states and applies the textures, vertex buffers, index buffers, and shaders, is an abstract means by which the two libraries can work together.
The model library still knows about the graphics library because it needs to know how it itself needs to be rendered. Whether it performs the actual render or not is beside the point, because a render can be performed with only the elements found inside the graphics library, without the graphics library needing to know anything about the model being rendered. So it is obvious that the model library should sit above the graphics library, and the graphics library should have no clue what a model is.
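A minimal sketch of that hand-off, with all names hypothetical: the mesh describes itself entirely in the graphics library’s vocabulary, so the structure it fills out never mentions a mesh or a model.

```cpp
#include <cstdint>

// Everything here is something the graphics library already understands.
struct RenderSubmission {
    std::uint32_t renderStates;  // bit flags: depth test, blending, culling, ...
    std::uint32_t shaderId;
    const void *  pVertexBuffer; // opaque handles owned by the graphics library
    const void *  pIndexBuffer;
    const void *  pTexture;
};

// The mesh (one level up) fills out the structure in the graphics library's terms.
struct Mesh {
    std::uint32_t states;
    std::uint32_t shader;
    const void *  pVb;
    const void *  pIb;
    const void *  pTex;
    RenderSubmission FillSubmission() const {
        return RenderSubmission{ states, shader, pVb, pIb, pTex };
    }
};
```

The dependency points only one way: the mesh knows about RenderSubmission, but nothing in RenderSubmission knows about Mesh.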
Why is it so important for a graphics engine not to know about models?
I mentioned this already, but you can’t forget that if you are using GeoClipmap terrain, vegetation, building interiors, volumetric fog, etc., each of these things has a very high-level way of being rendered unique to itself, and trying to encompass all of that in a single “RenderManager” object is pure insanity.
So in the end, no matter what, there has to be communication between libraries, and there absolutely must be clear separation between low-level functionality, middle-level functionality, and high-level functionality.
The graphics library provides the lowest-level functionality as well as middle-level functionality such as render queues.
The models, terrain, vegetation, volumetric fog, skyboxes, etc., consume the middle level.
The highest-level sector is the scene manager, which talks to the models, terrain, etc., asking them how the high-level processing should proceed. For example, with reflections enabled, a model may request that the scene manager prepare a specific cube map as a render target, along with properties specific to that type of render (such as excluding the model itself from that render).
Likewise, GeoClipmap terrain may request a series of render-targets and shader swaps to perform not only the rendering but the other GPU processing it needs to do as well.
Volumetric fog requires multiple passes that only the volumetric fog itself knows how to order.
In other words, all of these middle-level objects are communicating with the high-level scene manager to set up the rendering process.
The high-level scene manager borrows from the middle-level area of the graphics library to sort render queues, and the middle-level objects use their knowledge of the graphics engine to create structures that the graphics engine can use to set states, textures, shaders, etc.
Ultimately, they are all working together.
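One way this negotiation could look, as a sketch (the interface and names are my own illustration, not a real engine API): before the main render, the scene manager asks each object what extra passes it needs, and each object answers in the graphics library’s vocabulary, not its own.

```cpp
#include <vector>

// A pass an object requests from the scene manager ahead of the main render.
struct PassRequest {
    enum Type { CUBE_MAP, RENDER_TARGET } type;
    const void * pExclude; // e.g. a reflective model excludes itself from its own cube-map pass
};

struct SceneObject {
    virtual ~SceneObject() {}
    // Each object describes the high-level passes it needs; it does not run them.
    virtual void RequestPasses( std::vector<PassRequest> & ) const {}
};

// A model with reflections enabled asks for a cube-map render, excluding itself.
struct ReflectiveModel : SceneObject {
    void RequestPasses( std::vector<PassRequest> &requests ) const override {
        requests.push_back( PassRequest{ PassRequest::CUBE_MAP, this } );
    }
};
```

GeoClipmap terrain or volumetric fog would answer the same question with their own lists of render-targets and passes; the scene manager schedules them all without knowing how any one of them renders.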
The answer is not as simple as #1 or #2. It’s #3. When you can’t decide between two choices, it is most often because you did not consider #3.
For a graphics library to know what a model is is an absolute fallacy.