Should game objects render themselves, or should an object manager render them?

betawarz    106

I'm learning how to program games using C++ and D3D11. I've got a basic 3D application that I refactored from one of the simple tutorials; the one with a single spinning cube. I'm about to write a basic object manager and pull the code for the cube out into an object class.


Now, I've been reading about object managers and game objects, etc. I've read two different takes on it.


  1. Each game object is responsible for updating and drawing itself. So each object has its own draw method that the object manager would call.
  2. Game objects only contain the information needed to draw them; the object manager handles the actual drawing logic using that information.

I'm familiar with the first method, because it's pretty simple to wrap my head around. I've seen it used in quite a few examples and stuff. I'm trying to grok the second way, though. Wouldn't the draw function of the object manager need to iterate over each object and do specific stuff depending on its type? This would result in a pretty massive draw function with a bunch of if-else branches based on the object type. Is that good? I know it consolidates all the drawing logic into one area instead of spreading it out over many different object classes, so I could see that being a bonus.


Just looking for some advice on which method to go with, I guess. Thanks!


The second method would actually use the same code for all objects. This works because, as a general rule, you end up doing exactly the same thing with every object, just changing a handful of values that you could very well specify as object properties.


Note this doesn't mean objects don't handle how they should look. Objects would update their properties as needed to manipulate how they look (in the same place where their logic is handled), and then the object manager simply goes through the visible objects and uses that data to draw them.

Megahertz    286

I've been looking to cross this bridge as well. Previously all my projects have used the "object draws itself" method, and I've been trying to wrap my head around a way to get out of that into something more flexible.


I get the basic concepts, but when you throw in shaders (which I have little to no experience with) I'm at a loss as to how to structure things.

CJThomas    533

I have a render manager that will draw all my objects.  That way I can sort them by shader, texture, etc., so that there are fewer state switches on the graphics card.


Any object that is drawn has a render component that holds all the required information.  The render system goes over all objects that have a render component and creates a RenderItem for each one, containing a Mesh, Texture and Effect pointer along with a transformation matrix.  The render manager then sorts and draws all objects.


I am only doing basic sorting at the moment, but later, when I have more objects, I will implement culling, which will save time by not drawing objects that can't be seen.

nhatkthanh    334

Create a manager with various rendering buckets, based on materials, shaders, etc.  Then whichever bucket the object belongs to, add the object to that bucket (probably as some sort of reference or ID, as you might have another manager that manages all the objects).  When it's time to render, you can render all the objects from the different buckets.

Hawkblood    1018
Either method works fine on its own. It depends on how your program NEEDS to do it. My current project absolutely needs a controlled render section where all the objects are rendered in a specific order. I have various distances that have different projection matrices applied to them, clearing the z-buffer each time to keep objects from overlapping. I will eventually render other scenes that I imagine will require the "object draw" method, like skinned meshes. Use a method that works for your application.

AvengerDr    751

my "engine" uses different object representations: object are added to the world using a scene graph but that is not used for rendering as it would not be the most efficient way. Rather, after the scene is complete, a "SceneManager" examines the graph and computes the most efficient way to render it. As it has been said, objects are grouped according to materials, geometry used, rendering order and other properties. This scene manager returns a list of "commands" that the rendering loop executs. Commands can be of various types, i.e.: generate a shadow map, activate blending, render objects and so on. 


Another thing that I've been doing is separating the object class from the geometry class. In my engine, the object represents the high-level properties of a mesh such as its local position, rotation, etc. (local because the absolute values are obtained according to the scene graph). Whereas the geometry class contains the actual vertex/index buffers. There is only one geometry instance for each unique 3D object in the world.


This helps further improve the rendering efficiency. After the objects have been grouped by material, I further group each of those groups according to the geometry used. Then for each Material/Geometry couple I issue a "Render Command" to render all the objects that use the same material and reference geometry. This way there will be only one setVB/IB command per group. This also helps with hardware instancing: if a material supports it, then I just use the list of object instances to compute an instance buffer.

Shannon Barber    1681

That depends on what "render themselves" means.

If you mean putting OGL/D3D calls right into an object::render method then no, that's a terrible idea.

That makes it too easy to break the graphics when you add a new object into the game.

I do it this way for simple things where performance is a non-issue and I'm in a hurry.

e.g. prototyping a replacement for Simulink. (It's 2D drawing; the most complex call in the graphics is turning on anti-aliased line drawing.)


If you mean putting abstracted graphics routines into the object then that's sub-optimal for performance but might be easy to code with.


To maximize performance, I believe you need to submit the graphics to the GPU in batches so that you keep the GPU and CPU both working simultaneously, and to maximize performance of the GPU you want to minimize rendering state changes.


Suppose you were making an RTS and 50 of the things on the screen were the same base tank model. You don't want to change all the states for every tank over and over again. It'd probably be better to draw all the faces with the same textures on all the tanks at the same time, then switch to the next texture. Given the small number of pixels the tanks would actually use up, it'd probably be just as fast to draw 50 tanks this way as 2 or 3 the previous way.


Some things, like mirror effects, have to be drawn last, so you can't even draw them correctly if you attempt to draw them "in line" as you traverse your spatial sorting. You have to queue them for later. Before custom shaders were the norm, we'd write "shader"-based rendering engines: we'd have graphics code that we called a "shader", and different parts of the various models would pick which shader drew that part of the model.

I would cull objects with a sphere tree, then submit their shaders to the renderer, which hash-sorted them based on their priority (a constant that is unique and part of the shader). The priorities were carefully picked to minimize renderer state changes and also guarantee the correct execution order for things like mirrors. With constant custom shaders this would let you draw everything that used the same shader all together, then switch to the next shader. This presumes it is unlikely that you would use the same textures with different shaders; if you're changing the shader, you're probably going to be changing textures as well. So I sorted by shader first, texture second.


When I say "sort" I do no mean an O(n²) sort. I mean something like hash or a priority-heap.

haegarr    7372

I second moving rendering outside the objects themselves. But an object manager (admittedly a term open to interpretation) is not the correct place either. Following the name "manager", it stands for being responsible for the lifetime of objects and perhaps related queries, but not also for rendering; that would violate the single-responsibility principle. However, the main reason is that "drawing" doesn't describe very well what's really going on when it comes to shading.


When looking at the problem without having a specific result in mind, one finds that there are various rendering methods that can be used. The keywords are forward shading, deferred shading, NPR, ..., with or without attributes like tiled, clustered, ... and so on. Additionally, special handling of transparency, mirroring, portals, overlays, ... is needed. According to the rendering method, one or more passes are needed, and image composition may be needed to compose several passes into the end result. Transparency and friends usually force a particular order of processing, perhaps adding more passes to run as well.


Optimizations like ordering RenderItems by their material (as an example) require knowledge of the rendering method. E.g. a depth-only pass is totally independent of material (leaving transparency aside). Instead, ordering front to back would be an optimization there, because of fill-rate reduction.


Whether one implements an engine with switchable rendering method, or else a game with a single method, is IMHO irrelevant at this point. The task of rendering is a stand-alone thing due to its inherent complexity as well as its self-evident meaning.


That said, my advice is to attach to the objects the bits of information describing what to render, and let the graphics sub-system do the rendering in the way (i.e. how) defined by the installed rendering pipeline (i.e. the implementation of a rendering method).

L. Spiro    25618
For a graphics library to know what a model is is an absolute fallacy.
People tend to understand the limits of physics libraries very well because there are so many examples. Bullet Physics, for example, is used by many and we use it at work.

The graphics library sits at exactly the same level as the physics library, so if the physics library had knowledge of what a model was then Bullet Physics would not be very popular, would it?
It doesn’t make sense for a graphics library to have any knowledge what-so-ever of models.

A basic search may have revealed that I answered a similar question in very thorough detail so soon enough ago that it is still on the first page of this section as of writing.

I answered in such detail then so that it could be a fully citable source when similar questions arose, so here it is:

Game Engine Layout


CJThomas wrote: "I have a render manager that will draw all my objects. That way I can sort them by shader, texture, etc., so that there are fewer state switches on the graphics card."

Why don’t you sort them anyway, without having the “render manager”?
Just because you sort by shader, textures, and depth does not mean your graphics library needs to know anything about what a model is. At best you've just described a scenario in which it needs a u32 for the shader ID, u32s for texture IDs, and an f32 for depth.

Why know about a whole model?

This is a hefty violation of the Single Responsibility Principle.

The SceneManager is the only thing that has such a high-level view of the…scene.
It has a list of every object in the scene and is the only place where culling can take place.
That does not mean the function to perform culling belongs to the scene manager itself. Think about it. Culling can be done for other reasons, and is also utilized by physics.
So clearly it is possible for the scene manager to be pulling the strings overall but still delegate certain tasks off to other sub-systems lower down.

It is exactly the same with render queues.
There is no render manager. The graphics library facilitates rendering, not manages it.

The scene manager gathers the required information on each object in the scene and sends that off to a render-queue object. This is a simple utility class that may be provided by the graphics library, but that does not mean the graphics library has to pull the strings to make it work. You don't need a render manager to make a simple RenderQueue class work. All it does is sort. Why would you need a RenderManager class for that to work?

I won’t pick anyone else out of the crowd because it applies to basically everyone else who replied (but not everyone, and even though I disagree with some replies on this point it does not mean I disagree with those replies entirely).
Once again: Graphics libraries are at the same level as the physics library and have no business knowing what a model is.

I pick option #3: Let them work together to create a final render.

Think sternly and resolutely about the idea that the graphics library is at exactly the same level as the physics library.
How do we make the physics library work without knowledge of models, terrain, etc.?

By letting a model store the things the physics engine needs to know and then feeding only those things to the physics engine.
That means positions, velocities, collision geometry, mass, etc.

How does that information get into the physics engine?
By having a higher-level “scene manager” run over each object and gather that information for each object into structures, not by sending actual models to the physics engine.

A graphics library may not know what a mesh is, but it understands what render states, vertex buffers, textures, shaders, and index buffers are.
So having a mesh fill out a structure full of render states, texture pointers, vertex-buffer pointers, etc., to be fed into the graphics engine, allowing the graphics engine to set all said states and apply textures, vertex buffers, index buffers, and shaders, is an abstract means by which the 2 libraries can work together.

The model library still knows about the graphics library because it needs to know how it itself needs to be rendered. Whether it performs the actual render or not is beside the point, because a render can be performed with only the elements found inside the graphics library, without the graphics library needing to know anything about the model being rendered. So it is obvious that the model library should be above the graphics library, and the graphics library should have no clue what a model is.

Why is it so important for a graphics engine not to know about models?
I mentioned this already, but you can’t forget that if you are using GeoClipmap terrain, vegetation, building interiors, volumetric fog, etc., each of these things has a very high-level way of being rendered unique to itself, and trying to encompass all that into a single “RenderManager” object is pure insanity.
So in the end, no matter what, there has to be communication between libraries, and there absolutely must be clear separation between low-level functionality, middle-level functionality, and high-level functionality.

The graphics library provides the lowest-level functionality as well as middle-level functionality such as render queues.
The models, terrain, vegetation, volumetric fog, skyboxes, etc., consume the middle level.
The highest-level sector is the scene manager, which talks to the models, terrain, etc., to ask them questions about how the high-level processing should proceed. For example, with reflections enabled a model may request that the scene manager prepare a (specific) cube-map as a render target along with properties specific to that type of render (such as which model not to include in that render (itself)).

Likewise, GeoClipmap terrain may request a series of render-targets and shader swaps to perform not only the rendering but the other GPU processing it needs to do as well.

Volumetric fog requires multiple passes that only volumetric fog knows how should go.

In other words, all of these middle-level objects are communicating with the high-level scene manager to set up the rendering process.
The high-level scene manager borrows from the middle-level area of the graphics library to sort render queues, and the middle-level objects use their knowledge of the graphics engine to create structures that the graphics engine can use to set states, textures, shaders, etc.

Ultimately, they are all working together.
The answer is not as simple as #1 or #2. It's #3. When you can't decide between 2 choices, it is most often because you did not consider #3.

L. Spiro
