Hey there,
I'm currently designing some framework code that I intend to reuse a lot in future game projects. You could call it a graphics engine, but I won't, since I'm afraid I'd get hit by the big old broadsword that is MAKE GAMES, NOT ENGINES. I have a very basic question that somehow feels rather important. I notice I've actually faced the same question before, back when I started writing games and frameworks, but I never gave it much thought.
First things first: I'm working in C++ (which might be relevant), using SDL and Direct3D 9 (which shouldn't be).
At this point all I want to think about is how to render sprites. Actually, let's go one step further back into easy territory and make it colored rectangles; that's all I'm interested in for the moment. My basic setup is a class called Renderer and a class called ColoredRectangle; the renderer contains an std::vector of the things it can actually render (ColoredRectangles in this example). I'm immediately torn between two approaches when it comes to rendering something: do I add a ColoredRectangle::Render function that is called by the Renderer class? Or do I add a Renderer::RenderColoredRectangle function and use ColoredRectangle only to store data (the rectangle's position, size, color, etc.) and to do whatever else has to be done to the object, except rendering?
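To make the two options concrete, here is a minimal sketch of the second approach (all names are hypothetical, and the actual SDL/Direct3D draw call is replaced by a counter so the sketch stands alone):

```cpp
#include <vector>

struct Color { unsigned char r, g, b, a; };

// Approach 1 (for contrast): the object renders itself.
//   class ColoredRectangle { public: void Render(); /* needs SDL/D3D headers here */ };

// Approach 2: the object is plain data; only the renderer draws.
struct ColoredRectangle {
    float x, y, w, h;
    Color color;
};

class Renderer {
public:
    // the only place that would ever touch SDL/Direct3D
    void RenderColoredRectangle(const ColoredRectangle& rect) {
        (void)rect;
        ++drawCalls;  // stand-in for the actual draw call
    }

    std::vector<ColoredRectangle> rectangles;  // the things this renderer can render
    int drawCalls = 0;
};
```

With approach 2, `ColoredRectangle` compiles without any graphics headers at all, which is the decoupling discussed below.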
I notice that I've always gone for the former alternative, I suspect because it seems easier and more straightforward while coding. But thinking about it now, I actually strongly favor the second route. It means I only need to include the SDL and Direct3D headers (and link their libraries) in the Renderer source and header files, which should be much neater. But it also leads to a pretty damn big Renderer class in a more realistic scenario where there are sprites, 3D models, 3D environments, effects and so on to render.
I'm also wondering how to avoid making all the ColoredRectangle attributes (position, size, color and so on) public members, given that the renderer needs to access them. Is the protected keyword the answer here? Also, I often find myself deriving ColoredRectangle from other classes (say a Rectangle class, which maybe derives from a generic Renderable class), and even protected members give me trouble across multiple levels of derivation: if the Renderable class has a position member, making it protected instead of private still doesn't let me access it from the Renderer class. I should probably read up on the protected keyword.
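For what it's worth, protected grants access to derived classes only, never to an unrelated class like Renderer, so it isn't the answer here; public getters (or friendship) are. A small sketch with hypothetical names:

```cpp
class Renderable {
protected:
    float x = 0.0f, y = 0.0f;  // visible to Renderable and anything derived from it
public:
    float X() const { return x; }  // public read access for unrelated classes
    float Y() const { return y; }
    void MoveTo(float nx, float ny) { x = nx; y = ny; }
};

class Rectangle : public Renderable {
protected:
    float w = 0.0f, h = 0.0f;
public:
    void Resize(float nw, float nh) { w = nw; h = nh; }  // derived code may touch protected members
    float Width() const { return w; }
    float Height() const { return h; }
};

class ColoredRectangle : public Rectangle { /* color, etc. */ };

// Renderer is outside the hierarchy, so 'protected' does not help it;
// it must go through the public interface (or be declared a friend).
class Renderer {
public:
    float Area(const Rectangle& r) const { return r.Width() * r.Height(); }
};
```

The same rule explains the multi-level derivation problem: no matter how many derivation steps there are, Renderer never joins the hierarchy, so protected members stay out of reach.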
Anyway, my question is really whether there are strong arguments for or against each of these two ways of dealing with things, and whether my thinking is generally pointing in the right direction.
Any help is appreciated and if anything is unclear just mention it and I'll do my best to clarify.
Basic yet fundamental code design question
I personally favor the first alternative, because then there is no need to rewrite renderer code for each new renderable class you add.
In my project I have the code set up as follows:
Graph - the container of renderables. There can be many graphs per frame; specifically, control objects that do not use the depth buffer are kept in a separate graph from game objects.
Renderable - base class for a few generic renderable object types. Currently I only use a static mesh, but a few others, like a dynamic mesh (GPU-skinned), a particle collection and an octree, can be added.
RenderContext - the actual renderer. It works by invoking the virtual ::display method of all renderables in the graph. In their ::display implementations, renderables perform frustum culling and submit fragments of geometry (Transform + VBuffer + IBuffer + Material) to the RenderContext, which schedules them for rendering (into either the opaque or the transparent pass, depending on the material). The actual rendering is later performed by the ::render method of the material class implementations (loading the correct GPU programs and textures and setting up uniforms).
All game objects live in a different container (the world tree) and hold links to their renderable objects. It is the task of the game objects to keep the renderable state up to date.
Simple geometries (like your colored rectangles), CPU-skinned meshes and environment meshes can initially all be represented by a single renderable class (StaticMesh). Once you need more serious optimization, you can implement specialized renderable versions, like a GPU-skinned mesh.
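A bare-bones sketch of this Renderable/RenderContext split might look as follows (hypothetical names; culling and the actual passes are elided as comments, since the point is only the submission flow):

```cpp
#include <vector>

// One fragment of geometry submitted for rendering
// (stand-in for Transform + VBuffer + IBuffer + Material).
struct Fragment { int materialId; };

class RenderContext;

class Renderable {
public:
    virtual ~Renderable() = default;
    // Culls itself and submits geometry fragments to the context.
    virtual void display(RenderContext& ctx) = 0;
};

class RenderContext {
public:
    void submit(const Fragment& f) { queue.push_back(f); }
    // Later sorted into opaque/transparent passes and drawn
    // by the material's ::render implementation.
    std::vector<Fragment> queue;
};

class StaticMesh : public Renderable {
public:
    int materialId = 0;
    void display(RenderContext& ctx) override {
        // frustum culling would go here; assume visible
        ctx.submit({materialId});
    }
};
```

Adding a new renderable type then means adding one ::display override; the RenderContext itself never changes.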
Lauris covers it nicely. I recommend you further look at Ogre3D and how it implements its scene graph (which is what Lauris is basically describing) - a very elegant solution to your problem.
Personally, I would also advise against a per-object Render call, just because this will eventually translate into a single D3D draw call per object, which is Very Bad™. Usually you want to batch some geometry and instance the rest. Furthermore, it is most efficient to sort your objects so that similar materials are grouped together, in order to minimize shader, technique and shader-constant switches (render state changes).
Thus, in my experience, it is easiest to treat your objects as pure "data" and have your graphics device sort/cull/LOD/etc. them appropriately, then batch them together or build an instance data buffer, and finally draw.
Also, if you are worried about exposing private/protected vars and don't want a billion Get()/Set() functions, here's what I do in my own code. Each CObject has a list of CMeshes it uses. A CMesh holds a pointer to a CMeshRenderData (inheriting from a base CRenderData), which contains the stuff needed by your graphics device (like pre-created vertex/index buffers). When I load up a CMesh, I call something like GraphicsDevice()->InitRenderData(myMesh), which creates the CMeshRenderData and sets it on the mesh. When I render my scene, I look at all objects, pull out their CMesh, then pull out the CMeshRenderData, and render the mesh using only that (plus the transform matrix from the original object).
The nice benefit of this solution is that it neatly separates your graphics code from your gameplay. Your graphics device, when InitRenderData() is called, takes your game data and uses it to create whatever it needs for rendering, in a format most suitable for fast sorting, batching and rendering (vertex/index buffers for Meshes, animation frames baked into a texture for a MeshAnimation, Effects for shaders, D3DXFonts for fonts, etc.). You can then tweak and rewrite your whole rendering pipeline, including redefining your mesh vertex declarations or sort orders, without even touching your "engine" code! Heck, you can even swap out your DX renderer for OpenGL, simply by redefining all your CRenderData classes to use OpenGL structs instead of DX ones and rewriting your CGraphicsDevice::Render() function.
The one con of this is data duplication. Your engine's CMesh and your renderer's CMeshRenderData both contain the same information, just in different formats (one usable by the engine, gameplay, physics simulation etc., the other usable by the renderer). I alleviate this by "clearing" the original mesh data after creating its RenderData (since the renderer then has everything it needs). But if you want to do something like per-poly hit detection (which you probably don't for 90% of your objects), then you need to keep both in memory. Those are usually special cases which you should probably handle differently anyway (such as building a BSP for all the polies and using it both for engine collision tests and for rendering).
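The CMesh / CMeshRenderData idea described above might be sketched like this (hypothetical members; the device-side buffers are reduced to a vertex count so the sketch is self-contained, with no real graphics API involved):

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Device-side data, in whatever format the graphics API prefers.
struct CRenderData { virtual ~CRenderData() = default; };

struct CMeshRenderData : CRenderData {
    // pre-created vertex/index buffers would live here
    std::size_t vertexCount = 0;
};

struct CMesh {
    std::vector<float> vertices;                  // engine/gameplay-side data (x,y,z triples)
    std::unique_ptr<CMeshRenderData> renderData;  // device-side mirror, set by the device
};

class CGraphicsDevice {
public:
    void InitRenderData(CMesh& mesh) {
        auto rd = std::make_unique<CMeshRenderData>();
        rd->vertexCount = mesh.vertices.size() / 3;  // would build real buffers here
        mesh.renderData = std::move(rd);
        mesh.vertices.clear();  // "clearing" the original data once the device owns a copy
    }
};
```

Rendering then only ever touches CMeshRenderData plus a transform, which is what makes the backend swappable.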
ColoredRectangle should define a colored rectangle (position, orientation, color, etc). It should not draw itself. Does it maintain a render context and everything else needed to do so? No. That's the job of a different class (eg Renderer).
You don't gain anything by having your ColoredRectangle contain a chunk of rendering code that's completely dependent on external state.
Makes sense, that's exactly what I was thinking.
@Koobazaur: You agree with Lauris and then talk about the opposite (or did I misunderstand you there)? Or maybe I'm misunderstanding Lauris. In any case, your points are valid, thanks!
I think I'll go with the Renderer::RenderColoredRectangle style and get/sets for the moment, feels the easiest to do and I'm not overly worried about the amount of code these take up. Don't know why I didn't think of these when dealing with private members of classes, doh!
Thanks everybody, this solves my problem. Case closed unless there are additional arguments for adding a ::Render function to every Renderable or there is additional discussion I guess!
Sorry to hijack the thread, but Lauris would you mind posting a sample code? I like the idea, but it would be helpful to see some code.
regards, D.Chhetri
I think I'll go with the Renderer::RenderColoredRectangle style
If this function name does what it says, I'd like to propose an alternative. You want to get away from telling your renderer how to render things, and you don't want to do a render call per object, as Koobazaur said. Maybe this is what you intended with your function, but based on how it is named I would expect this to execute code to render a single rectangle, which can be inefficient.
You should make your renderer aware of everything you would like to be rendered, and then issue a single command for it to render the frame. This allows it to sort objects by material/shader, to skip objects it knows are obscured, to sort objects front to back, etc. Give your renderer as much flexibility as you can, so that it can perform optimizations that are greater than the scope of a single object. If you tell your renderer "you must render this thing now", then you're losing a lot of potential optimization. In that respect Renderer::RenderObject and Object::Render are equally bad.
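The submit-then-flush idea can be sketched as follows (hypothetical names; actual draw calls are elided, and the frame returns the number of material switches just to make the benefit of sorting visible):

```cpp
#include <algorithm>
#include <vector>

struct DrawItem { int materialId; /* geometry handle, transform, ... */ };

class Renderer {
public:
    // Objects only announce themselves...
    void Submit(const DrawItem& item) { items.push_back(item); }

    // ...and the renderer decides ordering and batching once per frame.
    int RenderFrame() {
        std::sort(items.begin(), items.end(),
                  [](const DrawItem& a, const DrawItem& b) {
                      return a.materialId < b.materialId;
                  });
        int stateChanges = 0, last = -1;
        for (const DrawItem& it : items) {
            if (it.materialId != last) { ++stateChanges; last = it.materialId; }
            // issue (batched) draw calls here
        }
        items.clear();
        return stateChanges;  // material switches, minimized by the sort
    }

private:
    std::vector<DrawItem> items;
};
```

Without the sort, four submissions alternating between two materials would cost four state changes; sorted, they cost two.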
Maybe you already understood this, but your function name is misleading in that case.
Make a type called ColouredRectangleRenderer. Have it take a dependency on a type that encapsulates the services/behaviour common to all renderers (i.e., an encapsulation of the state of your monolithic Renderer type). You could assemble a list of ColouredRectangles and pass it to this type, or whatever you want the API to be.
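A minimal sketch of that shape, assuming hypothetical names and a draw counter in place of real device work:

```cpp
#include <vector>

// Common services every specialized renderer needs (device, shared state, etc.).
class RenderServices {
public:
    void IssueDraw() { ++draws; }
    int draws = 0;
};

struct ColouredRectangle { float x, y, w, h; };

class ColouredRectangleRenderer {
public:
    explicit ColouredRectangleRenderer(RenderServices& s) : services(s) {}

    // One batched draw for the whole list, not one draw per rectangle.
    void Render(const std::vector<ColouredRectangle>& rects) {
        if (!rects.empty()) services.IssueDraw();
    }

private:
    RenderServices& services;
};
```

This keeps the monolithic Renderer from growing a method per renderable type: each type gets its own small renderer, all sharing the same services object.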
A less OO approach is definitely worth a thought or two. Sprites especially are a kind of renderable with a bad overhead-to-geometry ratio when it comes to rendering. Moreover, as karwosts has already mentioned, they introduce inter-object constraints (like the rendering order required for transparency handling). IMHO this leads naturally away from single all-round structures (like scene graphs) and toward several more specialized structures.
Think of an object that provides sprite services (some people would call it a sub-system, others a manager). An entity / game object / GO / ... uses a sprite as its visual representation. The entity registers that sprite representation with the sprite services as soon as it starts to participate in the scene (e.g. when it gets instantiated, or switched on in a scene graph if you like). That way the data of all (active) sprites is concentrated in the sprite services, which can then perform inter-sprite operations efficiently, e.g. depth sorting and collision detection. Even operations that are not (necessarily) inter-sprite, like visibility culling, usually benefit performance-wise from this concentrated processing. Batching sprites is the next advantage: the sprite services are the natural place to manage the mesh(es) used for batching. This can be seen as a first step of rendering, namely building the vertex representation from the sprite data, resulting in a couple of rendering jobs that get passed on to the graphics renderer.
Nothing different is done with particles: each particle by itself is a lightweight. They are collected, managed, updated, ... inside the particle system. However, there is no need to restrict these thoughts to lightweight and/or 2D objects (e.g. spatial services may provide the mechanisms for static as well as dynamic placement of entities in the world, and may further provide collision detection).
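The register-then-process idea above can be sketched like this (hypothetical names; batching is reduced to a depth sort, the first inter-sprite operation such a service would perform):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Sprite { float x, y, depth; };

class SpriteServices {
public:
    // Entities register their visual representation when entering the scene;
    // the returned index would serve as a handle for later updates.
    std::size_t Register(const Sprite& s) {
        sprites.push_back(s);
        return sprites.size() - 1;
    }

    // Inter-sprite work (depth sorting, culling, batching) happens in one place,
    // over all active sprites at once.
    void BuildBatch() {
        std::sort(sprites.begin(), sprites.end(),
                  [](const Sprite& a, const Sprite& b) { return a.depth < b.depth; });
        // next step: write sorted sprites into the shared batch mesh
    }

    std::vector<Sprite> sprites;
};
```

Entities never touch each other's sprites; the service owns all of them, which is what makes the inter-sprite operations cheap.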
Excuse me, but what is "the big old broadsword that is MAKE GAMES NOT ENGINES"? I'm new here, so could someone direct me to a post explaining that? Why not make engines?