Graphics engine design. A bit confused..

Started by
10 comments, last by skwee 16 years, 4 months ago
Hello folks :) Well, first of all, I'm not an engine/framework lover and I'm not going to make a mega engine or anything like that. I just want to wrap some things (like initialization, some drawing functions, etc.) into one class that will be my so-called "graphics engine". Wrapping things into a class is not a problem. The problem is this: I want to have some objects, actually not "a few" but only 3. The first will cover basic shapes like cubes, spheres, parallelepipeds, etc. The second is going to be an MD2 (Quake 2 model format) object, and the last is a BSP (Quake/Half-Life map format) object. The trouble is that each of them will have a different Render method. A cube will be rendered from plain vertices, an MD2 will be rendered using the OpenGL commands found inside the file itself, and for BSP I don't know yet :) I assume it will be vertex arrays.

And finally, the problem itself. I could give every object its own Render() method, but then the objects become isolated from the engine; I feel the engine MUST render the objects, not the objects themselves. The second option is to add render methods to the engine, things like RenderObject(Object& o), RenderMD2(MD2Model& md2), RenderBSP(BSPModel& bsp). This fits the "engine must render objects" structure better, but in this case the objects are not managed through the engine (whether that is good or bad, I don't know). The last option, which I used in my previous project, is to create a std::list of a base class Object; every model that needs to be rendered is added to this list, and the engine's Render() method simply runs over the list and calls each object's Render() method.

These are the options I came up with. Can someone help me decide which design is best? How do you do it? How do others do it? Thanks a lot :)

I would love to change the world, but they won’t give me the source code.

There is no single design that is better than all the others. Architectures in which objects are responsible for rendering themselves are pretty common.

Another alternative (which I have seen in larger-scale frameworks) is to decouple the rendering part from the object into its own generic, self-contained object, which specifies how the object should be rendered and which resources it uses.

RenderAttributes
{
PrimitiveType
ShaderInstance
VertexBuffer
IndexBuffer
}

Every object has RenderAttributes that the renderer can grab and do whatever it needs to do with them. This also helps with sorting rendering by transparency, shader type, and so on.
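A minimal sketch of this idea (the names and int handles are invented for illustration; a real engine would wrap actual GL objects): the renderer collects each object's RenderAttributes, then sorts the queue by shader before drawing so state changes can be batched.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical handles; in a real engine these would wrap GL objects.
struct RenderAttributes {
    int primitiveType;   // e.g. GL_TRIANGLES
    int shaderInstance;  // shader program id
    int vertexBuffer;    // VBO handle
    int indexBuffer;     // IBO handle
};

class Renderer {
public:
    void submit(const RenderAttributes& ra) { queue_.push_back(ra); }

    // Sort the queue by shader so draws with the same shader run together.
    void sortByShader() {
        std::sort(queue_.begin(), queue_.end(),
                  [](const RenderAttributes& a, const RenderAttributes& b) {
                      return a.shaderInstance < b.shaderInstance;
                  });
    }

    // Walk the sorted queue; real bind/draw calls would go here.
    void flush() { queue_.clear(); }

    const std::vector<RenderAttributes>& queue() const { return queue_; }

private:
    std::vector<RenderAttributes> queue_;
};
```

The same comparator could be extended to sort transparent objects after opaque ones, which is the sorting benefit mentioned above.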
Hi,

I don't have a load of time at the moment, but I can offer an idea for a nice way to render. It's called the visitor design pattern, and it lets you extend a class without modifying it each time you add a new renderable object. Here's how it works.

We have an object that is dedicated to rendering these objects from their data. This object is called the visitor. When we get to rendering, we 'send' this visitor to everything that can be rendered. Each renderable then calls a method on the visitor with itself as the parameter, sending itself back to the visitor. Seems a bit pointless, right? Well, the magic is that in languages with method overloading this causes a different code path to be selected for each class.

Let's have a look at an example:

class MD2;
class BSPModel;

class RenderingVisitor {
public:
    void visit(MD2* md2);      // render code specific to MD2s
    void visit(BSPModel* bsp); // render code specific to BSPModels
};

class MD2 {
public:
    void accept(RenderingVisitor& rv) {
        rv.visit(this);
    }
};

class BSPModel {
public:
    void accept(RenderingVisitor& rv) {
        rv.visit(this);
    }
};

void RenderingVisitor::visit(MD2* md2) {
    // Render code specific to rendering MD2s here.
}

void RenderingVisitor::visit(BSPModel* bsp) {
    // Render code specific to rendering BSPModels here.
}


Looks pretty cool so far! Now we have a lot of common functionality grouped together, which hopefully starts making our code more organised. We can go even further by making both BSPModel and MD2 inherit from a base class, maybe Renderable. Then we put all our renderable objects in a list, and it's very easy to render them all. Let's see that:

class Engine {
private:
    RenderingVisitor rv;
    std::vector<Renderable*> renderableObjects;

public:
    void render() {
        std::vector<Renderable*>::iterator it;
        for (it = renderableObjects.begin(); it != renderableObjects.end(); ++it) {
            (*it)->accept(rv);
        }
    }
};


You can go even further in some languages that allow for nice meta-programming (you might be able to do it in C++ with templates, but my knowledge of C++ is a bit rusty atm), like in Python you could add a class decorator to save you having to write that accept method.

I hope this sheds some light on your problem :)

[Edited by - Cycles on December 4, 2007 6:31:50 PM]
Oliver Charles (aka aCiD2) [ Cycles Blog ]
ldeej
Well, I don't really get your idea. You mean every object has render attributes that the renderer grabs and operates with?
If so, that sounds pretty cool, because I can change the render method of an object.

Cycles
Nice idea! Looks nice and pretty configurable.

Oh, there is another problem. If the renderable object is registered in the engine's so-called "render list", how should I operate on this object? I mean, the object's class can have functions to control its animation (like the design I did for MD2). How should I control the animation? Using some method in the engine, like getObject(id).setAnimation(someAnim)? Or by operating on an outside pointer?
One sec, I'll write code.

// -------- main --------
// Engine was inited already
MD2* model = new MD2("modelfile.md2", other parameters);
int id = engine.RegisterObject(model);

// ------ Option 1 ------
engine.getObject(id)->setAnimation(someanim);
engine.render(); // here we run through the whole render list

// ------ Option 2 ------
model->setAnim(someanim);
engine.render(); // here we run through the whole render list


Hmm, actually it's quite obvious that it's easier and better to use option 2.
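Option 2 might be fleshed out like this (a sketch with invented names, not the thread's actual classes): the engine's render list stores non-owning pointers, so the caller keeps its own pointer to drive animation while the engine only renders.

```cpp
#include <string>
#include <vector>

// Minimal stand-ins for the classes discussed above; names are invented.
class Renderable {
public:
    virtual ~Renderable() {}
    virtual void render() = 0;
};

class MD2 : public Renderable {
public:
    void setAnim(const std::string& name) { anim_ = name; }
    const std::string& anim() const { return anim_; }
    void render() override { /* draw the current animation frame */ }
private:
    std::string anim_;
};

class Engine {
public:
    // Non-owning pointer: the caller retains the pointer and stays
    // responsible for the object's lifetime and animation state.
    void RegisterObject(Renderable* r) { renderList_.push_back(r); }
    void render() {
        for (Renderable* r : renderList_) r->render();
    }
private:
    std::vector<Renderable*> renderList_;
};
```

The engine never touches animation; it only walks the list, which keeps the two responsibilities cleanly separated.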

I am open to other solutions, or to support for the solutions already given.
It would also be nice to know how the "pros" do it, I mean how big engines are designed; maybe someone knows and is ready to share.

Thanks again.

I would love to change the world, but they won’t give me the source code.

Quote:
I want to have some objects, actually not "a few" but only 3. The first will cover basic shapes like cubes, spheres, parallelepipeds, etc. The second is going to be an MD2 (Quake 2 model format) object, and the last is a BSP (Quake/Half-Life map format) object. The trouble is that each of them will have a different Render method. A cube will be rendered from plain vertices, an MD2 will be rendered using the OpenGL commands found inside the file itself, and for BSP I don't know yet :)

This is actually a bad idea. You should instead convert those objects (or generate them) into a single format processed by your "engine." The reason is that your method does not scale well. As you need to add new effects or slight modifications to the render pipeline (which will occur more and more frequently as your framework is used), you will need to replicate that functionality in a domain-specific fashion in multiple places. This is redundant, and makes for poor maintenance.

Selecting a single render path increases maintainability, as changes to that path can affect all types of objects, allowing faster iteration. Furthermore, the code required to add a new type of object is simply the code to transform or generate an internal representation of that object. Domain-specific concepts are layered on top (for example, the BSP tree's culling routines are applied to select a set of internal primitive geometry, which is then submitted like any other geometry).

It is slightly more work up front, but only slightly. The benefit is worth it.
jpetrie
Well, I agree with you, but those file formats were created to serve their own needs: MD2 was made to hold an animated character format, and BSP was made to hold a level data format.
I can't convert one to the other (well, actually I can).
Of course it's quite possible that in the future, when I work on my own game, I'll develop my own file format; for now I'm learning the API (OpenGL).
Thanks again :)

I would love to change the world, but they won’t give me the source code.

I think he means internally, create your own engine format. Instead of cubes having the vertices directly, MD2's having the OGL commands, BSPs having etc etc, have one unified method of rendering. Instead of separate render classes, you now have separate loader classes, all of which load into a unified render class. This still won't scale well for larger projects (in a full game engine, your world is typically going to require custom code for stuff like culling, collision, etc, so it's not going to be as simple as rendering the whole mesh like you can with models), but it works well for starters.
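The "separate loader classes, one unified render class" idea might be sketched like this (names and layout are invented; a real engine would pick its own vertex attributes): every loader, whether for MD2, BSP, or generated shapes, produces the same Mesh, and the renderer only ever consumes Mesh.

```cpp
#include <cstdint>
#include <vector>

// The one internal format every loader produces and the renderer consumes.
struct Vertex { float x, y, z, u, v; };

struct Mesh {
    std::vector<Vertex>        vertices;
    std::vector<std::uint32_t> indices;  // three indices per triangle
};

// Generated geometry goes through the same path as loaded file formats:
// here, a unit quad built as two triangles.
Mesh makeQuad() {
    Mesh m;
    m.vertices = { {0,0,0, 0,0}, {1,0,0, 1,0}, {1,1,0, 1,1}, {0,1,0, 0,1} };
    m.indices  = { 0, 1, 2,  0, 2, 3 };
    return m;
}

// One submit path for every object type: an MD2 loader, a BSP loader,
// and a cube generator all just need to emit a Mesh.
void submitMesh(const Mesh& m) {
    // upload m.vertices / m.indices to vertex and index buffers, then draw
}
```

Adding a new file format then means writing one more loader, not one more render path.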
All those file formats store things as triangles, one way or another. For example, the .md2 file has a list of vertices and a list of triangles; each triangle has 3 vertex references. The same is true of the .bsp file format, which has a series of brushes, each containing a list of triangles, and a series of brush references stating where each brush is to be drawn.

The cube is a separate problem. You won't find yourself using cubes once you program your game to load meshes from files, so put that code somewhere separate.

Everything else uses triangles. Start from the bottom up, writing low level functions first.

Firstly, create a triangle class and program the renderer to receive a triangle and render it.

Then create a camera class, consisting of a position and a view vector and some methods to move it around. Program your renderer to receive a camera and set the view frustum accordingly.

Give your renderer class the ability to store different textures inside a std::map, and write some methods to switch between the textures loaded. Write a method that returns the OpenGL texture ID number when you pass in a string.

Then modify your triangle class so that it has a texture ID. The renderer should switch between the textures it has loaded automatically. Your triangle class should use the get_tex_id method to choose which texture ID it has.

Then, give the renderer the ability to receive a std::vector of triangles and sort-copy them into some kind of optimized storage structure, e.g. an octree. It would do this for all the triangles it receives.

What you then do is pass in a std::vector of triangles and render them in the order they are received.

Then write methods inside the .md2 model class to pass the triangles to the renderer as an std::vector, and then remove all GL code from the MD2 class. Do the same for your other mesh formats.

So, your draw code would look like this:

renderer.apply_camera( character_1.get_camera() );
character_1.send_triangles( renderer );
bsp_level1.send_triangles ( renderer );

renderer.optimize_triangles();
renderer.render_scene();
renderer.flip_buffers();
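The renderer those steps describe might be sketched like this (a toy version with invented names; the actual GL binding and drawing calls are elided as comments):

```cpp
#include <cstddef>
#include <map>
#include <string>
#include <vector>

struct Vec3 { float x, y, z; };

// A triangle the renderer can receive directly; it carries the id of
// the texture it should be drawn with.
struct Triangle {
    Vec3 a, b, c;
    int tex_id;
};

class Renderer {
public:
    // Textures are stored by name; get_tex_id turns a string into an id.
    void load_texture(const std::string& name, int gl_id) {
        textures_[name] = gl_id;
    }
    int get_tex_id(const std::string& name) const {
        std::map<std::string, int>::const_iterator it = textures_.find(name);
        return it == textures_.end() ? -1 : it->second;
    }

    // Meshes hand their triangles over as a std::vector.
    void receive(const std::vector<Triangle>& tris) {
        scene_.insert(scene_.end(), tris.begin(), tris.end());
    }

    std::size_t pending() const { return scene_.size(); }

    void render_scene() {
        // bind each triangle's texture (switching only when it changes),
        // emit its three vertices, then clear for the next frame
        scene_.clear();
    }

private:
    std::map<std::string, int> textures_;
    std::vector<Triangle> scene_;
};
```

The MD2 and BSP classes then only need a send_triangles method that builds such a vector, as in the draw code above.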

Representation as "a list of triangles" is relatively useless, as is a triangle class, beyond some initial sanity checking that your render path works and pixels are appearing. Vertex and index buffers (or the appropriate GL concepts) are the way to go.

Additionally, "optimizing" the triangle list -- while a non-issue if you don't bother storing your data in a useless internal format -- should only be done once. Ideally, at asset prep time (during build).

The point of an internal representation of the data is to have a way to quickly and efficiently submit the data to the rendering API in the format the rendering API wants. This internal format typically consists of a vertex and index buffer, a shader, textures, and appropriate state. That's all.
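That internal format (vertex and index buffer, shader, textures, state) might look something like this as a struct; the field names are invented for illustration:

```cpp
#include <cstdint>
#include <vector>

// One submittable unit in the internal format: everything the rendering
// API needs to draw, and nothing domain-specific.
struct RenderItem {
    std::uint32_t vertexBuffer;          // buffer object handles
    std::uint32_t indexBuffer;
    std::uint32_t indexCount;            // how many indices to draw
    std::uint32_t shaderProgram;
    std::vector<std::uint32_t> textures; // texture handles, in bind order
    bool depthTest;                      // plus whatever other state you track
    bool blending;
};

// Submission is then one loop, identical for MD2, BSP, or generated cubes:
// bind shader, bind textures, apply state, bind buffers, draw indexCount.
```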
Got it, thanks :)
If someone has something to add, I'll be glad to hear it :)

I would love to change the world, but they won’t give me the source code.

This topic is closed to new replies.
