design of render() function



I've recently been teaching myself DirectX and have run into a design issue that's stumped me. When I only want to render a single object (as I have been doing), I can load all my data into a vertex buffer and set things like transformations outside the render function itself, which leaves a clean, simple render function that just presents my vertex buffer with the preset settings. If I'm displaying multiple objects on screen, though, I obviously want to be able to perform operations like rotation and texturing on each object separately. The tutorials I've glanced at overcome this by expanding the render function: setting textures and transformations for one object, rendering it, then switching settings for the next object. That works with just a couple of objects, but it would result in an impractically large render function with lots of objects, and it violates the principle of keeping your data and your display code as separate as possible. Any thoughts on how to solve this problem? Thanks.

The way to do it does require you to reconfigure your device accordingly for each piece of geometry you render. There isn't really a way around this.

The best engines/code tend to minimize the amount of reconfiguring done by scheduling/batching calls in the most optimal order so as to maintain the best performance.

From a code design point of view, yes, separating them is obviously a benefit. A lot of the necessary information will be loaded along with the geometry, and it can be contained in a struct/class and animated/modified as appropriate.

If you can do this, then your render code collapses down to a simple loop:

For( all objects in the 'world' )
    If( this object is visible )
        Configure the device for this object
        Render this object
    End If
Next Object

A trivial implementation would be to have a structure containing the materials, textures, settings, and mesh data for each object. Then, on each loop iteration, call the necessary SetTexture(), SetMaterial() and SetRenderState() (etc.) before issuing a DrawSubset()/Draw*() call.
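A minimal sketch of that per-object structure and loop (the struct fields and function names here are illustrative; the commented-out lines stand in for the actual IDirect3DDevice9 calls):

```cpp
#include <string>
#include <vector>

// Illustrative per-object record; a real version would hold the
// D3DMATERIAL9, texture interface pointers, and mesh data.
struct RenderObject {
    std::string texture;   // which texture this object needs bound
    int         material;  // material id
    bool        visible;
};

// Returns the number of draw calls issued; each visible object
// configures the device for itself before being drawn.
int renderAll(const std::vector<RenderObject>& objects) {
    int drawCalls = 0;
    for (const RenderObject& obj : objects) {
        if (!obj.visible) continue;
        // device->SetTexture(0, ...);       // per-object texture
        // device->SetMaterial(...);         // per-object material
        // mesh->DrawSubset(0);              // issue the draw
        ++drawCalls;
    }
    return drawCalls;
}
```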

hth
Jack

Quote:
Original post by jollyjeffers
The best engines/code tend to minimize the amount of reconfiguring done by scheduling/batching calls in the most optimal order so as to maintain the best performance.

I have found that the easiest way to do this is to make a scenegraph that has a tree-like structure, categorizing each entity based on its material. For clarification, a material consists of shaders, constants, and textures. So, you organize everything in a tree like this:

                Shader1                          Shader2
       |-----------|-----------|        |-----------|-----------|
  TextureSet1            TextureSet2   TextureSet3        TextureSet4
       |-----------|         ...            ...                ...
ConstantSet1   ConstantSet2
      |              |
   Entity1        Entity5
   Entity2        Entity6
   Entity3
   Entity4

Then, you simply traverse the tree and make state changes when you change leaves. It's pretty simple in nature.
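The traversal above can be sketched without any tree machinery: visiting the leaves in order is equivalent to iterating the entities sorted by (shader, texture set, constant set) and issuing a state change only when a level of the key differs from the previous entity's. This is a minimal sketch with integer ids standing in for the real shader/texture/constant objects:

```cpp
#include <algorithm>
#include <tuple>
#include <vector>

// Each level of the tree corresponds to one field of this key.
struct Entity {
    int shader;
    int textureSet;
    int constantSet;
};

// Sorts entities into tree-traversal order, then counts how many
// state changes a renderer would issue: one per field that differs
// from the previously rendered entity.
int countStateChanges(std::vector<Entity> entities) {
    std::sort(entities.begin(), entities.end(),
              [](const Entity& a, const Entity& b) {
                  return std::tie(a.shader, a.textureSet, a.constantSet)
                       < std::tie(b.shader, b.textureSet, b.constantSet);
              });
    int changes = 0;
    int lastShader = -1, lastTex = -1, lastConst = -1;
    for (const Entity& e : entities) {
        if (e.shader      != lastShader) { ++changes; lastShader = e.shader; }
        if (e.textureSet  != lastTex)    { ++changes; lastTex    = e.textureSet; }
        if (e.constantSet != lastConst)  { ++changes; lastConst  = e.constantSet; }
    }
    return changes;
}
```

Rendering two identical-material entities back to back costs no extra state changes, which is the whole point of the grouping.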

Generally what you need to do is integrate your rendering code and storage into object classes that contain the code and data necessary to render themselves. Normally I would start off by creating an inheritable base class that contains virtual functions to be called when updating object positions, and when rendering the object:

class baseObj
{
public:
   baseObj() {};
   virtual ~baseObj() {};
   virtual void OnMove(float frameTime) {};
   virtual void OnRender(IDirect3DDevice9 *pDev) {};
};

Then you can derive from the base object any number of times to create object classes with different rendering and update code:

class monster : public baseObj
{
public:
   monster(ID3DXMesh *pMesh)
   {
      pMesh->AddRef();
      m_pMesh = pMesh;
   };
   ~monster() { if (m_pMesh) m_pMesh->Release(); };
   void OnMove(float frameTime)
   {
      // perform monster AI
      // update monster position
   };
   void OnRender(IDirect3DDevice9 *pDev)
   {
      // set object's world transform
      // set texture and other states...
      m_pMesh->DrawSubset(...
   };
private:
   ID3DXMesh *m_pMesh;
};

...

class tree : public baseObj ...
class rock : public baseObj ...
etc.

Since objects are derived from a virtual base class, you can then have a collection of objects that can be stored as pointers to the base object class, but which will call the OnMove and OnRender functions of the actual class created. For example:

baseObj *pObjs[3];
pObjs[0] = new monster(...
pObjs[1] = new rock(...
pObjs[2] = new rock(...
...
for (int i = 0; i < 3; i++)
   pObjs[i]->OnRender(pDev);

would cause monster::OnRender to be called for the first object, and rock::OnRender for the second and third objects.
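A self-contained sketch of the same dispatch, with the D3D types stubbed out so the mechanism itself can be seen in isolation (class and function names follow the pattern above but are otherwise illustrative):

```cpp
#include <memory>
#include <string>
#include <vector>

class BaseObj {
public:
    virtual ~BaseObj() = default;
    // Returns a tag instead of drawing, purely for illustration.
    virtual std::string OnRender() = 0;
};

class Monster : public BaseObj {
public:
    std::string OnRender() override { return "monster"; }
};

class Rock : public BaseObj {
public:
    std::string OnRender() override { return "rock"; }
};

// Renders every object through the base-class interface; the vtable
// routes each call to the derived class's OnRender.
std::vector<std::string> renderScene(const std::vector<std::unique_ptr<BaseObj>>& objs) {
    std::vector<std::string> rendered;
    for (const auto& obj : objs)
        rendered.push_back(obj->OnRender());
    return rendered;
}
```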


[Edited by - Coder on May 23, 2005 10:19:07 AM]

Quote:
Original post by rdunlop
...

Hi there, welcome to the DirectX forum [smile]

Anyway, isn't that against the entire principle of conserving state changes? If each item sets all of its properties, including textures, render states, shader constants, etc., then performance will undoubtedly suffer.

If you categorize and organize all of your objects in a tree, then you can really minimize the number of state changes. It is more complex than implementing a simple render function, but well worth it if you are making a larger application.

Quote:
 I have found that the easiest way to do this is to make a scenegraph that has a tree-like structure, categorizing each entity based on its material.

Agreed. Bringing design into the picture, this is how we chose to implement this concept in our project:
Each object implements an interface that describes its material:
struct IRenderable
{
  virtual const Shader& get_shader() = 0;
  virtual const Texture& get_texture() = 0;
  virtual const ParamList& get_params() = 0;
};

Next, we create a "render manager" entity whose sole job is to hold a list of each IRenderable and sort them based on these three criteria. When you iterate over this sorted list of renderable objects, examine the change in state required by each object and change the least amount possible:
const Shader* last_shader = 0;
const Texture* last_texture = 0;
const ParamList* last_param_list = 0;
for (/* each renderable obj */) {
  if (last_shader != &obj->get_shader()) {
    obj->get_shader().setup();
    last_shader = &obj->get_shader();
  }
  // again, for Textures
  // again, for Parameters
  obj->render();   // now that everything is set up
}

You don't even have to sort() every frame if you impose a rule that IRenderables must tell the "render manager" whenever they make a change that could alter their ordering.
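That "only re-sort when something changed" rule is just a dirty flag on the manager. A minimal sketch, with integer ids standing in for the Shader/Texture/ParamList objects and the class name being my own invention:

```cpp
#include <algorithm>
#include <tuple>
#include <vector>

// Stand-in for an object's material key.
struct Renderable {
    int shader;
    int texture;
    int params;
};

class RenderManager {
public:
    void add(const Renderable& r) { items_.push_back(r); dirty_ = true; }

    // Called by a renderable whenever its material changes in a way
    // that could alter the sort order.
    void markDirty() { dirty_ = true; }

    // Re-sorts only when needed, then returns the objects in an order
    // that minimizes state changes between consecutive draws.
    const std::vector<Renderable>& sorted() {
        if (dirty_) {
            std::sort(items_.begin(), items_.end(),
                      [](const Renderable& a, const Renderable& b) {
                          return std::tie(a.shader, a.texture, a.params)
                               < std::tie(b.shader, b.texture, b.params);
                      });
            dirty_ = false;
        }
        return items_;
    }

private:
    std::vector<Renderable> items_;
    bool dirty_ = false;
};
```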

[Edited by - andhow on May 24, 2005 7:17:19 AM]

Just a summary of what has been said above really:

1) The objects should know how to draw themselves; this is one of the basic principles of OO design and encapsulation. Instead of having a large function with lots of if/else statements (or even a switch) for each object, you simply loop over the models (which may or may not be in a scene graph, depending on how you have done things) and call the draw method for each one. The easiest way to do this is to inherit from a common base class, as rdunlop mentioned.

2) Once you have this working you can implement a render class. To do this, make an initial "Render" call over each model. Instead of actually drawing the model, this will put the model in a render queue (which belongs to the render class), if it requires drawing, along with the state changes required to render it. The render class then has the job of sorting the queue in the most efficient manner, based on these state changes, and then calls the "draw" method for each object in turn.

I am in the process of attempting to determine a "magic number" based on the textures, shaders, etc. a model uses. This number will then be used by the render class to sort the objects, ensuring that state changes are kept to a minimum (objects that use the same textures and shaders will have the same magic number and will therefore be sorted next to each other in the queue, as required).
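One common way to build such a "magic number" (a sketch of the idea, not necessarily what Matt settled on) is to pack the shader id into the high bits of an integer and the texture id into the low bits, so sorting by the single key sorts primarily by shader and then by texture. The 16-bit field widths here are an arbitrary choice for illustration:

```cpp
#include <cstdint>

// Pack (shaderId, textureId) into one sortable key. Objects with
// identical materials get identical keys and end up adjacent after
// sorting, so their state changes are paid only once.
uint32_t makeSortKey(uint16_t shaderId, uint16_t textureId) {
    return (static_cast<uint32_t>(shaderId) << 16) | textureId;
}
```

If shaders are more expensive to switch than textures, they belong in the high bits, as here; more fields (render states, materials) can be packed the same way into a 64-bit key.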

Hope this helps,

Matt

Many would disagree with point #1. A game object in most cases is logically separated from the renderable object that represents it. You're free to implement things however you want, but they will almost always be separate objects (in a decently designed engine).

Disregard this if you meant that the 'renderable' objects should know how to draw themselves, as opposed to the actual game objects (such as the player class, weapon class, ...) that contain the logic for the objects themselves.

Nice that such a topic already exists, because I have a question regarding this.
Let's say I want to render about 50 trees, randomly spread over the scene and each consisting of about 500 polys (or any other number less than 1000). How would I put that into one batch? I don't think it would be efficient to render every tree by itself, which would result in 50 draw calls. But I only see two options:

a) Render each tree by itself, so I can set a transformation and rotation matrix for every tree (as was stated before).
b) Transform the vertices "manually" and copy them all into one vertex buffer, then render the whole lot together, one batch per texture/shader combination. But that would mean one lock()-copy-unlock() step every frame for every texture/shader combination, which doesn't seem ideal either.

So, how would I batch such things? Would I batch it?

matches81 wrote:
>>So, how would I batch such things? Would I batch it?

I'd use D3D9's Instancing.
