# Scene Graph Rendering - Centralized, Decentralized?

## Recommended Posts

Okay, so we have lots of little nodelets (lol, I like that word) which ultimately call something's draw function. Is this draw function specific to the object we're working with? Do we send this object to a centralized renderer? How can you centralize a renderer without type switching? How can you decentralize a renderer and still have global effects like pixel shaders and global illumination? Pros and cons of both? What would you suggest? What should I do?

##### Share on other sites
No replies?

Personally, I treat nodes as data. In their draw() they take their specific data [model, texture, whatever] and push it to a central renderer which does the ordering/batching/rendering. Granted, my setup is pretty simple, and doesn't need stuff that requires that 'big picture' look at things.

I'm sure others have better advice, but maybe this can help get the ideas going until they offer it...

##### Share on other sites
I was thinking about having all the nodes point to the data structures. All the nodes would contain a pointer to the object to draw and a transformation (and maybe more). That way I don't have to copy the data for each instance.

I'm still not sure what would be best... should the data have a pointer to the renderer and draw itself, or should I send the object to the renderer? The most logical way seems to be the object drawing itself using the renderer... but that adds a dependency to the architecture.

JFF

##### Share on other sites
Quote:
 Is this draw function specific to the object we're working with?

To the class, not the object instance itself. Presumably the draw method is a virtual function of the base node class, and subclass nodes override it as needed, though this is certainly not the only way to do it.

Quote:
 Do we send this object to a centralized renderer?

Yes or no, whichever you prefer. I tend to prefer yes, for the purposes of abstraction.

Quote:
 How can you centralize a renderer without type switching?

By properly employing the polymorphic techniques provided by the language, such as virtual methods. Also, there are ultimately very few kinds of things a renderer actually needs to render (usually 2D geometry, 3D geometry, particles or other effects, et cetera). The renderer doesn't need to know that rendering a "missile" is different from rendering a "player" (indeed, it probably isn't different at all).

Quote:
 How can you decentralize a renderer and still have global effects like pixel shaders and global illumination?

Rather easily. Shaders and illumination have nothing to do, really, with the "centralization" or lack thereof of a renderer. You don't render shaders or lighting, both are used to alter the appearance of what you are rendering. They are render state, much like textures, that belong with the geometry or scene graph node (whatever) and are set and cleared by the renderer as needed (prior to rendering a given chunk of geometry, usually).

Quote:
 What would you suggest? What should I do?

As I mentioned in another thread of yours, experimentation is one of the best ways to learn. Pick a particular method and try it. See how it feels, see how it runs. Do not be afraid to refactor your code. I guarantee you will not write it "right" the first time anyway.

Google, or books, can help answer more specific questions about which particular techniques are "generally fast" and what their various advantages and disadvantages are. The forums or the #gamedev IRC channel are also a good resource, but you'll need to be a bit more specific than you have been in this particular post. As it stands your question is far too general for me to answer, even if I did feel like typing that much.

##### Share on other sites
Well, right now I have a system where each object I draw simply draws itself. We do some fake polymorphism by giving every renderable object a Draw() function, and then, using the pointer we used to create the geometry node, whatever it was, we call Draw()... Now I know this isn't a very elegant approach, and I like the idea of a centralized renderer where we send these geometry objects (or pieces of data).

The polymorphism involved in creating a centralized singleton renderer without type switching is what is really baffling me.

Do we need some kind of other data tree with virtualized render functions to keep track of the rendering methods for various objects?

Say, class renderableObj

Under that we might have class MS3DModel, which publicly inherits from renderableObj...

And so on, until we get the level of polymorphism our engine needs.

For instance, we could have a Base scene graph node Geometry..

Geometry would have an update function that either (A) calls a virtual, initially empty rendering function (decentralized), or (B) calls a virtual function that sends data to the renderer (centralized).

Under geometry, we could have MS3DModel, which is a public Geometry..

This guy needs to either override the virtual function of Geometry and draw itself in its own fashion (decentralized), or override the function that sends data to the renderer (what data? object info, what else? how does the renderer draw this thing properly?) (centralized).

Can you understand what I'm having trouble dealing with?

How does an abstract hierarchy of renderable objects translate into what rendering function gets called in the renderer?

[Edited by - Shamino on March 16, 2006 12:17:07 AM]

##### Share on other sites
A problem which sooner or later arises is keeping state switches and batches to a minimum. For example, say you have 100 objects, each using one of 3 different textures. If each object takes care of rendering itself, they'll probably switch textures far too often.

In this case, you'd want to render all objects using texture1 first, then those with texture2, and so on. By using a centralized renderer or render queue you can sort the objects by their properties and use as few batches as possible. Basically each world object would have a virtual method like SendToQueue() which says: "Render me at this position using the tree texture, that dog mesh, and some cool parallax shader! I don't care how you do it, just do it and I'm happy!" and the centralized renderer will take care of sorting and rendering.

Another possible way would be having a tree structure in parallel to the scene tree, where each world object is placed according to its visual appearance. But I'm not familiar with that approach.

Take my ramblings with a grain of salt. [smile] Oh, and I just assumed that you're using D3D.

##### Share on other sites
Quote:
 Original post by Shamino: How does an abstract hierarchy of renderable objects translate into what rendering function gets called in the renderer?

The central renderer may provide a few routines for rendering general geometry given in one form or another. The Renderable::render() method is virtual, and invokes whichever Renderer::renderXY(...) method is suitable for its effective type (say, for how its geometry is given). (Perhaps there is more than one Renderer method to invoke, but that doesn't matter for the understanding.)

So you have both a central Renderer and decentralized type dispatching. That isn't a contradiction.

##### Share on other sites
I think you're getting caught up in terminology and it's confusing you.

Your MS3D model class shouldn't override its base (Geometry) class's Draw method, because drawing an MS3D model is not fundamentally different from drawing any other kind of model or geometric object -- the Geometry base class should contain some vertex or index buffers, and those should be submitted to the renderer along with any state information carried around by the geometry.

I don't understand what you mean by "fake polymorphism" in your scene graph.
Can you provide code?

Quote:
 Oh, and i just assumed that you're using d3d.

IIRC he's using OpenGL but nothing in your post is really D3D-specific. *shrug*

Quote:
 centralized singleton renderer without type switching is what is really baffling me.

Naughty! The renderer does not need to be, and should not be, a singleton.
Also, beware: You don't want to perform any type switching manually at all, ever. There are not many cases where you need to; certainly not in a renderer.
What I mean by "manually" is using lots of dynamic_casts and/or writing code like:
```cpp
switch (myObject->getType())
{
  case fooType: /* ... */ break;
  case barType: /* ... */ break;
}
```

I'm out of time, I'll continue my discussion after work.

##### Share on other sites
Okay, so we don't want to do type switching, no cases, no state machine.

But we want something centralized, I'm having a hard time figuring out how this is possible..

I can understand the decentralized method perfectly.

But it seems to me like centralizing it requires some type of polymorphism I am not yet familiar with, or some type of case/switch/if/else pile of mess in the renderer...

The main reason I think of separating my MS3D geometry and making it a subclass of geometry is that MS3D model files are indexed, and they don't repeat vertices or anything... I'm not sure if this lends itself well to vertex buffer objects or vertex arrays...

I think I'm going to go with a decentralized method anyways, it seems overall less confusing.

What should the base renderable object node look like? I heard something about a vertex array or something... what, what, huh? I really never got into OpenGL this seriously before, lol. I know how to glBegin and glEnd, but vertex arrays and VBOs are foreign to me.

What should I derive from the base class? Under the base renderable object node, what should be under that? Examples? Anything?

By fake polymorphism, I mean this...

```cpp
void Update()
{
    if (geom)
    {
        geom->Draw();
    }
    else
    {
        // eep! all shared_ptrs deleted!
        //
        // Reload geometry, or take other measures
    }
    CSceneNode::Update();
}
```

This just assumes that geom has a Draw method; whatever geom is, it doesn't have a common base class and it isn't overriding any virtual methods, etc... It's kind of faking polymorphism. I've also heard it called parametric polymorphism.

But anyways, my main questions have to do with the setup of geometry nodes in a decentralized renderer, I think this is the way I want to do it.

##### Share on other sites
I'm totally supposed to be at work, but I wanted to drop in and mention that one of the benefits of centralizing the renderer (so that it's the renderer that draws things, versus the objects drawing themselves) is that it reduces the number of places you'll need to refactor if you change your rendering "style" as a whole (i.e., from glBegin/glEnd to using VBOs, which you MUST do eventually because glBegin/glEnd is too slow).

##### Share on other sites
My experience has taught me:

Objects don't know about the renderer as such; they just know about Geometries. Geometries could be meshes, or procedural vertex buffers, or whatever. If the object is a particle system, it would lock the geometry, write vertices to it, and unlock it.

The scene graph nodes are a combination of transform state, material state, and geometries. The scene graph collects the nodes that need to render, and then hands the tuples of (transform,material,geometry) to the renderer, which actually renders.

If you need to support various kinds of lighting models, transparency sorting, etc, then there are two ways of doing this. Either traverse the scene graph many times; once per pass kind (skybox, near-to-far Z, opaque-per-light, far-to-near transparency for example). Or collect all the data, and hand it to the renderer; the renderer will use the material information to figure out what the passes are and when to render the things.

For me, shaders are not something that specifies the "color" of each object. Instead, shaders define the lighting model. Different objects can have different lighting models, but there are "few" shaders in the system compared to "many" objects. Some lighting models provide lots of customization with parameters and maps, of course. This way, each lighting model will tie to one or more passes in the rendering, and your geometry will automatically be sorted by shader (which is one of the most important sorting criteria these days).

##### Share on other sites
hplus0603 touched on many of the issues I was going to, so I'll just add this:

Quote:
 But it seems to me like centralizing it requires some type of polymorphism I am not yet familiar with, or some type of case/switch/if/else pile of mess in the renderer...

It doesn't require that at all... if done correctly.

Quote:
 The main reason I think of separating my MS3D geometry and making it a subclass of geometry is that MS3D model files are indexed, and they don't repeat vertices or anything... I'm not sure if this lends itself well to vertex buffer objects or vertex arrays...

But answer me this: how many times, on average, do you expect to call your "load geometry" functions during the course of your game? How many times, on average, do you expect to call your "render geometry" functions during the course of the game? Most assuredly the latter will be higher; consequently you should optimize for that case.

A lot of "how to load model format N" tutorials show you how to load and create what is basically an in-memory duplicate of the organization of the model's data on disk. This is the strategy you seem to be employing. There are two problems with this; I'll get to those.

What you should probably do instead is translate the model data, during load, into your own format optimized for however you are going to render *geometry in general*. For example, for me, this is basically a collection of index and vertex buffers. You may want to use a std::vector of indices and vertices for now so you can continue using glBegin()/glEnd() until you are comfortable with better methods of draw submission (this is why centralizing would be a good thing; fewer places to change the code).

If you continue to keep different storage formats for each model format you support, you'll run into the aforementioned two problems: first, to add a new model format you not only have to write loading code, but also rendering code. If your loading code instead translated to a format common to all geometry in your engine, you'd just have to write the loading.

The second problem is more important: very often, the format geometry data is stored in is not optimal for rendering purposes. This means you can end up rendering slower than you could, due to various things like poorly batched geometry or poor (or nonexistent) state sorting leading to extensive state changes.

##### Share on other sites
jpetrie, I believe you've just touched on what is the bane of my existence...

The reason I feel like all this crazy polymorphism is needed, or maybe even some icky type switching, is that my geometry drawing method for MS3D models is completely different from everything else's. This, I am assuming, will have to change.

I have to load MS3D geometry into a vertex array somehow, even though it doesn't repeat vertices... or wait, does this mean I can use triangle fans?

Hmm. If all geometry draws pretty much the same way, the need for type switching/elaborate polymorphism fixes is eliminated. Right now, all geometry draws differently in my engine.

Hm, this clears up a lot... I guess I've forgotten that all geometry is virtually the same.

My next step will be (after I fix my texturing) to make it so I can draw all geometry in the same way, with vertex buffer objects (if supported), or else just vertex arrays.

##### Share on other sites
I'm not familiar with the format in question, so I'll assume "doesn't repeat vertices" means that it uses some kind of vertex indexing system (wise) with either one triangle list per material or 1-n triangle strips per material. You just load the vertex data (converting to float if necessary) into a vertex buffer (D3D) or a VA/CVA/VAR/VBO (OpenGL); your renderer should abstract these kinds of hardware buffers anyway. Then you load the indices into another buffer and draw it the way it should be drawn (most likely as triangle lists).

If it is triangle strips, unrolling the indices into a triangle list, keeping vertex cache coherency in mind, is likely to improve vertex performance on relatively modern (that is, under 5 years old) hardware.

##### Share on other sites
I realize what I gotta do now..

I gotta load the ms3d file into my own format..

I gotta modify this structural code.

```cpp
class MS3DModel
{
public:
    MS3DModel();
    virtual ~MS3DModel();

    struct Vertex
    {
        char BoneID;
        float Location[3];
    };
    int NumVertices;
    Vertex *Vertices;

    struct Triangle
    {
        float VertexNormals[3][3];
        float Textures1[3], Textures2[3];
        int VertexIndices[3];
    };
    int NumTriangles;
    Triangle *Triangles;

    struct Mesh
    {
        int MaterialIndex;
        int NumTriangles;
        int *TriangleIndices;
    };
    int NumMeshes;
    Mesh *Meshes;

    struct Material
    {
        float Ambient[4], Diffuse[4], Specular[4], Emissive[4];
        float Shininess;
        GLuint Texture;
        char *TextureFilename;
    };
    int NumMaterials;
    Material *Materials;

    bool Load(const std::string &name);
    void ReloadTextures();
    void Draw();
};
```

So basically, I gotta cut out that triangle definition; GL doesn't need to know what a triangle is. All the normal, tex coord, color, and vertex information needs to go into one struct...

Then we need a vector of indices, which we can copy into the vertex array we create...

##### Share on other sites
I chose a slightly odd approach.

First off, I believe in the "multiple passes" approach, i.e., the programmer/designer/artist is responsible for tagging objects: whether they are transparent, or near, or far away in the background. These tags are attached to geometry, which in my world means the triangle meshes, or other things that submit vertices to the rasterizer. The renderer then uses these tags to decide what to do.

At rendering time, the renderer gathers these geometry nodes and performs appropriate pre-processing, such as depth sorting where required; then the nodes are drawn in the appropriate order.

The scene graph is a tree. Each geometry node is typically a leaf, and each of its parent nodes contains some form of OpenGL state information, such as material, texture, blending and depth buffer modes, and transformations. These higher nodes are applied before rendering the geometry.

If the order of drawing the geometry does not matter, the geometry can be sorted to draw with the fewest state changes possible. This is only possible for solid objects, as transparent stuff generally needs to be drawn back-to-front. But even so, it is possible to figure out non-overlapping transparent objects, for which the drawing order is again irrelevant, and some sort of state-change optimisation can be done.

My nodes are all primitive, i.e., they each contain only one kind of information. A matrix node contains just a matrix, a material node just colour, etc. This should allow for simple tree reordering should the need arise. At rendering time these nodes work like a stack: their data is applied from root to leaf, each step adding a bit of info, possibly changing previous values, the final step being submitting the vertices.

As for importing geometry from files, I don't create a special node for each format, just an importer that converts it to whatever format my geometry nodes use. This keeps things simple.

Now, this system might not be the simplest, but it is very flexible, adding shaders is just a matter of introducing a new node type for it. The nodes must just adhere to the interface dictated by the renderer.
