Scene Graph/Renderer interface

Foo. I'm beginning to find that much of the theory about scene graphs that you find on the net is great for drawing pretty graphs and giving presentations that make you sound smart, but very little of it actually applies in a useful manner. No one likes to talk about the nitty-gritty (but very important!) details. For example, how does one (cleanly) get a scene graph to work with a renderer in a nice, modular environment?

The details: I've got a fairly nice scene graph set up, which manages our scene objects very nicely. Everything renderable or interactable is plugged into it: cameras, lights, static and dynamic meshes, transforms, etc. The basic code is pretty simple:

class CNode
{
public:
                    CNode( void );
    virtual         ~CNode( void );

    // Attach adds a child node; Remove detaches any instances of the
    // given node found below this one.
    int             Attach( CNode* node );
    int             Remove( CNode* node );

    // Recursive traversal: call our own Update/Render, then the
    // child's, then the next sibling's (depth-first).
    void            UpdateNode( void );
    void            RenderNode( void );

private:
    // Per-node behaviour, overridden by each derived node type.
    virtual void    Update( void );
    virtual void    Render( void );

    CNode*          next;
    CNode*          child;

protected:
    uint            type;   // node type identifier (unsigned int typedef)
};
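
For clarity, RenderNode boils down to something like this (a minimal sketch of the traversal described below; the real code may differ a little):

// Minimal sketch of the depth-first traversal described below.
void CNode::RenderNode( void )
{
    Render();                   // this node's own (virtual) render

    if( child )
        child->RenderNode();    // then everything below it

    if( next )
        next->RenderNode();     // then the next sibling
}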

Everything that can be attached to the scene graph inherits this class. We attach nodes to each other using the Attach function, and the Remove function removes any instances of the specified node that occur below the node it's passed into (most of the time the root node). UpdateNode and RenderNode are simple recursive calls that invoke their own Update/Render function, then that of their child node, and finally their next node. (Basically it's a depth-first tree.)

The render function is where our problems come to light, though. Up until recently, each type of node was responsible for handling its own rendering. It worked great except for one problem: it meant putting API calls (OpenGL in this case) directly into the nodes. So, for example, the Camera node would contain a direct call to glLoadMatrixf(). There are several problems with this approach. For one, it tied the code directly and inseparably to a single rendering API. Though I have no need for it, I would like to leave the door open for a DirectX implementation in the future. (Or even a software renderer if I got feeling REALLY ambitious! ^_^) It also meant that it was very difficult to track render states across the board, and even harder to ensure that one object's render routine wasn't going to interfere with another's.

These problems, along with a few aesthetic concerns and the prompting of some friends helping me with the code, pushed me to change the rendering architecture to a more modular and independent design. So, here's our new Render interface (or at least a simplified version):

class CRenderWorld
{
public:
    virtual         ~CRenderWorld( void ) {}

    virtual int     Create( HWND hWnd, CViewParams *params ) = 0;
    virtual int     Destroy( void ) = 0;

    virtual void    Begin( void ) = 0;  // begin a frame
    virtual void    End( void ) = 0;    // end a frame

    // Traverse and draw the scene graph.
    virtual void    Render( CNode *scene ) = 0;

    virtual void    Flip( void ) = 0;   // present the back buffer

protected:
    CCamera*        camera;
    CLight*         light;
    // etc.....
};

The idea, obviously, is that we create one set of basic functions to interface with the renderer, and then create multiple render classes inherited from this one to handle individual rendering pipelines. For example, I would have a CRenderWorldGL class that is an OpenGL implementation of this. To specify the exact renderer you simply say:

CRenderWorld *render = new CRenderWorldGL;

Pretty basic OOP stuff, yes? Okay, here come the fun bits.

First off: obviously at this point we're stuck with providing a lot of new methods for each node to pass the necessary information out to the renderer, which means that when programming new nodes you have to anticipate which information the renderer will need, and that may differ from renderer to renderer. To an extent this is unavoidable with the OOP style of programming, but it's made more aggravating here by the needs of the system.

Secondly: for every new node that is added, we are now faced with going into the core rendering code and adding new render routines to accommodate the new node type. In my opinion this kills a lot of the usefulness of the scene graph, since a big plus (in my mind) was the ability to add new items in a very simple and modular manner.

In the end, I feel like I traded one set of problems for an entirely new set of them. I feel that there must be a better way of handling this, and am wondering how others with more complete rendering systems have handled it. Is there a more elegant way, or am I simply facing the cold hard reality of how it's done and just don't know it yet? (If you'd like any more details on how my code is structured, just ask. I simply feel like this post is long enough as it is and don't want to scare anyone away ^_^)
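
Just to make that second problem concrete, the per-node-type dispatch inside the renderer ends up looking something like this. This is a hypothetical sketch, not my actual code: GetType/GetChild/GetNext, the NODE_* constants, and the Draw*/Apply*/Set* helpers are all made-up names.

// Hypothetical sketch of the per-type dispatch problem; every name
// below that isn't CRenderWorldGL or CNode is invented for illustration.
void CRenderWorldGL::RenderSubtree( CNode *node )
{
    if( node == NULL )
        return;

    switch( node->GetType() )
    {
    case NODE_MESH:   DrawMesh( node );    break;
    case NODE_LIGHT:  ApplyLight( node );  break;
    case NODE_CAMERA: SetCamera( node );   break;
    // Every new node type means another case here, plus new accessor
    // methods on the node so the renderer can pull its data out.
    }

    RenderSubtree( node->GetChild() );
    RenderSubtree( node->GetNext() );
}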
// The user formerly known as Tojiro67445, formerly known as Toji [smile]
This is a massive subject with many pro/con arguments to different implementations.

Probably the best reference I found while constructing my scene graph was here, and it's what my graph is now based on. It uses a Renderer interface so all low-level calls are hidden from the main engine, plus the concept of Global states and local Effects to apply variation to model data.

The book includes full source and is very well written; it helped me out a lot.
[happy coding]
Thanks for the suggestion. I've been eyeing that book for a while now; maybe I'll have to finally just go for it!

I realize this is a pretty big topic, and I don't honestly expect a quick, concise reply that's going to solve all of my problems. Actually, the post above was more of a therapeutic thing for me than anything else. I find that if I take the time to write out my problems in detail, it helps me visualize them and may bring out solutions I hadn't thought about until I put it all down in writing.

My little thinking process goes something like this: I'll usually stew about a problem for a few days, writing up little chunks of code to help me see how it all connects together. If I haven't found a solution by then, I begin writing a Gamedev post asking for help on the subject, trying to be as detailed as possible. Many times once the post has been written I've had this great new idea that I can go and fiddle with, in which case the post is scrapped and I go back to work. If I still am stumped, however, I click post and hope for the best.

A good 90% of the questions I write out never actually make it to the forum ^_^

Anyway, thanks again for the suggestion, and I'm still more than willing to hear any others that people have to offer! In the meantime, I've caught a notion or two that I'm going to play with...
// The user formerly known as Tojiro67445, formerly known as Toji [smile]
It's very true that formulating a question sometimes becomes the answer itself!

After raving about that book, I should mention that it does have a number of things missing for a complete game, such as integration of sound, networking, etc.; have a look at the table of contents on Amazon. If you just want to look at the source, go to http://www.geometrictools.com/ and pick through it, though it's much easier with the book.
[happy coding]
(Nice book by the way, may have to read it myself)

Anyway, the solution to your problem is indeed to create a render class that can be inherited from for multiple platform/API render systems. Now you may think that there is a plethora of information that you have to store in your objects, but if you code it right, you only have to store the mesh and the textures, plus a pointer to the material.

The example I will use here is my engine. Scene graph nodes in my engine store a mesh, their textures, and a pointer to the material which defines the way the mesh wants to look. The render device traverses my scene graph (in my implementation using both quad-tree culling for landscapes and portal rendering for indoor scenes) and requests from each object a pointer to its geometry descriptors (a geom desc is a collection of a mesh, texture set, and material; of course objects can return that they don't have any geometry information). It then stores the info in geom pots by shader, and at the end all the triangles are dropped to the shaders and everything is rendered.

Now the only other thing, and the solution to your problem, is this: the material defines a material type and a few parameters, which are passed to the render device on initialization. The render device returns an ID of the shader that will fulfill the needs of the material, and this ID is stored in the material for later use by the render device. Thus all calls to libraries live in a render system using pluggable shaders. This implementation is extremely flexible as to what is done with the data in the shaders, as well as cross-platform/API: simply write a new render class and shader set.
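
Something like this, roughly. The names here (CMaterial, CRenderDevice, RegisterMaterial, DrawBatch) are just illustrative, not my exact classes, but the flow is the one described above:

// Rough illustration only; the class and method names are made up.
class CMaterial
{
public:
    int     type;           // material type (e.g. lit, unlit, env-mapped)
    float   params[4];      // a few material parameters
    int     shaderID;       // filled in by the render device at init
};

class CRenderDevice
{
public:
    // Called once at initialization: pick (or build) a shader that
    // fulfills the material's needs and hand back its ID.
    virtual int  RegisterMaterial( const CMaterial &mat ) = 0;

    // At render time the device only ever deals in shader IDs, so all
    // API-specific calls stay inside the device/shader classes.
    virtual void DrawBatch( int shaderID /*, geometry, textures... */ ) = 0;
};

// At init:     mat.shaderID = device->RegisterMaterial( mat );
// At render:   sort geom descs by mat.shaderID, submit each batch
//              to the matching shader via DrawBatch().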

I hope this helps. I'm not sure if it's the best method to implement this functionality, but it seems to do an effective job for me.

- Matlock
Thanks Nihilistic!

That actually sounds a lot like what I was thinking of doing, with the exception that I was planning to have the renderer (or at least a resource manager within it) store the actual data rather than the nodes themselves, which would instead store a handle to the appropriate dataset. Still haven't actually tried IMPLEMENTING it, but it sounds nice in theory! ^_^ (Doesn't it always?)
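
In other words, something vaguely like this. These are completely hypothetical names just to show the handle idea; none of them exist in my engine yet:

// Hypothetical sketch of the handle idea.
#include <vector>

class CMeshData;    // the actual vertex/index data, owned renderer-side

typedef unsigned int MeshHandle;

class CResourceManager
{
public:
    MeshHandle  LoadMesh( const char *filename );   // returns a handle
    void        FreeMesh( MeshHandle handle );
    CMeshData*  GetMesh( MeshHandle handle );       // renderer-only access

private:
    std::vector<CMeshData*> meshes;                 // real data lives here
};

class CMeshNode : public CNode
{
private:
    MeshHandle  mesh;   // the node just carries a lightweight handle
};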

Still wide open for suggestions, though!

[Edited by - Toji on July 19, 2005 11:21:14 PM]
// The user formerly known as Tojiro67445, formerly known as Toji [smile]
My approach is slightly different. I use the scenegraph purely to describe the scene data. The nodes do not even contain a Render function. Once the user has finished updating the graph and requests it to be rendered, the traversal function will sort the graph into a list of renderables (i.e. typically the geometries and lights in the scene). This sorting can be important as it allows you to group objects by texture or shader.

In the next step, this sorted list of objects is passed on to a pipeline of processor modules. (Processor is my own terminology; accepting suggestions for a better name). Each processor performs operations on the list of objects. The first processor is typically a culler that marks all objects out of the frustum as culled. The next processor could be an OpenGL (or Direct3D or software) renderer that uses a specific API to render the contents of the list.

In my library there are many more processors. For example: a processor for retrieving some statistics on the list (such as the number of triangles in it) that is useful for debugging, and a range of processors used for creating a distributed scenegraph over a set of PCs (I do parallel rendering).

This way you'll have a really flexible system that allows for very easy configuration (just choose the processors in the pipeline) and that can be changed while running.
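
In rough C++ terms the idea looks something like this. It's a simplified sketch: the real interfaces carry more state than this, and Renderable here just stands for whatever comes out of the traversal.

// Simplified sketch; the real interfaces carry more state than this.
#include <cstddef>
#include <vector>

class Renderable;   // a geometry or light extracted from the graph

class Processor
{
public:
    virtual ~Processor() {}
    // Each processor reads and/or modifies the sorted list of renderables.
    virtual void Process( std::vector<Renderable*> &objects ) = 0;
};

class FrustumCuller : public Processor
{
public:
    virtual void Process( std::vector<Renderable*> &objects )
    {
        // mark everything outside the view frustum as culled
    }
};

class OpenGLRenderer : public Processor
{
public:
    virtual void Process( std::vector<Renderable*> &objects )
    {
        // issue the OpenGL calls for everything not culled
    }
};

// The pipeline itself is just an ordered list of processors, run in
// sequence each frame and reconfigurable at run-time.
void RunPipeline( std::vector<Processor*> &pipeline,
                  std::vector<Renderable*> &objects )
{
    for( std::size_t i = 0; i < pipeline.size(); ++i )
        pipeline[i]->Process( objects );
}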

I'm not stating here that my solution is the best and without flaws; it's simply another option...

Tom

EDIT: One example of a drawback of this approach is that your second problem:
Quote:
For every new node that is added, we are now faced with going into the core rendering code and adding new render routines to accommodate the new node type. In my opinion this kills a lot of the usefulness of the scene graph, since a big plus (in my mind) was the ability to add new items in a very simple and modular manner.

is also true for my approach. However, I believe that if your node requires you to add/modify code in the rendering core, you would have had to implement its rendering routines anyway, because it apparently has unique rendering behaviour (if not, it could have been derived from an existing scene node class). It's just that the place where this rendering code is inserted is more awkward (far away from the object, in the rendering core).
Hm... I like that idea very much! The whole "processor" idea ("Render Stage" maybe?) is intriguing. I'm thinking that if you could make the stages pretty generalized in their interfaces you could plug them into the render pipeline on the fly, allowing users to define new stages to go along with new geometry types. That probably isn't nearly as easy in code form as I made it sound (it never is), but it's worth looking into.

Oh, and just for kicks: One of the better scenegraph papers I've found thus far is This One. He sticks very close to the practical details, so anyone who's interested may want to take a look!
// The user formerly known as Tojiro67445, formerly known as Toji [smile]
Quote:Original post by Toji
Hm... I like that idea very much! The whole "processor" idea ("Render Stage" maybe?) is intriguing. I'm thinking that if you could make the stages pretty generalized in their interfaces you could plug them into the render pipeline on the fly, allowing users to define new stages to go along with new geometry types. That probably isn't nearly as easy in code form as I made it sound (it never is), but it's worth looking into.

That's how my engine works. I have 'graph-operators' which all inherit from a base SG_Operator. These can be thrown together in any way, e.g. UpdateOp->CullOp->RenderOp.

Any given operator conforms to one of four classifications of graph-operators. An operator is first classified by its type of dispatch: static or dynamic. Static dispatch is achieved through the visitor pattern and is fixed at compile time; dynamic dispatch is done through functors and can be changed at run-time. Next, an operator is classified by what it dispatches on: per-nodeType or static (all node types). Per-nodeType operators perform a different operation for each type of node (geometry, transform, etc.); static operators perform the same operation for every node (such as culling).

Creating new scene-operators is easy, so new operations can be added without changing the node base class or adding new virtual functions. Adding new nodes is possible too, but for static-dispatch scene-operators new functions must be added (this is the major con of the visitor pattern); the dynamic-dispatch operators have no such problem, provided new nodes register their own functors for each operation (unless they're anti-social :)
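
To give a flavour of the dynamic-dispatch side only: the names below are simplified and made up for this post (my real code differs), and the "functor" is shown as a plain function pointer for brevity.

// Simplified/made-up names to show the run-time registration idea.
#include <map>

class SceneNode;    // whatever the node base class happens to be

typedef unsigned int NodeTypeId;
typedef void (*NodeFunc)( SceneNode *node );

class DynamicOperator
{
public:
    // New node types register their handler for this operation at run-time.
    void Register( NodeTypeId type, NodeFunc func )
    {
        handlers[type] = func;
    }

    // Dispatch on the node's type; unregistered (anti-social) types
    // are simply skipped.
    void Apply( SceneNode *node, NodeTypeId type )
    {
        std::map<NodeTypeId, NodeFunc>::iterator it = handlers.find( type );
        if( it != handlers.end() )
            it->second( node );
    }

private:
    std::map<NodeTypeId, NodeFunc> handlers;
};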

Scene-operators can be removed from the pipeline and even replaced at run-time. The engine maintains one single pipeline of scene-operators, but there's no reason why the user can't maintain a pipeline themselves (for some neat tricks).

The RenderOp doesn't render geometry directly; it batches it up, sorts it by material, etc. (all of which can be customized through functors and virtual functions), and then renders it. Static geometry is pre-processed, so after some culling it's all ready to go.
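
The batching/sorting step is nothing fancy, something along these lines (illustrative names only, not my actual structures):

// Illustrative only: sort queued items by a material/shader key before
// submitting, so state changes are minimized.
#include <algorithm>
#include <vector>

struct RenderItem
{
    unsigned int materialKey;   // e.g. shader/texture sort key
    const void  *geometry;      // whatever the engine batches
};

struct ByMaterial
{
    bool operator()( const RenderItem &a, const RenderItem &b ) const
    {
        return a.materialKey < b.materialKey;
    }
};

void SortBatch( std::vector<RenderItem> &items )
{
    std::sort( items.begin(), items.end(), ByMaterial() );
    // ...then walk the sorted list, binding each material once and
    // drawing all geometry that uses it.
}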

Hope that helps :)

[Edited by - dmatter on October 12, 2005 11:42:46 AM]
