OpenGL Scene Graph/Renderer interface

Toji    535
Foo. I'm beginning to find that much of the theory about scene graphs that you find on the net is great for drawing pretty graphs and giving presentations that make you sound smart, but very little of it actually applies in a useful manner. No one likes to talk about the nitty-gritty (but very important!) details. For example, how does one (cleanly) get a scene graph to work with a renderer in a nice, modular environment?

The details: I've got a fairly nice scene graph set up, which manages our scene objects very nicely. Everything renderable or interact-able is plugged into it: cameras, lights, static and dynamic meshes, transforms, etc. The basic code is pretty simple:
class CNode
{
public:
                    CNode( void );
    virtual         ~CNode( void );

    // Attach a child node / remove any instance of a node below this one.
    int             Attach( CNode* node );
    int             Remove( CNode* node );

    // Recursive traversal: this node, then its children, then its siblings.
    void            UpdateNode( void );
    void            RenderNode( void );

private:
    // Per-node-type behaviour, overridden by derived node classes.
    virtual void    Update( void );
    virtual void    Render( void );

    CNode*          next;
    CNode*          child;

protected:
    uint            type;
};
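Just so the traversal is concrete, RenderNode boils down to a simple depth-first recursion, roughly like this sketch (the real code has a bit more bookkeeping, and UpdateNode mirrors it):

void CNode::RenderNode( void )
{
    Render();                   // this node's own (virtual) render step

    if( child )
        child->RenderNode();    // depth first: children before siblings

    if( next )
        next->RenderNode();     // then carry on with the next sibling
}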

Everything that can be attached to the scene graph inherits this class. We attach nodes to each other using the Attach function, and the Remove function removes any instances of the specified node that occur below the node it's passed into (most of the time the root node). UpdateNode and RenderNode are simple recursive function calls that call their own Update/Render function, then that of their child node, and finally their next node. (Basically it's a depth-first tree.)

The render function is where our problems come to light, though. Up until recently, I had it so that each type of node was responsible for handling its own rendering. It worked great except for one problem: it meant putting API calls (OpenGL in this case) directly into the nodes. So, for example, the Camera node would contain a direct call to glLoadMatrixf(). There are several problems with this approach. For one, it tied the code directly and inseparably to a single rendering API. Though I have no need for it, I would like to leave the doors open for a DirectX implementation in the future. (Or even a software renderer if I got feeling REALLY ambitious! ^_^) It also meant that it was very difficult to track rendering states across the board, and even harder to ensure that one object's render routine wasn't going to interfere with another's.

These problems, along with a few aesthetic values and the prompting of some friends helping me with the code, pushed me to change the rendering architecture to a more modular and independent design. So, here's our new Render interface (or at least a simplified version):
class CRenderWorld
{
public:
    virtual int     Create( HWND hWnd, CViewParams *params ) = 0;
    virtual int     Destroy( void ) = 0;

    virtual void    Begin( void ) = 0;
    virtual void    End( void ) = 0;

    virtual void    Render( CNode *scene ) = 0;

    virtual void    Flip( void ) = 0;

protected:
    CCamera*        camera;
    CLight*         light;
    // etc...
};

The idea, obviously, is that we create one set of basic functions to interface with the renderer, and then create multiple render classes inherited from this to handle individual render pipelines. For example, I would have a CRenderWorldGL class that is an OpenGL implementation of this. To specify the exact renderer you simply say:
CRenderWorld *render = new CRenderWorldGL;
Pretty basic OOP stuff, yes? Okay, here come the fun bits.

First off: obviously at this point we're stuck with providing a lot of new methods for each node to pass out the necessary information to the renderer, which means that when programming new nodes you have to anticipate which information the renderer will need, which may be different from renderer to renderer. To an extent this is unavoidable with the OOP style of programming, but it's made more aggravating here by the needs of the system.

Secondly: for every new node that is added, we are now faced with going into the core rendering code and adding new render routines to accommodate the new node type. In my opinion this kills a lot of the usefulness of the scene graph, since a big plus (in my mind) was the ability to add new items in a very simple and modular manner.

In the end, I feel like I traded one set of problems for an entirely new set of them. I feel that there must be a better way of handling this, and am wondering how others with more complete rendering systems have handled it. Is there a more elegant way, or am I simply facing the cold hard reality of how it's done and just don't know it yet? (If you'd like any more details on how my code is structured, just ask; I simply feel like this post is long enough as it is and don't want to scare anyone away ^_^)
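To make that second point concrete, here's roughly what the renderer side ends up looking like (just an illustration of the shape of the problem; the NODE_* constants and the Get*() accessors are assumed, not shown in the class above):

// Illustration only: every new node type means another case inside the renderer.
void CRenderWorldGL::Render( CNode *scene )
{
    for( CNode *node = scene; node != NULL; node = node->GetNext() )
    {
        switch( node->GetType() )
        {
        case NODE_CAMERA:
            // camera-specific GL calls (glLoadMatrixf() etc.) end up here
            break;
        case NODE_LIGHT:
            // light setup
            break;
        case NODE_MESH:
            // vertex submission
            break;
        // ...and one more case for every new node type...
        }

        if( node->GetChild() )
            Render( node->GetChild() );
    }
}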

Structure    240
This is a massive subject with many pro/con arguments for different implementations.

Probably the best reference I found while constructing my scene graph was here, and it is what my graph is now based on. It uses a Renderer interface so all low-level calls are hidden from the main engine, plus the concept of global states and local effects to apply variation to model data.

The book includes full source and is very well written; it helped me out a lot.

Toji    535
Thanks for the suggestion. I've been eyeing that book for a while now, maybe I'll have to finally just go for it!

I realize this is a pretty big topic, and I don't honestly expect a quick, concise reply that's going to solve all of my problems. Actually, the post above was more of a therapeutic thing for me than anything else. I find that if I take the time to write out my problems in detail it helps me visualise them, and may bring out solutions I hadn't thought about till I put it all down in writing.

My little thinking process goes something like this: I'll usually stew about a problem for a few days, writing up little chunks of code to help me see how it all connects together. If I haven't found a solution by then, I begin writing a Gamedev post asking for help on the subject, trying to be as detailed as possible. Many times, once the post has been written, I've had some great new idea that I can go and fiddle with, in which case the post is scrapped and I go back to work. If I'm still stumped, however, I click post and hope for the best.

A good 90% of the questions I write out never actually make it to the forum ^_^

Anyway, thanks again for the suggestion, and I'm still more than willing to hear any others that people have to offer! In the meantime, I've caught a notion or two that I'm going to play with...

Structure    240
It's very true that formulating a question sometimes becomes the answer itself!

After raving about that book, I should mention that it does have a number of things missing for a complete game, such as integration of sound, networking, etc.; have a look at the table of contents on Amazon. If you just want to look at the source, go to http://www.geometrictools.com/ and pick through it, though it's much easier with the book.

(Nice book by the way, may have to read it myself)

Anyway, the solution to your problem is indeed to create a render class that can be inherited from for multiple platform/API render systems. Now you may think that there is a plethora of information that you have to store in your objects, but if you code it right, you only have to store the mesh and the textures, plus a pointer to the material. The example I will use here is my engine. Scene graph nodes in my engine store a mesh, their textures, and a pointer to the material, which defines the way in which the mesh wants to look.

The render device traverses my scene graph (in my implementation using both quad-tree culling for landscapes and portal rendering for indoor scenes) and requests from each object a pointer to its geometry descriptors (a geom desc is a collection of a mesh, texture set, and material; of course, objects can return that they don't have any geometry information). It then stores the info in geom pots by shader, and at the end all the triangles are dropped to the shaders and everything is rendered.

Now the only other thing, and the solution to your problem, is this: the material defines a material type and a few parameters, which are passed to the render device on initialization. The render device returns an ID of the shader that will fulfill the needs of the material, and this is stored in the material for later use by the render device. Thus all calls to libraries are now in a render system using pluggable shaders. This implementation is extremely flexible as to what is done with data in the shaders, as well as cross-platform/API by simply writing a new render class and shader set.
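In sketch form the pieces look something like this (names are made up for illustration, not my actual code):

// Illustrative sketch of the idea above; all names invented.
class CRenderDevice
{
public:
    // Maps a material description to a shader that can fulfill it
    // and returns that shader's ID.
    virtual int RequestShader( int materialType, const float* params ) = 0;
};

class CMaterial
{
public:
    // Called once at initialization: cache the shader ID the device hands back.
    void Register( CRenderDevice* device )
    {
        shaderID = device->RequestShader( type, params );
    }

    int    type;        // material type
    float  params[4];   // a few parameters
    int    shaderID;    // filled in by the render device, used at render time
};

// A geometry descriptor: what an object hands out when the graph is traversed.
struct GeomDesc
{
    class CMesh*    mesh;
    class CTexture* textures;
    CMaterial*      material;
};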

I hope this helps. I'm not sure if it's the best method to implement this functionality, but it seems to do an effective job for me.

- Matlock

Toji    535
Thanks Nihilistic!

That actually sounds a lot like what I was thinking of doing, with the exception that I was thinking about having the renderer (or at least a resource manager within it) store the actual data rather than the nodes themselves, which instead would store a handle to the appropriate dataset. Still haven't actually tried IMPLEMENTING it, but it sounds nice in theory! ^_^ (Doesn't it always?)
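Something like this is the rough idea (completely untested, names invented on the spot):

typedef unsigned int MeshHandle;

// The renderer's resource manager owns the actual data...
class CResourceManager
{
public:
    MeshHandle  LoadMesh( const char* filename );   // load (or find) a mesh, hand back a handle
    void*       GetMeshData( MeshHandle handle );   // renderer-side lookup of the real data
};

// ...and the node just keeps a handle, with no API-specific data in it.
class CMeshNode : public CNode
{
private:
    MeshHandle  mesh;
};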

Still wide open for suggestions, though!

[Edited by - Toji on July 19, 2005 11:21:14 PM]

dimebolt    440
My approach is slightly different. I use the scenegraph purely to describe the scene data. The nodes do not even contain a Render function. Once the user has finished updating the graph and requests it to be rendered, the traversal function will sort the graph into a list of renderables (i.e. typically the geometries and lights in the scene). This sorting can be important as it allows you to group objects by texture or shader.

In the next step, this sorted list of objects is passed on to a pipeline of processor modules. (Processor is my own terminology; accepting suggestions for a better name). Each processor performs operations on the list of objects. The first processor is typically a culler that marks all objects out of the frustum as culled. The next processor could be an OpenGL (or Direct3D or software) renderer that uses a specific API to render the contents of the list.

In my library there are many more processors. For example: a processor for retrieving some statistics on the list (such as the number of triangles in it) that is useful for debugging, and a range of processors used for creating a distributed scenegraph over a set of PCs (I do parallel rendering).

This way you'll have a really flexible system that allows for very easy configuration (just choose the processors in the pipeline), which can even be changed while running.
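In sketch form the pipeline is nothing more than this (simplified names, with Renderable standing in for whatever the sorted list holds):

#include <cstddef>
#include <vector>

class Renderable;   // whatever the sorted list of geometries/lights holds

class Processor
{
public:
    virtual ~Processor() {}
    virtual void Process( std::vector<Renderable*>& objects ) = 0;
};

class FrustumCuller : public Processor
{
public:
    void Process( std::vector<Renderable*>& objects );   // marks out-of-frustum objects as culled
};

class OpenGLRenderer : public Processor
{
public:
    void Process( std::vector<Renderable*>& objects );   // issues the actual API calls
};

// The pipeline itself is just an ordered list of processors applied to the sorted list.
void RunPipeline( std::vector<Processor*>& pipeline, std::vector<Renderable*>& objects )
{
    for( std::size_t i = 0; i < pipeline.size(); ++i )
        pipeline[i]->Process( objects );
}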

I'm not stating here that my solution is the best and without flaws; it's simply another option...

Tom

EDIT: One example of a drawback of this approach is that your second problem:
Quote:

For every new node that is added, we are now faced with going into the core rendering code and adding new render routines to accommodate the new node type. In my opinion this kills a lot of the usefulness of the scene graph, since a big plus (in my mind) was the ability to add new items in a very simple and modular manner.

is also true for my approach. However, I believe that if your node requires you to add/modify code in the rendering core, you would have had to implement its rendering routines anyway, because it apparently has unique rendering behaviour (if not, it could have been derived from an existing scene node class). Only the place where this rendering code is inserted is more awkward (far away from the object, in the rendering core).

Toji    535
Hm... I like that idea very much! The whole "processor" idea ("Render Stage", maybe?) is intriguing. I'm thinking that if you could make the stages pretty generalized in their interfaces, you could plug them into the render pipeline on the fly, allowing users to define new stages to go along with new geometry types. That probably isn't nearly as easy in code form as I made it sound (it never is), but it's worth looking into.

Oh, and just for kicks: One of the better scenegraph papers I've found thus far is This One. He sticks very close to the practical details, so anyone who's interested may want to take a look!

dmatter    4826
Quote:
Original post by Toji
Hm... I like that idea very much! The whole "processor" idea ("Render Stage", maybe?) is intriguing. I'm thinking that if you could make the stages pretty generalized in their interfaces, you could plug them into the render pipeline on the fly, allowing users to define new stages to go along with new geometry types. That probably isn't nearly as easy in code form as I made it sound (it never is), but it's worth looking into.

That's how my engine works. I have 'graph-operators' which all inherit from a base SG_Operator. These can be thrown together in any way, e.g. UpdateOp->CullOp->RenderOp.

Any given operator conforms to one of four different classifications of graph-operators. An operator is first classified by the type of dispatch: Static or Dynamic. Static dispatch is achieved through the visitor pattern and is fixed at compile time; dynamic dispatch is done through functors and can be changed at run-time. Next, an operator is classified by what it dispatches on: Per-nodeType or Static (all node types). Per-nodeType operators perform a different operation for each type of node (geometry, transform, etc.); Static operators perform the same operation for every node (such as culling).

Creating new scene-operators is easy, so new operations can be added without changing the node base class or adding new virtual functions. Adding new nodes is possible, but for static-dispatch scene-operators new functions must be added (the major con of the visitor pattern); the dynamic-dispatch operators have no such problem, provided new nodes register their own functors for each operation (unless they're anti-social :)
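A stripped-down sketch of the dynamic-dispatch flavour (illustrative only, not my actual code; deriving from SG_Operator is omitted to keep it short):

#include <map>

class SceneNode
{
public:
    virtual ~SceneNode() {}
    virtual int GetType() const = 0;   // assumed: nodes report a type ID
};

typedef void (*NodeFunctor)( SceneNode* node );   // plain function pointer here; real functor objects work the same way

class DynamicOperator
{
public:
    // New node types register a functor per operation instead of the operator
    // (or the node base class) growing new virtual functions.
    void RegisterFunctor( int nodeType, NodeFunctor fn ) { functors[nodeType] = fn; }

    void Apply( SceneNode* node )
    {
        std::map<int, NodeFunctor>::iterator it = functors.find( node->GetType() );
        if( it != functors.end() )
            it->second( node );   // per-nodeType behaviour
        // anti-social nodes that never registered are simply skipped
    }

private:
    std::map<int, NodeFunctor> functors;
};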

Scene-operators can be removed from the pipeline and even replaced during run-time. The engine maintains one single pipeline of scene-operators, but there's no reason why the user can't maintain a pipeline themselves (for some neat tricks).

The RenderOp doesn't render geometry directly; it batches it up, sorts it by material, etc. (all of which can be customized through functors and virtual functions) and then renders it. Static geometry is pre-processed, so after some culling it's all ready to go.
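The batching/sorting bit is roughly this (again, just a sketch):

#include <algorithm>
#include <cstddef>
#include <vector>

struct RenderItem
{
    int    materialId;   // sort key: material/shader/texture, maybe depth folded in too
    void*  geometry;     // stand-in for whatever the real geometry record is
};

bool ByMaterial( const RenderItem& a, const RenderItem& b )
{
    return a.materialId < b.materialId;
}

void FlushQueue( std::vector<RenderItem>& queue )
{
    std::sort( queue.begin(), queue.end(), ByMaterial );   // group by material to minimise state changes
    for( std::size_t i = 0; i < queue.size(); ++i )
    {
        // bind the material's state once per group, then draw queue[i].geometry
    }
    queue.clear();
}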

Hope that helps :)

[Edited by - dmatter on October 12, 2005 11:42:46 AM]

