Toji

OpenGL Scene Graph/Renderer interface


Foo. I'm beginning to find that much of the theory about scene graphs you find on the net is great for drawing pretty graphs and giving presentations that make you sound smart, but very little of it actually applies in a useful manner. No one likes to talk about the nitty-gritty (but very important!) details. For example, how does one (cleanly) get a scene graph to work with a renderer in a nice, modular environment?

The details: I've got a fairly nice scene graph set up, which manages our scene objects very nicely. Everything renderable or interactable is plugged into it: cameras, lights, static and dynamic meshes, transforms, etc. The basic code is pretty simple:
class CNode
{
public:
    CNode( void );
    virtual ~CNode( void );

    int Attach( CNode* node );
    int Remove( CNode* node );

    void UpdateNode( void );
    void RenderNode( void );

private:
    virtual void Update( void );
    virtual void Render( void );

    CNode* next;
    CNode* child;

protected:
    uint type;
};
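(For reference, a hypothetical fleshing-out of the traversal described here, with my own toy implementations of Attach/Render; this is a sketch, not the original code. RenderNode runs this node's own virtual Render, then recurses into child, then next.)

```cpp
#include <cassert>

// Sketch of the depth-first traversal: each node renders itself, then its
// first child, then its next sibling. Attach prepends to the child list.
class CNode
{
public:
    CNode() : next(nullptr), child(nullptr) {}
    virtual ~CNode() {}

    int Attach(CNode* node)
    {
        node->next = child;   // new node's sibling is the old first child
        child = node;
        return 0;
    }

    void RenderNode()
    {
        Render();                        // this node's own drawing
        if (child) child->RenderNode();  // then everything below it
        if (next)  next->RenderNode();   // then the rest of this level
    }

private:
    virtual void Render() {}

    CNode* next;
    CNode* child;
};

// Illustrative concrete node that just counts its own Render() calls.
class CCountNode : public CNode
{
public:
    int rendered = 0;
private:
    void Render() override { ++rendered; }
};
```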

Everything that can be attached to the scene graph inherits this class. We attach nodes to each other using the Attach function, and the Remove function removes any instances of the specified node that occur below the node it's passed (most of the time the root node). UpdateNode and RenderNode are simple recursive function calls that call their own Update/Render function, then that of their child node, and finally their next node. (Basically it's a depth-first tree.)

The render function is where our problems come to light, though. Up until recently, each type of node was responsible for handling its own rendering. It worked great except for one problem: it meant putting API calls (OpenGL in this case) directly into the nodes. So, for example, the Camera node would contain a direct call to glLoadMatrixf(). There are several problems with this approach. For one, it tied the code directly and inseparably to a single rendering API. Though I have no need for it, I would like to leave the door open for a DirectX implementation in the future. (Or even a software renderer if I got feeling REALLY ambitious! ^_^) It also meant that it was very difficult to track rendering states across the board, and even harder to ensure that one object's render routine wasn't going to interfere with another's.

These problems, along with a few aesthetic values and the prompting of some friends helping me with the code, pushed me to change the rendering architecture to a more modular and independent design. So, here's our new Render interface (or at least a simplified version):
class CRenderWorld
{
public:
    virtual int  Create( HWND hWnd, CViewParams *params ) = 0;
    virtual int  Destroy( void ) = 0;

    virtual void Begin( void ) = 0;
    virtual void End( void ) = 0;

    virtual void Render( CNode *scene ) = 0;

    virtual void Flip( void ) = 0;

protected:
    CCamera* camera;
    CLight*  light;
    // etc.
};

The idea, obviously, is that we create one set of basic functions to interface with the renderer, and then create multiple render classes inherited from this to handle individual render pipelines. For example, I would have a CRenderWorldGL class that is an OpenGL implementation of this. To specify the exact renderer you simply say:
CRenderWorld *render = new CRenderWorldGL;
Pretty basic OOP stuff, yes? Okay, here come the fun bits.

First off: obviously at this point we're stuck with providing a lot of new methods for each node to pass out the necessary information to the renderer, which means that when programming new nodes you have to anticipate which information the renderer will need, which may differ from renderer to renderer. To an extent this is unavoidable with the OOP style of programming, but it's made more aggravating here by the needs of the system.

Secondly: for every new node that is added, we are now faced with going into the core rendering code and adding new render routines to accommodate the new node type. In my opinion this kills a lot of the usefulness of the scene graph, since a big plus (in my mind) was the ability to add new items in a very simple and modular manner.

In the end, I feel like I traded one set of problems for an entirely new set of them. I feel that there must be a better way of handling this, and am wondering how others with more complete rendering systems have handled it. Is there a more elegant way, or am I simply facing the cold hard reality of how it's done and just don't know it yet? (If you'd like any more details on how my code is structured, just ask; I simply feel like this post is long enough as it is and don't want to scare anyone away ^_^)
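(To make that second problem concrete: the API-specific renderer tends to end up with a per-type dispatch along these lines. This is a hypothetical sketch with made-up names, with GL calls reduced to comments and counters so it stands alone; every new node type means another case inside the renderer core.)

```cpp
#include <cassert>

// Hypothetical node types tagged with an enum, like the original CNode::type.
enum NodeType { NODE_CAMERA, NODE_MESH, NODE_LIGHT };

struct Node { NodeType type; Node* child; Node* next; };

// Sketch of a GL-side render dispatch: each node type needs its own case,
// so adding a node type means editing the renderer itself.
class CRenderWorldGL_Sketch
{
public:
    void Render(Node* n)
    {
        if (!n) return;
        switch (n->type)
        {
        case NODE_CAMERA: /* e.g. glLoadMatrixf(...)  */ ++cameras; break;
        case NODE_MESH:   /* e.g. glDrawElements(...) */ ++meshes;  break;
        case NODE_LIGHT:  /* e.g. glLightfv(...)      */ ++lights;  break;
        }
        Render(n->child);
        Render(n->next);
    }
    int cameras = 0, meshes = 0, lights = 0;
};
```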

This is a massive subject with many pro/con arguments to different implementations.

Probably the best reference I found while constructing my scene graph was here, and it is what my graph is now based on. It uses a Renderer interface so all low-level calls are hidden from the main engine, and then the concept of global states and local effects to apply variation to model data.

The book includes full source and is very well written; it helped me out a lot.

Thanks for the suggestion. I've been eyeing that book for a while now, maybe I'll have to finally just go for it!

I realize this is a pretty big topic, and I honestly don't expect a quick, concise reply that's going to solve all of my problems. Actually, the post above was more of a therapeutic thing for me than anything else. I find that if I take the time to write out my problems in detail it helps me visualise them, and may bring out solutions I hadn't thought about till I put it all down in writing.

My little thinking process goes something like this: I'll usually stew about a problem for a few days, writing up little chunks of code to help me see how it all connects together. If I haven't found a solution by then, I begin writing a Gamedev post asking for help on the subject, trying to be as detailed as possible. Many times once the post has been written I've had this great new idea that I can go and fiddle with, in which case the post is scrapped and I go back to work. If I still am stumped, however, I click post and hope for the best.

A good 90% of the questions I write out never actually make it to the forum ^_^

Anyway, thanks again for the suggestion, and I'm still more than willing to hear any others that people have to offer! In the meantime, I've caught a notion or two that I'm going to play with...

It's very true that formulating a question sometimes becomes the answer itself!

After raving about that book, I should mention that it does have a number of things missing for a complete game, such as integration of sound, networking, etc.; have a look at the table of contents on Amazon. If you just want to look at the source, go to http://www.geometrictools.com/ and pick through it, though it's much easier with the book.

(Nice book by the way, may have to read it myself)

Anyway, the solution to your problem is indeed to create a render class that can be inherited from for multiple platform/API render systems. Now you may think there is a plethora of information you have to store in your objects, but if you code it right, you only have to store the mesh and the textures, plus a pointer to the material.

The example I will use here is my engine. Scene graph nodes in my engine store a mesh, their textures, and a pointer to the material which defines the way the mesh wants to look. The render device traverses my scene graph (in my implementation using both quad-tree culling (for landscapes) and portal rendering (for indoor scenes)) and requests from each object a pointer to its geometry descriptors. (A geom desc is a collection of a mesh, texture set, and material; of course, objects can return that they don't have any geometry information.) It then stores the info in geom pots by shader, and at the end all the triangles are dropped to the shaders and everything is rendered.

Now the only other thing, and the solution to your problem, is this: the material defines a material type and a few parameters, which are passed to the render device on initialization. The render device returns an ID of the shader that will fulfill the needs of the material, and this is stored in the material for later use by the render device. Thus all calls to libraries are now in a render system using pluggable shaders. This implementation is extremely flexible as to what is done with the data in the shaders, and cross-platform/API support comes from simply writing a new render class and shader set.
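(If I follow, the data flow could be sketched roughly like this. All names here are mine, not the actual engine's, and the shader lookup is a toy stand-in: the material is resolved to a shader ID once at init, and traversal just drops geometry descriptors into per-shader pots.)

```cpp
#include <cassert>
#include <string>
#include <vector>

// Material: describes how a mesh wants to look; the shader ID is filled in
// once by the render device and reused every frame afterwards.
struct Material
{
    std::string type;   // e.g. "phong"
    int shaderId = -1;
};

// Geometry descriptor: mesh + texture set + material (handles are stand-ins).
struct GeomDesc
{
    int meshId;
    int textureId;
    Material* material;
};

class RenderDevice
{
public:
    // Map a material description to whichever shader fulfils it (toy lookup).
    int ResolveShader(const Material& m)
    {
        return m.type == "phong" ? 0 : 1;
    }

    // Collect descriptors into per-shader pots; drawing would then walk pots.
    void Submit(const GeomDesc& g) { pots[g.material->shaderId].push_back(g); }

    std::vector<GeomDesc> pots[2];
};
```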

I hope this helps. I'm not sure if it's the best method to implement this functionality, but it seems to do an effective job for me.

- Matlock

Thanks Nihilistic!

That actually sounds a lot like what I was thinking of doing, with the exception that I was thinking about having the renderer (or at least a resource manager within it) store the actual data rather than the nodes themselves, which would instead store a handle to the appropriate dataset. Still haven't actually tried IMPLEMENTING it, but it sounds nice in theory! ^_^ (Doesn't it always?)
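(The handle idea would look something like this; purely a sketch of what I just described, nothing implemented yet, and all names are made up. The renderer's resource manager owns the data; a node keeps only an opaque handle into it.)

```cpp
#include <cassert>
#include <vector>

// Opaque handle a scene node would store instead of the vertex data itself.
typedef int MeshHandle;

class ResourceManager
{
public:
    // Take ownership of the mesh data and hand back a handle to it.
    MeshHandle AddMesh(const std::vector<float>& vertices)
    {
        meshes.push_back(vertices);
        return static_cast<MeshHandle>(meshes.size() - 1);
    }

    // The renderer resolves the handle back to data at draw time.
    const std::vector<float>& GetMesh(MeshHandle h) const { return meshes[h]; }

private:
    std::vector< std::vector<float> > meshes;
};
```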

Still wide open for suggestions, though!

[Edited by - Toji on July 19, 2005 11:21:14 PM]

My approach is slightly different. I use the scenegraph purely to describe the scene data. The nodes do not even contain a Render function. Once the user has finished updating the graph and requests it to be rendered, the traversal function will sort the graph into a list of renderables (i.e. typically the geometries and lights in the scene). This sorting can be important as it allows you to group objects by texture or shader.

In the next step, this sorted list of objects is passed on to a pipeline of processor modules. (Processor is my own terminology; accepting suggestions for a better name). Each processor performs operations on the list of objects. The first processor is typically a culler that marks all objects out of the frustum as culled. The next processor could be an OpenGL (or Direct3D or software) renderer that uses a specific API to render the contents of the list.

In my library there are many more processors. For example: a processor for retrieving some statistics on the list (such as the number of triangles in it) that is useful for debugging, and a range of processors used for creating a distributed scenegraph over a set of PCs (I do parallel rendering).

This way you'll have a really flexible system that allows for very easy configuration (just choose the processors in the pipeline) and can be changed while running.
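(A minimal sketch of the pipeline idea; this is my own interpretation of the description, with invented names and a toy 1-D "frustum", not Tom's actual library.)

```cpp
#include <cassert>
#include <vector>

// A renderable carries a culled flag; processors operate on the whole list.
struct Renderable { float x; bool culled; };

class Processor
{
public:
    virtual ~Processor() {}
    virtual void Process(std::vector<Renderable>& list) = 0;
};

// First stage: mark everything outside the (toy) frustum as culled.
class CullProcessor : public Processor
{
public:
    void Process(std::vector<Renderable>& list) override
    {
        for (auto& r : list)
            r.culled = (r.x < 0.0f || r.x > 100.0f);
    }
};

// A later stage: "render" whatever survived (here it just counts draws;
// a real one would issue GL/D3D/software calls).
class CountingRenderProcessor : public Processor
{
public:
    int drawn = 0;
    void Process(std::vector<Renderable>& list) override
    {
        for (auto& r : list)
            if (!r.culled) ++drawn;
    }
};

// The pipeline itself is just an ordered list of processors.
void RunPipeline(std::vector<Processor*>& pipe, std::vector<Renderable>& list)
{
    for (auto* p : pipe) p->Process(list);
}
```

Swapping the renderer API, or inserting a statistics stage, is then just a matter of editing the processor list.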

I'm not stating here that my solution is the best and without flaws; it's simply another option...

Tom

EDIT: One example of a drawback of this approach is that your second problem:
Quote:

For every new node that is added, we are now faced with going into the core rendering code and adding new render routines to accommodate the new node type. In my opinion this kills a lot of the usefulness of the scene graph, since a big plus (in my mind) was the ability to add new items in a very simple and modular manner.

is also true for my approach. However, I believe that if your node requires you to add/modify code in the rendering core, you would have had to implement its rendering routines anyway, because it apparently has unique rendering behaviour (if not, it could have been derived from an existing scene node class). Only the place where this rendering code is inserted is more awkward (far away from the object, in the rendering core).

Hm... I like that idea very much! The whole "processor" idea ("Render Stage", maybe?) is intriguing. I'm thinking that if you could make the stages pretty generalized in their interfaces, you could plug them into the render pipeline on the fly, allowing users to define new stages to go along with new geometry types. That probably isn't nearly as easy in code form as I made it sound (it never is), but it's worth looking into.

Oh, and just for kicks: One of the better scenegraph papers I've found thus far is This One. He sticks very close to the practical details, so anyone who's interested may want to take a look!

Quote:
Original post by Toji
Hm... I like that idea very much! The whole "processor" idea ("Render Stage", maybe?) is intriguing. I'm thinking that if you could make the stages pretty generalized in their interfaces, you could plug them into the render pipeline on the fly, allowing users to define new stages to go along with new geometry types. That probably isn't nearly as easy in code form as I made it sound (it never is), but it's worth looking into.

That's how my engine works. I have 'graph operators' which all inherit from a base SG_Operator. These can be thrown together in any way, e.g. UpdateOp->CullOp->RenderOp.

Any given operator conforms to one of four classifications of graph operators. An operator is first classified by its type of dispatch: static or dynamic. Static dispatch is achieved through the visitor pattern and fixed at compile time; dynamic dispatch is done through functors and can be changed at run-time. Next, an operator is classified by what it dispatches on: per-nodeType or static (all node types). Per-nodeType operators perform a different operation for each type of node (geometry, transform, etc.); static operators perform the same operation for every node (such as culling).

Creating new scene-operators is easy, so new operations can be added without changing the node base class or adding new virtual functions. Adding new nodes is possible, but for static-dispatch scene-operators new functions must be added (the major con of the visitor pattern); the dynamic-dispatch operators have no such problem, provided new nodes register their own functors for each operation (unless they're anti-social :)

Scene-operators can be removed from the pipeline and even replaced at run-time. The engine maintains one single pipeline of scene-operators, but there's no reason why the user can't maintain a pipeline themselves (for some neat tricks).

The RenderOp doesn't render geometry directly; it batches it up, sorts it by material, etc. (all of which can be customized through functors and virtual functions), and then renders it. Static geometry is pre-processed, so after some culling it's all ready to go.
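(The dynamic-dispatch variant might be sketched like this; hypothetical names, and the real engine surely differs. Node types register a functor per operation at run-time, so new node types need no changes to the operator itself; unregistered "anti-social" types are simply skipped.)

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

// Minimal node: in a real graph this would be the scene node hierarchy.
struct SGNode { std::string typeName; };

// A dynamic-dispatch graph operator: a functor table keyed on node type.
class DynamicOperator
{
public:
    using Functor = std::function<void(SGNode&)>;

    // Node types register their handler at run-time -- no new virtuals needed.
    void RegisterHandler(const std::string& type, Functor f)
    {
        handlers[type] = std::move(f);
    }

    // Apply the operation to one node; unknown types are skipped.
    void Apply(SGNode& n)
    {
        auto it = handlers.find(n.typeName);
        if (it != handlers.end()) it->second(n);
    }

private:
    std::map<std::string, Functor> handlers;
};
```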

Hope that helps :)

[Edited by - dmatter on October 12, 2005 11:42:46 AM]

