Omnipresent DAG scene graph - thoughts


I've been thinking about ways to integrate features that traditionally require a flat list into a hierarchical scene graph, and about improving on the tree structure a scene graph usually uses.  My solution is to replace the simple tree with a directed acyclic graph, which allows a node to have more than one entry point (i.e. more than one parent).  I also realized that generalizing the scene graph this way would allow constructs for spatial partitioning and integration with a physics engine, among others.


The main inspiration for this came when I was trying to find a way for an object to be part of more than one partition, for example, when a character is standing in the doorway of a building.  With multiple entry points this becomes simple: both the "outside" and "building" partitions point to the character, and it can be rendered if either partition is visible.
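The doorway idea can be sketched in a few lines. This is a hypothetical illustration, not code from any engine; `SceneNode`, `add_child`, and `visible_objects` are made-up names, and a node simply keeps a list of parents so two partitions can both reference it.

```python
# Minimal DAG scene-graph sketch: a node may have several parents, so one
# object can live in more than one spatial partition. All names are
# illustrative.

class SceneNode:
    def __init__(self, name):
        self.name = name
        self.parents = []   # more than one entry point is allowed
        self.children = []

    def add_child(self, child):
        self.children.append(child)
        child.parents.append(self)  # DAG: the child keeps every parent

def visible_objects(partition):
    """Collect the leaf objects reachable from one partition node."""
    seen, stack, out = set(), [partition], []
    while stack:
        node = stack.pop()
        if id(node) in seen:
            continue
        seen.add(id(node))
        if not node.children:
            out.append(node.name)
        stack.extend(node.children)
    return out

outside = SceneNode("outside")
building = SceneNode("building")
character = SceneNode("character")   # standing in the doorway
outside.add_child(character)
building.add_child(character)        # same node, second entry point

assert visible_objects(outside) == ["character"]
assert visible_objects(building) == ["character"]
```

The `seen` set matters: because the graph is a DAG rather than a tree, a shared node could otherwise be visited (and drawn) once per parent.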


Generalizing the graph would allow for more flexibility.  Typically, every node in a scene graph is associated with a transformation, but this is clunky when physics is put into the mix.  Say a character is holding a ball.  The "ball" node would be a child of the "hand bone" node.  When the character throws the ball, the "ball" node must be re-parented and its transform changed to keep the same world position.  It gets even worse when the character has to pick the ball up again.  My solution is to implement transformation nodes as physics constraints.  A typical transformation would be a weld joint.  Bones in skeletons could be implemented as ball-and-socket joints.  This works well with my engine, where animations are all procedural and depend on the physics engine.
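The "transforms as constraints" idea can be sketched with 1-D positions for brevity. This is a hypothetical sketch (the `Body` and `WeldJoint` names are made up): a weld joint plays the role of a rigid transform node, and "throwing" the ball is just deleting the constraint, with no re-parenting math needed.

```python
# Sketch of transformation nodes as physics constraints, in one dimension.
# A WeldJoint pins the ball to the hand; releasing it deletes the
# constraint but leaves the ball's world position intact.

class Body:
    def __init__(self, world_pos):
        self.world_pos = world_pos

class WeldJoint:
    """A constraint playing the role of a rigid transform node."""
    def __init__(self, parent, child, offset):
        self.parent, self.child, self.offset = parent, child, offset

    def enforce(self):
        self.child.world_pos = self.parent.world_pos + self.offset

hand = Body(world_pos=5.0)
ball = Body(world_pos=0.0)
weld = WeldJoint(hand, ball, offset=0.5)

weld.enforce()
assert ball.world_pos == 5.5       # ball follows the hand

weld = None                        # "throw": drop the constraint...
hand.world_pos = 9.0               # ...the hand keeps moving
assert ball.world_pos == 5.5       # ball keeps its last world position
```

Picking the ball up again is just creating a new weld joint at the current relative offset, which is exactly what a physics engine's constraint API already does.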


Why stop there?  The whole high-level rendering process could be implemented in the graph.  You could have a shader node, a texture node, and then a mesh node.  Because each node can have more than one entry point, you could create multiple passes by forking into two material groups and then joining at a shared mesh node.  Instancing could be done automatically for rendering paths that are identical except for their transformations.  Nodes could also be general-purpose programming constructs, like a condition (if condition, choose this path, else, choose another path), a random selector, and so on, like nodes in a behaviour tree.
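The fork-and-join multi-pass idea can be illustrated with a tiny traversal. Everything here is hypothetical (nodes are just `(kind, value, children)` tuples); the point is that each root-to-mesh path accumulates its own state and becomes one draw, so two material paths joining at one mesh yield two passes.

```python
# Sketch of a render graph where state nodes (shader, texture) fork and
# re-join at a shared mesh node, yielding one draw per path, i.e. a
# two-pass render of the same mesh. All names are made up.

def collect_draws(node, state=None):
    """Depth-first walk; each path from root to a mesh becomes one draw."""
    state = dict(state or {})
    kind, value, children = node
    if kind == "mesh":
        return [{**state, "mesh": value}]
    state[kind] = value
    draws = []
    for child in children:
        draws += collect_draws(child, state)
    return draws

mesh = ("mesh", "character.obj", [])                       # shared join point
pass_a = ("shader", "diffuse", [("texture", "skin.png", [mesh])])
pass_b = ("shader", "outline", [mesh])                     # second entry point

draws = collect_draws(pass_a) + collect_draws(pass_b)
assert len(draws) == 2
assert draws[0] == {"shader": "diffuse", "texture": "skin.png",
                    "mesh": "character.obj"}
assert draws[1] == {"shader": "outline", "mesh": "character.obj"}
```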


What are your thoughts on this?  I want to get some feedback before I hunker down and code this.


Why do you want one graph to rule them all?


Spatial relationships aren't hierarchical or directed. The relationships between the rooms of an office naturally form a cyclic graph.

Transformation hierarchies naturally form a tree structure, which doesn't require any knowledge or connection to a spatial structure.

Optimal rendering order cannot be determined by traversing a scene graph -- such structures usually have to be linearized and sorted to determine rendering order.


Often different middleware components will contain their own internal representation of your scene. E.g. a physics engine like Bullet or PhysX will contain a "scene graph" and transformation hierarchy internally, which you cannot access. This isn't a problem; there's no need for your visual representation of the scene to be tightly coupled with the physical representation. All you need is a way for the updated physics state (the transforms) to be reflected in the visual structure.
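The loose coupling described above can be sketched as a one-way sync step run after each physics update. This is an illustrative toy (the class names are made up, and the "solver" is a single line); real engines like Bullet or PhysX expose updated transforms through their own APIs, e.g. motion-state callbacks or per-actor queries.

```python
# Sketch of loose coupling: the physics engine owns its own internal
# state, and after each step the updated transforms are copied into the
# separate visual representation.

class PhysicsBody:
    def __init__(self, transform):
        self.transform = transform

    def step(self, velocity, dt):
        self.transform += velocity * dt   # stand-in for a real solver

class RenderNode:
    def __init__(self):
        self.transform = 0.0

# Pair each render node with the physics body that drives it.
body, node = PhysicsBody(transform=1.0), RenderNode()
bindings = [(body, node)]

body.step(velocity=2.0, dt=0.5)           # physics owns the simulation
for b, n in bindings:                     # one-way sync after the step
    n.transform = b.transform

assert node.transform == 2.0
```

The binding list is the only point of contact between the two representations, so each side remains free to use whatever internal structure suits its own tasks.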

The physical representation does not need to perform tasks like view-frustum culling, potentially-visible-set determination, or material sorting, so its structure will not be optimized for these tasks -- the visual representation of the scene will be organized in a way that's conducive to these tasks, though.


The optimal data structure depends entirely on the tasks that must be performed, and it's likely that each task will work optimally with a different data structure. So, forcing tight coupling of all data into an uber-structure is very counterproductive.

Edited by Hodgman

I used to use scene graphs, but I've had to abandon the whole concept as a bad idea.

The gain in speed when you instead design data structures around the operations that need to happen is immense. It also leads to fewer dependencies and simpler code.

If the physics engine is handling relative transformations, why have transform nodes at all? You're just duplicating information and making it twice as complex as it needs to be.

You can represent rendering as a tree, but why? The process of rendering is the equivalent of flattening the tree. Why not flatten it in advance and save all that work?
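"Flattening in advance" usually means building a flat array of draw commands once and sorting it by a packed key so state changes are minimized per frame. A minimal sketch, assuming a made-up key layout (shader in the high bits, texture below it):

```python
# Sketch of a pre-flattened, sorted draw list. The field names and key
# packing are illustrative, not from any particular engine.

draws = [
    {"shader": 2, "texture": 1, "mesh": "rock"},
    {"shader": 1, "texture": 3, "mesh": "tree"},
    {"shader": 1, "texture": 3, "mesh": "bush"},
    {"shader": 2, "texture": 0, "mesh": "wall"},
]

def sort_key(d):
    # Pack the costliest state change into the most significant bits, so
    # sorting groups draws by shader first, then by texture.
    return (d["shader"] << 8) | d["texture"]

draws.sort(key=sort_key)
order = [d["mesh"] for d in draws]
assert order == ["tree", "bush", "wall", "rock"]
```

Submitting the sorted list in order binds each shader and texture once per group, which is the work a per-frame tree traversal would otherwise redo every time.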

Scene graphs have their place in tools, but they shouldn't make it to the run-time.
