

#1 Boreal

Posted 21 March 2013 - 09:17 PM

I've been thinking about ways to fold features that traditionally require a flat list into a hierarchical scene graph, and to improve on the plain tree structure a scene graph normally uses. My solution is to replace the tree with a directed acyclic graph, so a node can have more than one entry point. Generalizing the scene graph this way would allow constructs for spatial partitioning, integration with a physics engine, and more.
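
To make the structure concrete, here's a minimal C++ sketch of what such a DAG node might look like. Everything here is illustrative; `Node` and `addChild` are hypothetical names, not from any existing engine:

```cpp
#include <memory>
#include <string>
#include <vector>

// Sketch of a DAG scene node: unlike a tree node, it keeps a list of
// parents, so the same node can be reached through several entry points.
struct Node {
    std::string name;
    std::vector<Node*> parents;                   // non-owning back links
    std::vector<std::shared_ptr<Node>> children;  // shared: a child may have many parents

    void addChild(const std::shared_ptr<Node>& child) {
        children.push_back(child);
        child->parents.push_back(this);           // one more entry point into the child
    }
};
```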

 

The main inspiration came when I was trying to let an object belong to more than one partition, for example a character standing in the doorway of a building. With multiple entry points this becomes simple: both the "outside" and "building" partitions point at the character, and it gets rendered if either partition is visible.
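
Continuing the `Node` sketch above, the only wrinkle the DAG adds to traversal is deduplication: a visited set ensures the character is drawn once even when both partitions are visible. A sketch, not a finished culling pass:

```cpp
#include <unordered_set>

// Collect everything reachable from a visible partition root, exactly once.
// A node hanging under both "outside" and "building" is inserted into the
// visited set on the first encounter and skipped on the second.
void collectVisible(Node* node, std::unordered_set<Node*>& visited,
                    std::vector<Node*>& drawList) {
    if (!visited.insert(node).second)
        return;  // already reached through another partition
    drawList.push_back(node);
    for (const auto& child : node->children)
        collectVisible(child.get(), visited, drawList);
}
```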

 

Generalizing the graph also buys flexibility. Typically, every node in a scene graph carries a transformation, but that gets clunky once physics enters the mix. Say a character is holding a ball: the "ball" node is a child of the "hand bone" node. When the character throws the ball, the "ball" node has to be re-parented and its local transform recomputed to preserve its world position, and it gets even worse when the character picks the ball up again. My solution is to implement transformation nodes as physics constraints. A rigid attachment would be a weld joint; bones in a skeleton could be ball-and-socket joints. This fits my engine well, since animations are all procedural and driven by the physics engine.
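
A minimal sketch of what such a constraint edge could look like, again with hypothetical names and reusing `Node` from the first sketch. The point is that throwing the ball only changes the joint type; the rigid body keeps its world pose, so nothing has to be re-parented or recomputed:

```cpp
// The edge between parent and child is a constraint, not a fixed matrix.
enum class JointType {
    Weld,           // rigid attachment: the usual "static transform" case
    BallAndSocket,  // a skeleton bone
    Free            // detached: the child's rigid body owns its world pose
};

struct ConstraintEdge {
    Node*     parent;
    Node*     child;
    JointType joint;
};

// Hypothetical usage:
//   edge.joint = JointType::Weld;  // character grabs the ball
//   edge.joint = JointType::Free;  // character throws it; world pose is kept
```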

 

Why stop there? The whole high-level rendering process could live in the graph: a shader node, then a texture node, then a mesh node. Because each node can have more than one entry point, you could build multiple passes by forking into two material groups and joining again at a mesh. Instancing could happen automatically for render paths that are identical except for their transformations. Nodes could even be general-purpose programming constructs, like a conditional (take this path if the condition holds, otherwise the other), a random selector, and so on, much like the nodes in a behaviour tree.
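
As a sketch of the instancing idea: during the walk, each shader-texture-mesh path can be reduced to a key, and the transforms collected under one key become a single instanced draw. The batching key and the `drawInstanced` call are assumptions, not a real API:

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Transform { float m[16]; };  // placeholder 4x4 matrix

// e.g. a hash of (shader id, texture id, mesh id) along the path
using PathKey = std::uint64_t;

void submitBatches(const std::unordered_map<PathKey,
                                            std::vector<Transform>>& batches) {
    for (const auto& [key, transforms] : batches) {
        // One instanced draw per unique render path:
        // drawInstanced(key, transforms.data(), transforms.size());
        (void)key; (void)transforms;  // placeholders; no real GPU call here
    }
}
```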

 

What are your thoughts on this?  I want to get some feedback before I hunker down and code this.

