Scenegraph and Nodes Data

Started by littlekid
6 comments, last by jffortin 17 years, 4 months ago
I have read the various other posts on scene graphs by Yann. I understood the general idea of his explanation: everything is abstracted to a common base class to allow extension. For example, CMesh, CEffect and CLight all derive from the CObject base class. However, when building the scene graph and traversing it for the CEffect, how is it possible to know how to properly render using the effect? Different effects take different parameters and are rendered differently. For example, a ShadowEffect would draw all the geometry without any texturing, unlike other effects which require different handling, and some effects require render-to-texture while others don't. I can't just call CEffect->Process(), since an effect is useless without the other nodes/objects like meshes. Would you please explain in greater detail how you managed your scene, Yann? Thanks, I hope my post is clear.
Quote:Original post by littlekid
For example, a ShadowEffect would draw all the geometry without any texturing, unlike other effects which require different handling, and some effects require render-to-texture while others don't. I can't just call CEffect->Process(), since an effect is useless without the other nodes/objects like meshes.


The first thing I tried was to put those nodes in a scene graph. To render a scene I created a render list from all the objects in the graph. When a Technique (that's what I called those multi-pass effect objects) needed something to be rendered before being executed, it would insert that work earlier in the render list.

The render list entries were render commands that were executed on the device, so I could change the render targets and other states (everything was state sorted by render target, then shaders, etc.).
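Roughly, a render command list like that could look as follows. This is just a minimal sketch; the names (RenderCommand, RenderQueue, the pass field) are made up for the example and not my actual code.

#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical render command: just enough state to sort on and execute later.
struct RenderCommand
{
    uint32_t pass;    // 0 = prerequisite render-to-texture passes, 1 = main scene, ...
    uint32_t target;  // render target to bind
    uint32_t shader;  // shader/effect to bind
    uint32_t mesh;    // geometry to draw
};

struct RenderQueue
{
    std::vector<RenderCommand> commands;

    // A Technique that needs a texture rendered first pushes its commands
    // with a lower pass number, so they end up before the scene commands.
    void Add(const RenderCommand& c) { commands.push_back(c); }

    // State sort: pass first (keeps the dependencies in front), then render
    // target, then shader, to minimize state changes on the device.
    static bool ByState(const RenderCommand& a, const RenderCommand& b)
    {
        if (a.pass   != b.pass)   return a.pass   < b.pass;
        if (a.target != b.target) return a.target < b.target;
        return a.shader < b.shader;
    }

    void Sort() { std::sort(commands.begin(), commands.end(), ByState); }
};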

Quote:Original post by littlekid
Would you please explain in greater detail in how you managed your scene?


I'm usually refactoring my code when something feels wrong or weird. At that time I saw a post about the same issues and decided to change things in my code. I can't remember who mentioned it, but he was talking about render views. Each view would refer to the texture that needs to be rendered before the scene.

So the scene graph became a scene database. Every effect file that needs a rendered texture as input refers to a render view. Basically, the renderer maintains a tree representing the different views and their dependencies (what needs to be rendered before what). Once rendered, a view can be used as a texture by its parent view. And I have some specialized views (and the user can add their own) for the different uses they can have (post-processing, rendering, shadows, etc.).
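A rough sketch of what I mean by that tree of views (RenderView, RenderTarget and so on are just placeholder names here, my real interface has more to it):

#include <memory>
#include <vector>

class Scene;        // the scene database the views pull their objects from
class RenderTarget; // the texture a view renders into

// Hypothetical render view: it renders its children first, then itself,
// so every texture a view depends on exists before the view needs it.
class RenderView
{
public:
    virtual ~RenderView() = default;

    void AddChild(std::unique_ptr<RenderView> child)
    {
        children.push_back(std::move(child));
    }

    RenderTarget* Render(Scene& scene)
    {
        inputs.clear();
        // Depth-first: dependencies are rendered before their parent,
        // and their targets become the input textures of this view.
        for (auto& child : children)
            inputs.push_back(child->Render(scene));
        return RenderSelf(scene, inputs);
    }

protected:
    // Specialized views (rendering, shadows, post-processing, ...) override this.
    virtual RenderTarget* RenderSelf(Scene& scene,
                                     const std::vector<RenderTarget*>& inputs) = 0;

private:
    std::vector<std::unique_ptr<RenderView>> children;
    std::vector<RenderTarget*> inputs;
};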

I prefer this approach because the scene graph is cleaner and I could remove a lot of weird code needed to maintain that multi-pass mess, since it is now managed by the render views themselves.

The problem I have right now is that I'm not sure how this will evolve during rendering (adding/removing views at runtime), because I want it to reflect what is in the scene. The views need to be updated at runtime when the scene changes, but I'm not sure how to do this efficiently (it will need more work and profiling [wink]).

Anyway, this is just a really quick explanation of how I'm doing things right now. If anything is still unclear, ask questions and I'll do my best to clarify things.


JFF
It sounds to me (correct me if I've misunderstood) as if Effects apply to the whole scene. So it doesn't make sense to put them in the scene graph; instead you would apply effects to the rendering queue before passing things to the graphics card.
Quote:Original post by Bob Janova
It sounds to me (correct me if I've misunderstood) as if Effects apply to the whole scene. So it doesn't make sense to put them in the scene graph; instead you would apply effects to the rendering queue before passing things to the graphics card.


Not all effects apply to the whole scene. But for the effects that do, it might be difficult to choose where to put them in the scene graph. That's why I chose to make a separate structure to manage them.

The render views, as I see them, are like render targets on which something is rendered (the renderer itself manages the real "hardware" render targets). Each view renders what it was designed for.

For example, I have render views to manage the post-processing effects (I think it's the simplest thing I could come up with in the system). A post-processing view takes as input a child view, which is the scene on which to add the post-processing effect. What it does is render a quad with the input texture on it and apply the desired effects, so the output is the scene with the post-processing applied.
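Continuing the sketch from my previous post, a post-processing view could look roughly like this (again, placeholder names, and the real device calls are only hinted at in the comments):

class Shader; // whatever wraps the post-processing pixel shader

// Hypothetical post-processing view: its single child is the scene view,
// and its output is the scene texture with the effect applied.
class PostProcessView : public RenderView
{
public:
    explicit PostProcessView(Shader* effect) : effect(effect) {}

protected:
    RenderTarget* RenderSelf(Scene& /*scene*/,
                             const std::vector<RenderTarget*>& inputs) override
    {
        RenderTarget* sceneTexture = inputs.front(); // the child view's output
        // Pseudo-calls standing in for the real device code:
        //   BindTarget(output);
        //   effect->SetTexture("scene", sceneTexture);
        //   DrawFullscreenQuad();
        return output;
    }

private:
    Shader*       effect = nullptr;
    RenderTarget* output = nullptr; // the target this view owns and renders into
};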

If my post-processing effects were done in the scene graph, I would have needed to put them at the top of the scene graph so they would apply to everything under them. And if I created my graph as a tree, I would have multiple copies of the scene (one for each post-processing effect) that I would have to maintain to achieve the same result. Then there is also the overhead during initialization and creation of the render queue, because managing the different effects that way was more time consuming. This is the dumb algorithm [wink]; I could think of many ways to optimize the process, but the point here is that it's more complicated to build and manage.

Now with the render views, the scene graph only holds information about the different objects and their attributes (shaders, shader constants, textures, transforms, etc.). When a render view needs to access the scene, it can do so via the scene graph (or database), and it can also use the render targets from its children as input. That way there's no "patching" of the render queue every time something needs to be rendered before something else.


JFF
how about?

        CEffect
           |
  =====================
           |
       CMeshNode
           |
       CMeshes

// Abstract:
CGraph::Render()
    for each child node
        CGraph::Render( ParentGraph(this), ... )

// Concrete:
CEffect::Render()
    for each child node
        CMesh::Render( CEffect, ... ) // applies effect on mesh object
Quote:Original post by vetroXL
how about?
[...]


Directly applying effects like that (if I understood correctly what you meant) is possible, but not for all effects. The problem is that some techniques (or effects, if you prefer) need input from a texture. Those techniques need that texture rendered before they are executed, and what I'm suggesting here is a way to do that. You still render the effect like in your pseudocode, but you have something that manages the different textures the effects will need, and also manages their priority so you don't run into problems.

The problem we were discussing is exactly that, and I talked about my solutions. I gave post-processing as an example because it doesn't need access to anything other than the rendered scene (so it was easier).

When a technique needs input from a texture, you can't just stop rendering your scene, render the texture it needs, and then resume the rendering. Well, you probably could, but it would likely be harder to implement and maintain. Like I said, the problem is that the texture needs to be rendered first, so I just suggested two ways to manage that priority.

JFF
Hi,

That can be solved with abstraction ...
// one overload
CMesh::Render( device, ... )

// another overload
CMesh::Render( CEffect*, device, ... )
{
    for each texture in mesh
        CEffect::SetTexture
    ...
}

// some other object
CBillBoard::Render( CEffect*, device, ... )
{
    CEffect::SetTexture
    ...
}


of course if the effect doesn't care about textures then the CEffect::SetTexture impl can be empty
Quote:Original post by vetroXL
of course if the effect doesn't care about textures then the CEffect::SetTexture impl can be empty


You seem to be using a really straightforward way to render your scene. But I don't get how you would render the textures that some effects need.

For example, some shadow algorithms require that a depth buffer be rendered from the point of view of the light in order to compute the shadowing. That buffer, stored in a texture for instance, needs to be rendered before the shadows are rendered in the scene.

What I see in your pseudo-code is a way the nodes in the scene graph can be rendered; in other words, you're showing a way to go through the graph and render the scene. But how would you manage it when one of the effects needs a depth buffer (or anything else) to be rendered dynamically before rendering it (think shadows, post-processing, etc.)?

These effects still need to be rendered the way you mentioned, but they also need input from other dynamically rendered textures.
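To show what I mean with the render views from my earlier posts, a depth-from-the-light view would look roughly like this (hypothetical names again, and it assumes the RenderView base class from my sketch above):

class Camera; // point of view to render from (here, the light's)

// Hypothetical shadow view: renders depth from the light's point of view
// into its own target, which the parent view then samples as a texture.
class ShadowDepthView : public RenderView
{
public:
    explicit ShadowDepthView(Camera* lightCamera) : lightCamera(lightCamera) {}

protected:
    RenderTarget* RenderSelf(Scene& scene,
                             const std::vector<RenderTarget*>& /*inputs*/) override
    {
        // Pseudo-calls: bind the depth target and draw the scene with a
        // depth-only shader from the light's point of view.
        //   BindTarget(depthMap);
        //   DrawSceneDepthOnly(scene, *lightCamera);
        return depthMap;
    }

private:
    Camera*       lightCamera = nullptr;
    RenderTarget* depthMap    = nullptr;
};

The view that shades the scene just lists this shadow view as a child, so the depth map always exists before the scene pass needs it as input.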


JFF

This topic is closed to new replies.
