HLSL ... how to integrate an HLSL shader with a rendering engine

Started by Jack Bartlett. 8 comments, last by Jack Bartlett 18 years, 3 months ago
Hello. I'm making an FPS-style demo and I'm not sure how to integrate HLSL with my current rendering engine. I've been using a single technique like the following to render different types of objects:


technique MainTechnique
{
  pass P0
  {
    // default
    // - for types of geometry that use only the fixed-function pipeline,
    //   e.g. fonts, billboards, icons
    VertexShader = NULL;
    PixelShader  = NULL;
  }

  pass P1
  {
    // static geometry
    // - world properties are calculated beforehand
    // - uses lightmap textures for static and dynamic lighting
    VertexShader = compile vs_2_0 VS_StaticGeometry();
    PixelShader  = compile ps_2_0 PS_StaticGeometry();
  }

  pass P3
  {
    // skybox
    // - renders the geometry with the texture color only
    VertexShader = compile vs_2_0 VS_SkyBox();
    PixelShader  = compile ps_2_0 PS_SkyBox();
  }

  pass P4
  {
    // for mesh models (entities)
    //  - use per-vertex lighting in the vertex shader
    //  - lights must be set before rendering a model
    VertexShader = compile vs_2_0 VS_Main();
    PixelShader  = compile ps_2_0 PS_Main();
  }
}

and call BeginPass(i) (i = 0 ... 4) according to the type of geometry. I knew this was not a good way of using HLSL, but it hadn't caused problems. However, when I recently changed some parts of the shader, the index i (0 to 4) handed to BeginPass(i) no longer matched the pass order in the technique (P0 to P4), so the proper pass is no longer set for each type of geometry.

Now I strongly feel that I should change my shader, but I don't know a good design, so I'd like some advice about HLSL and its use in a relatively large rendering engine. Perhaps I should write as many techniques as there are types of geometry. I actually ran some tests, and it seemed like calls to SetTechnique() affect performance more than BeginPass() does. Fearing a performance drop, I eventually adopted the design above, but now I'm not sure that assumption was really correct... Thank you.
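In engine code, the pattern I'm using looks roughly like this (a minimal sketch; g_pEffect is a hypothetical ID3DXEffect* created from the file above, and DrawStaticGeometry() stands in for my actual draw calls):

#include <d3dx9.h>                 // ID3DXEffect (D3DX9 effect framework)

extern ID3DXEffect* g_pEffect;     // hypothetical: compiled from the .fx above
void DrawStaticGeometry();         // hypothetical engine draw call

void RenderStaticGeometry()
{
    UINT numPasses = 0;
    g_pEffect->SetTechnique("MainTechnique");
    g_pEffect->Begin(&numPasses, 0);

    g_pEffect->BeginPass(1);       // hard-coded: 1 must line up with the static-geometry pass
    DrawStaticGeometry();
    g_pEffect->EndPass();

    g_pEffect->End();
}
// Adding or removing a pass in the .fx file silently shifts every hard-coded index here.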
Something I've read about is to create a scene graph for the objects in your game. What you could do (I haven't tried this yet, so it's just theory) is create a RenderSceneNode that sets up the proper shaders for a specific type of object, and then attach objects as child nodes of that render node. If the objects are far away from each other, a regular scene graph (a strict tree) won't work, and you'd need to look into acyclic scene graphs (DAGs), which let a node have more than one parent, so a single RenderSceneNode can be a parent of all the objects that use the shader(s) attached to it. Make sense?

Oh! When you traverse your scene graph for objects to render, you'll likely need to check all of the child nodes of a render node to see whether at least one of them is within view. If so, you have the render node do its thing and then render the child node(s) that are visible. That way, if none of the children need to be rendered, you never have to activate the render node at all, which saves you some overhead.

Again, I haven't tried this yet, so this is just theory.
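As a very rough sketch of the idea (again untested, and every name here is hypothetical):

#include <d3dx9.h>
#include <string>
#include <vector>

class SceneNode
{
public:
    virtual ~SceneNode() {}
    virtual void Render(ID3DXEffect* fx) = 0;
    virtual bool IsVisible() const = 0;        // hypothetical frustum test
    std::vector<SceneNode*> children;
};

// Binds a technique, then renders only the children that are in view.
class RenderSceneNode : public SceneNode
{
public:
    std::string technique;                     // e.g. "StaticGeometry"

    bool IsVisible() const
    {
        for (size_t i = 0; i < children.size(); ++i)
            if (children[i]->IsVisible())
                return true;                   // at least one child is in view
        return false;
    }

    void Render(ID3DXEffect* fx)
    {
        if (!IsVisible())
            return;                            // no visible children: skip the state change
        fx->SetTechnique(technique.c_str());
        for (size_t i = 0; i < children.size(); ++i)
            if (children[i]->IsVisible())
                children[i]->Render(fx);
    }
};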
Typical usage of an FX file would be to have a technique for each different type of geometry or effect: for example, a "BillboardTechnique" and a "BlinnPhongLightingTechnique", rather than one "uber technique" with many passes.

For example, I just roughed out my ShadowMapping.fx file to have the following techniques:
  • RenderToShadowMap - takes the geometry and just outputs the depth
  • RenderShadowsFromDirectionalLight - projects the results onto the scene, to get a shadow/light multiplier from the viewing point
  • RenderShadowsFromPointLight - similar to the above, but does a cubemap lookup instead of projecting the texture
  • HorizontalBlur - post-processing to get softer edges
  • VerticalBlur - second part of the above

Essentially, it's one technique for each major step in the rendering process.
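Engine-side, driving those steps is then just a matter of selecting techniques by name. A minimal sketch (pShadowFx would hold the file above; the Render* helpers are hypothetical stand-ins, each assumed to wrap its draw calls in Begin()/BeginPass()/EndPass()/End()):

#include <d3dx9.h>

extern ID3DXEffect* pShadowFx;       // hypothetical: holds ShadowMapping.fx
void RenderCasters();                // hypothetical: draws the shadow casters
void RenderReceivers();              // hypothetical: draws the shadow receivers
void RenderFullScreenQuad();         // hypothetical: post-processing quad

void RenderShadowFrame()
{
    pShadowFx->SetTechnique("RenderToShadowMap");
    RenderCasters();                 // output depth into the shadow map

    pShadowFx->SetTechnique("RenderShadowsFromDirectionalLight");
    RenderReceivers();               // project the map onto the scene

    pShadowFx->SetTechnique("HorizontalBlur");
    RenderFullScreenQuad();          // soften the shadow edges...
    pShadowFx->SetTechnique("VerticalBlur");
    RenderFullScreenQuad();          // ...in two separable passes
}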

Was the performance drop you noticed really that substantial? Ideally you should program your application correctly first and then go about removing the bottlenecks. In my experience (and from what I've read of others), the FX framework is pretty good on the performance front.

hth
Jack
Jack Hoxley [ Forum FAQ | Revised FAQ | MVP Profile | Developer Journal ]

Typically, you'd use a technique per kind of rendering, rather than a pass. You can split your passes out into separate techniques.

There's nothing wrong with using an effect file like that. If it works for you, then great! You should probably try to batch your drawing per technique, so you draw everything with technique A at the same time (sorted near-to-far if it's opaque), then everything with technique B, etc.
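A minimal sketch of that per-technique batching (all names here are hypothetical, assuming the D3DX9 effect framework):

#include <d3dx9.h>
#include <map>
#include <string>
#include <vector>

struct Renderable;                          // your engine's drawable type
void DrawRenderable(Renderable* r);         // hypothetical engine draw call

typedef std::vector<Renderable*> Bucket;

// One bucket of renderables per technique name, filled during scene traversal.
void DrawBatches(ID3DXEffect* fx, std::map<std::string, Bucket>& buckets)
{
    for (std::map<std::string, Bucket>::iterator it = buckets.begin();
         it != buckets.end(); ++it)
    {
        fx->SetTechnique(it->first.c_str());    // one technique switch per bucket
        UINT numPasses = 0;
        fx->Begin(&numPasses, 0);
        for (UINT p = 0; p < numPasses; ++p)
        {
            fx->BeginPass(p);
            for (size_t i = 0; i < it->second.size(); ++i)
                DrawRenderable(it->second[i]);  // sort near-to-far beforehand if opaque
            fx->EndPass();
        }
        fx->End();
    }
}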
enum Bool { True, False, FileNotFound };
As far as putting multiple techniques in the same effect file goes: if you are working on a pretty serious engine, I recommend that you keep only one kind of technique per effect file, and just provide multiple fallbacks of that technique (for older hardware).

For example, using Jack's BlinnPhongLightingTechnique example, you might have 2-4 different BlinnPhongLightingTechnique implementations: an SM 3.0 one, an SM 2.0 one, and an SM 1.x one. The Effect framework will automatically validate and pick the appropriate one for the hardware you are currently running on (make sure they are ordered correctly, though, so the highest one possible is used).

Of course, if you are just doing a demo, then grouping multiple types of techniques into the same effect file should be okay. Just be wary if you want to add fallbacks to those later - it will get pretty confusing and cumbersome to use.
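For instance, the selection step can be sketched like this (assuming an already-created ID3DXEffect*; FindNextValidTechnique walks the techniques in file order, which is why the highest shader model should come first):

#include <d3dx9.h>

extern ID3DXEffect* g_pEffect;   // hypothetical: created from an .fx file laid out as above

void PickBestTechnique()
{
    // Returns the first technique (searching from the top of the file)
    // that validates on the current hardware.
    D3DXHANDLE hTech = NULL;
    if (SUCCEEDED(g_pEffect->FindNextValidTechnique(NULL, &hTech)) && hTech != NULL)
        g_pEffect->SetTechnique(hTech);
}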
Dustin Franklin ( circlesoft :: KBase :: Mystic GD :: ApolloNL )

Thank you very much for all the suggestions.

Quote: Original post by Dorvo
Something I've read about is to create a scene graph for the objects in your game. What you could do (I haven't tried this yet, so it's just theory) is create a RenderSceneNode that sets up the proper shaders for a specific type of object, and then attach objects as child nodes of that render node. If the objects are far away from each other, a regular scene graph (a strict tree) won't work, and you'd need to look into acyclic scene graphs (DAGs), which let a node have more than one parent, so a single RenderSceneNode can be a parent of all the objects that use the shader(s) attached to it. Make sense?
...


I knew scene graphs only by name, so it looks like I need to check them out this time. I'd appreciate any resources or documents, especially on acyclic scene graphs. Thanks anyway.


Quote: Original post by jollyjeffers
Typical usage of an FX file would be to have a technique for each different type of geometry or effect: for example, a "BillboardTechnique" and a "BlinnPhongLightingTechnique", rather than one "uber technique" with many passes.

Essentially, it's one technique for each major step in the rendering process.

Was the performance drop you noticed really that substantial? Ideally you should program your application correctly first and then go about removing the bottlenecks. In my experience (and from what I've read of others), the FX framework is pretty good on the performance front.

hth
Jack


The performance drop was not that substantial, actually. I think I should definitely use one technique per type of geometry. Thanks.

Quote: Original post by hplus0603
Typically, you'd use a technique per kind of rendering, rather than a pass. You can split your passes out into separate techniques.

There's nothing wrong with using an effect file like that. If it works for you, then great! You should probably try to batch your drawing per technique, so you draw everything with technique A at the same time (sorted near-to-far if it's opaque), then everything with technique B, etc.



Maybe I can introduce batching into my current design, but it may be better to use scene graphs, as Dorvo mentioned. It will take some time for me to figure it out, but thanks anyway.
By the way, am I right in thinking that near-to-far sorting is meant to avoid drawing objects on top of one another (overdraw) for better performance? I've also got some partially transparent objects like smoke traces and particles.

Quote: Original post by circlesoft
As far as putting multiple techniques in the same effect file goes: if you are working on a pretty serious engine, I recommend that you keep only one kind of technique per effect file, and just provide multiple fallbacks of that technique (for older hardware).

For example, using Jack's BlinnPhongLightingTechnique example, you might have 2-4 different BlinnPhongLightingTechnique implementations: an SM 3.0 one, an SM 2.0 one, and an SM 1.x one. The Effect framework will automatically validate and pick the appropriate one for the hardware you are currently running on (make sure they are ordered correctly, though, so the highest one possible is used).

Of course, if you are just doing a demo, then grouping multiple types of techniques into the same effect file should be okay. Just be wary if you want to add fallbacks to those later - it will get pretty confusing and cumbersome to use.



This looks like quite an advanced topic for me. I suppose what you mean is that, in the case of the shader I posted above, one .fx file should contain different versions of, for example, the StaticGeometry vertex and pixel shaders. Am I correct? Anyway, thank you.

Just to add to the awesome replies that you already have here: with my rendering engine, all I do is keep a collection of shaders, one per simple/plain effect, very similar to the tutorial in the DirectX SDK. Then I have a shader/resource manager where I keep a list of the shaders. Each renderable entity has a reference to a shader, and when I batch the renderable entities, I sort them all by their respective resources and then render them.
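A minimal sketch of that sort-by-resource step (the names are illustrative, not my actual code):

#include <d3dx9.h>
#include <algorithm>
#include <vector>

struct Renderable
{
    ID3DXEffect* pEffect;        // the shader this entity renders with
    // ... mesh, textures, transform, and so on
};

// Order entities so that those sharing an effect end up adjacent,
// letting the renderer switch shaders once per group instead of per entity.
bool ByShader(const Renderable* a, const Renderable* b)
{
    return a->pEffect < b->pEffect;
}

void SortByResource(std::vector<Renderable*>& batch)
{
    std::sort(batch.begin(), batch.end(), ByShader);
}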

I hope this helps.
Take care.
Quote: Original post by Armadon
Just to add to the awesome replies that you already have here: with my rendering engine, all I do is keep a collection of shaders, one per simple/plain effect, very similar to the tutorial in the DirectX SDK. Then I have a shader/resource manager where I keep a list of the shaders. Each renderable entity has a reference to a shader, and when I batch the renderable entities, I sort them all by their respective resources and then render them.

I hope this helps.
Take care.


Does that mean you load multiple effect files in one application? Please correct me if I'm wrong.

Right now I'm thinking of using one technique for each type of geometry, but I'm not planning to divide the techniques across multiple files.
Anyway, thanks.
Quote: Original post by Jack Bartlett
Quote: Original post by Armadon
Just to add to the awesome replies that you already have here: with my rendering engine, all I do is keep a collection of shaders, one per simple/plain effect, very similar to the tutorial in the DirectX SDK. Then I have a shader/resource manager where I keep a list of the shaders. Each renderable entity has a reference to a shader, and when I batch the renderable entities, I sort them all by their respective resources and then render them.

I hope this helps.
Take care.

Does that mean you load multiple effect files in one application? Please correct me if I'm wrong.

Right now I'm thinking of using one technique for each type of geometry, but I'm not planning to divide the techniques across multiple files.
Anyway, thanks.

I can't speak for Armadon's implementation, but in my current "engine" I've got several separate fx/vsh/psh files loaded at the same time. I might combine them later on if the effect switching proves to be a performance problem, but for now I prefer to have them logically grouped.

For example, I have the ShadowMapping.fx file I mentioned above, an HDRI.fx file for HDRI post-processing, and then LightingModels.fx for all the advanced lighting models.
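Loading several effect files side by side is straightforward. A sketch (the file names match the ones above; pDevice is an already-initialized device, and error handling is omitted):

#include <d3dx9.h>

extern LPDIRECT3DDEVICE9 pDevice;   // hypothetical: your initialized D3D9 device

ID3DXEffect* pShadowFx = NULL;
ID3DXEffect* pHdriFx   = NULL;
ID3DXEffect* pLightFx  = NULL;

void LoadEffects()
{
    D3DXCreateEffectFromFile(pDevice, "ShadowMapping.fx",  NULL, NULL, 0, NULL, &pShadowFx, NULL);
    D3DXCreateEffectFromFile(pDevice, "HDRI.fx",           NULL, NULL, 0, NULL, &pHdriFx,   NULL);
    D3DXCreateEffectFromFile(pDevice, "LightingModels.fx", NULL, NULL, 0, NULL, &pLightFx,  NULL);
    // Passing a shared ID3DXEffectPool as the pool argument (the NULL just
    // before each effect pointer) would let the files share parameters such
    // as the view and projection matrices.
}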

hth
Jack

Jack Hoxley [ Forum FAQ | Revised FAQ | MVP Profile | Developer Journal ]

Quote: Original post by jollyjeffers
I can't speak for Armadon's implementation, but in my current "engine" I've got several separate fx/vsh/psh files loaded at the same time. I might combine them later on if the effect switching proves to be a performance problem, but for now I prefer to have them logically grouped.

For example, I have the ShadowMapping.fx file I mentioned above, an HDRI.fx file for HDRI post-processing, and then LightingModels.fx for all the advanced lighting models.

hth
Jack


Thanks for the advice! I will try it after separating the passes into techniques.

This topic is closed to new replies.
