Managing values in effect files

Started by
6 comments, last by skillfreak 18 years, 10 months ago
Hello, I'm currently developing an engine based on Direct3D that uses effect files to define the surface of each rendered triangle. I thought of a system like this: there is a class named Effect, which implements methods like getVector/getInteger etc. The basic idea is that you cannot change the state of Effect objects. If you want to do so, you have to instantiate a new class named EffectVariation, which is derived from Effect, passing an Effect object to the constructor. There you should be able to override some of the effect's default values. The EffectVariation class then provides the renderer with all the necessary information.

But I have a problem implementing this idea: how should I cope with the different values that can be assigned to the effect? I thought of keeping an array with all values. When a specific EffectVariation gets rendered, it first sets the variables of its superclass (Effect), then its own variables (so there would be two pools of variables). But I'm not convinced that this will work efficiently.

Another approach I thought of: I could use the CloneEffect method of the ID3DXEffect interface. An EffectVariation would then duplicate the base effect and operate on the new copy. Do such clones share common resources (e.g. pixel shaders, vertex shaders...)? If not, I will probably discard this idea, because setting a new vertex shader (including all parameters) every time a new mesh gets rendered is not acceptable efficiency-wise.

If you have other ideas or proposals, please let me know. :)

[Edited by - SlimTimmy on June 2, 2005 11:55:11 PM]
'The basic idea is, that you cannot change the state of Effect objects'
Why do this? Isn't that useful?
Quote:Original post by skillfreak
'The basic idea is, that you cannot change the state of Effect objects'
Why do this? Isn't that useful?


Ok, I'll give you an example:
Suppose there are two meshes in the scene graph. They both use the same effect, but their textures are different.
During rendering, the scene graph collects a render operation from each mesh. So, for instance, mesh A builds a render operation, filling in its modified effect (its texture has been set). The same happens with mesh B. (This is the critical point: if I used the plain effect interface, the values would be overridden, and both meshes would end up using just *one* texture.)
Finally the scene graph sorts the meshes by their effects and renders them.
Did I explain it understandably?
Can anybody tell me which approach I should use?
Why can't the mesh just set its own correct texture?

If I've got this straight:

You have two meshes, RobotA and RobotB. They have an awesome effect for their rendering, but different textures - say, to tell them apart. =)
The scene graph gets its stuff in order -> RobotA and RobotB are set to be rendered. So the graph takes A/B, sorts them for more efficient rendering, and calls back to them to let them render.
This is the point where I don't quite follow you. Couldn't you just set the texture here?

Mesh::Render()
{
    SetTexture( mTexture );
    mEffect->Start();
    RenderMe();
    mEffect->End();
}
The mesh has no render method of its own. I'll show you some pseudo code:

Mesh::getRenderOperation()
{
    m_Effect.set(Texture);
    RenderOperation Op(m_Effect, m_VertexBuffer, ...);
}

// The scene graph would do something like this:
Scenegraph::render()
{
    foreach(Meshes as Mesh)
    {
        m_List.add(Mesh.getRenderOperation());
        // here you could also distinguish between transparent and opaque objects
    }
    m_List.sortByEffect();

    foreach(m_List as RenderOperation)
    {
        m_Renderer.performOperation(ListItem);
    }
}


You could also sort by the different vertex buffers. This approach makes it possible to avoid unnecessary switches.

[Edited by - SlimTimmy on June 4, 2005 10:46:25 AM]
Please give me some advice. (or criticize my rendering technique ;) )
I do not have long experience with effects and their best integration into software; however, it seems to me that tying the needed textures directly to the effect operation is probably not the best design. imo.

If I had a cup with ^sf imprinted on it (that being the texture) and I wanted to be able to distort it with a given effect, I would render the cup with the method illustrated in my previous post. To me, the effect makes more sense when it can be tied to various objects in the scene without carrying the textures with it.

Given that contrast, how are you looking at the use of effects/textures within the engine?

And where would this call lead off to?:
m_Renderer.performOperation(ListItem);
