effect based render engine

Started by
7 comments, last by xelanoimis 18 years, 1 month ago
I see a lot of you using DX effects in your renderers, so I was curious how best to support them from the render engine's design point of view:

1. Use an effect instance for each model (or mesh), so each can have its own specific parameters (like textures and colors). This results in many effect files loaded at the same time.

2. Create a "material" class (associated with the model) that provides only model-specific data (like textures and colors) to a shared effect. This way the shader is viewed as a rendering pipeline, and only a small number of effect files are loaded at the same time.

The second version seems faster, but harder to implement, and may limit parameter types (to, say, texture1, texture2, diffuse, specular, etc.).

Considering there isn't an "unlimited" number of models in a game, which version do you use or suggest? If you use materials, do you have a fixed number of parameters associated with specific semantics in the effect, or did you manage to come up with a more general approach?
Definitely some sort of material list/pool. That way you can have constructs such as:

EffectA->SetTechnique( "SuperCoolRendering" );
EffectA->Begin( ... );
    for( each pass )
    {
        EffectA->BeginPass( ... );
            // Configure Effect for Object 1
            RenderObject1( );
            // Configure Effect for Object 2
            RenderObject2( );
            // Configure Effect for Object 3
            RenderObject3( );
        EffectA->EndPass( ... );
    }
EffectA->End( ... );


Instead of:

EffectA->SetTechnique( "SuperCoolRendering" );
EffectA->Begin( ... );
    // Configure Effect for Object 1
    for( each pass )
    {
        EffectA->BeginPass( ... );
            RenderObject1( );
        EffectA->EndPass( ... );
    }
EffectA->End( ... );

EffectB->SetTechnique( "SuperCoolRendering" );
EffectB->Begin( ... );
    // Configure Effect for Object 2
    for( each pass )
    {
        EffectB->BeginPass( ... );
            RenderObject2( );
        EffectB->EndPass( ... );
    }
EffectB->End( ... );

EffectC->SetTechnique( "SuperCoolRendering" );
EffectC->Begin( ... );
    // Configure Effect for Object 3
    for( each pass )
    {
        EffectC->BeginPass( ... );
            RenderObject3( );
        EffectC->EndPass( ... );
    }
EffectC->End( ... );


Yes, the second listing isn't necessarily "real world", but it's effectively what you'd come out with if you had a separate effect for each object.

Initially the second approach might seem easier to implement, but with experience I'd personally say that the first one is much easier to implement and maintain once you've jumped the initial hurdle.

hth
Jack

Jack Hoxley <small>[</small><small> Forum FAQ | Revised FAQ | MVP Profile | Developer Journal ]</small>

I use approach #2, but without bounding it to specific data. Basically, I have 2 ways to link constant data to the actual shader:

(1) Static constant links: the actual data is supplied by the model at load-time. An example would be diffuse color or textures.

(2) Dynamic constant links: the data is updated each frame through function pointers. For example, the view matrix goes through Camera::GetMatView().

At the lowest level, the material only stores void* pointers, so it doesn't differentiate between the different types of constants (i.e. textures, vectors, matrices, etc.).

The whole system can get very very complicated (what I outlined above is just a really broad view of it). Before going crazy with it, you should first outline what your needs are. I needed to be able to load any shader at runtime and dynamically bind it to any available data supplied by the application (dynamic or static), so I went with a much more comprehensive system.

I have an additional type of constant for scripting, but that is kind of an outside issue. That lets you embed a script as an annotation in the effect itself. I don't really use it, but it is nice to have for artists who need to come up with some specific constant that isn't coded into the application itself.
Dustin Franklin ( circlesoft :: KBase :: Mystic GD :: ApolloNL )
It's a hard question, in fact. There is no easy answer, and you must make your own choice.

a. An effect instance for each model.
This sounds easy, but you will soon notice that you need many effects for each model. For example, a car uses one shader for the paint job, another for the glass, another for the chrome parts, and another for the tires (and if you are picky: others for the mirrors, the seats, and the dashboard).
You will slowly notice that each model gets broken into subsets, and you must control the rendering of each subset. Your scene rendering may then go like this:
for each object in scene
    object->Render();

In this case, inside each object you will do something like:

for each subset
    subset->Render();

And each subset will be rendered like:

effect->Begin();
for each effect pass
    mesh->Render();
effect->End();

This looks nice, but it will need a lot of effect changes... that means a lot of state changes, texture changes, and parameter changes (matrices, lights, time, sin/cos values, etc.), and that slows things down. Anyway, this kind of approach is OK if you are considering D3D10 and/or supporting SAS 1.0.

b. Creating a material class is the more 'classical' approach. In this case you load your effects and create a rendering list for each one. Each list is a collection of model subsets that must be rendered using the same material. So your rendering code is done in two passes:
First you send your objects to the appropriate render lists:
for each object in scene
    device->AddObjectToRenderLists(object)

The device object is your rendering device; it holds the lists of subsets to be rendered. AddObjectToRenderLists will look like:

for each subset
    add the subset into the subset->effectID list

Here the effectID identifies the material. The ID is used to direct the subset into the right list (GOLD list, BUMP list, GLASS list, etc). It's suggested to sort the objects within each list by color texture or any other parameter you consider appropriate (like distance to camera).

So, now that each object is in its appropriate render list, you can do:
device->RenderList(GOLD_EFFECT);
device->RenderList(BUMP_EFFECT);
device->RenderList(GLASS_EFFECT);

And so on. You define the rendering order so shaders with alpha blending get rendered after opaque shaders (though if you sort by distance, that wouldn't be a problem).

As objects get batched by effect, the state, texture, and variable changes are reduced, so your rendering gets faster. This approach is more compatible with DXSAS 0.8.

This is just a basic presentation of the problem. Of course, you will have to decide whether to go for implementation (a), which I see as more appropriate for D3D10 and 3D applications (non-games), or (b), which fits the DX9 architecture better and is more appropriate for games, though it is getting older.

But let me give you a piece of advice... forget about programming your own engine and get an existing one (I suggest Torque and the TSE). :)

Luck!
Guimo


Quote:Original post by Guimo
This the effectID identifies the material. The ID is used in order to direct the subset into that list (GOLD list, BUMP list, GLASS list, etc). Its suggested to sort the objects into each list by color texture or other appropiate parameter you consider appropiate (like distance to camera).

A very easy (and performance-friendly) way to do this is to just use a simple tree. I enumerate this more in a kbase article here.

Quote:But let me give you an advice... forget about programming your engine and get any existing engine (suggested Torque and the TSE). :)

Making your own can be a very worthwhile experience. You really do learn a lot, much more than you would if you just used Torque or whatever. If you really want to make a full game, it may be better to license. However, if you just are doing it from a graphics perspective, it is very helpful to roll your own.
Dustin Franklin ( circlesoft :: KBase :: Mystic GD :: ApolloNL )
I've implemented method 2 in my engine, and it wasn't too terribly difficult. I have two classes: Effect, which holds the actual effect interface and a table of per-effect variables, and EffectInstance, each instance of which is a child of an Effect and has per-instance variables. So, for example, I load one Effect. It might have a per-effect variable called LightDirection. I set that, so the light direction will be the same for every instance of the effect.

I then create two separate instances from that Effect, and apply them to two different objects. The instances have a per-instance variable called Diffuse, which I set to different values for each instance.

Then, when I render, I sort by effect and then by instance, so what happens is:

Effect->Begin()
    Instance1->ApplyVariables()
    Object1->Draw()
    Instance2->ApplyVariables()
    Object2->Draw()
Effect->End()

The effect is only begun / ended once, and each instance just applies its per-instance variables to its parent effect.

There are no limitations, since each effect and instance holds a parameter table which is taken straight from the effect file. So, if I want to set the diffuse of an instance, I do

Instance1->SetFloatVector("Diffuse", 1, 0, 0, 1);

If you keep a standard naming convention for parameters for all your effects, you can usually use some of the same code for all effects.

Of course, if I really felt like it, I could derive a class from the EffectInstance class for a certain effect, so that I could get errors from the compiler instead of at run-time in case I mistyped a name. But that's just a personal preference.
_______________________________________________________________________Hoo-rah.
Thank you all for replying.
Drakex, your answer is the closest to what I was asking.
I am not concerned about optimizing for extra speed, because I will not have many objects per frame/scene.
I am more interested in how to organize the material class so it can fit most of my effect needs.

I do not want to write directly in code:
Instance1->SetFloatVector("Diffuse", 1, 0, 0, 1);

I was thinking of some material script that look like:
Prop: "diffuse", TYPE_FLOAT, "1,0,0,1";
Prop: "specular_power", TYPE_FLOAT, "0.7";
Prop: "texture1", TYPE_TEXTURE, "Textures\t1.tga"
where "diffuse", "specular_power", etc. are semantics in the effect that the material will get handles to, setting the specific data before drawing.
The material will be packed with the geometry in the model.

Or, the material can contain only raw data as floats, provided to the effect as 'current material data' before draw. The effect will read data based on an indexed semantic (like "MATERIAL_00_FLOAT" or "MATERIAL_14_VECTOR").

Anyway, from the game logic (maybe from script) I have to be able to do something like: SetMaterialVector( entity.model.material, propidx, myvector );


Something like that... but it's still in a concept phase.
Thanks again,
and if anyone has more suggestions, they are welcome
I am also looking for compatibility in my choices.

I want the interface to be easily usable with OpenGL/Cg (CgFX) instead of Direct3D (Effect), or any other modern graphics language that might be invented for this class of hardware.
The interface should not include special functions specific only to one API.
For example, effect parameters in DX support the generic SetValue( handle, pointer, size ), but I think Cg has only typed Set functions (like SetValueI for integers). So the material (or the effect) should know what kind of data it is setting into the parameter, in order to call the proper Set function.

It has to be pretty generic and adaptable.

