Setting states in code versus effect/hlsl

Started by
4 comments, last by cozzie 10 years, 8 months ago

Hi,

I 'accidentally' set my D3D debug output to the highest level and noticed quite a lot of redundant state setting.

Now I think I've come up with a fairly simple improvement:

My render order (queue):

- first everything opaque

- loop through all effects

- loop through all materials using that effect

- loop through all meshes using the material

- loop through all mesh instances of that mesh

- do the same all over again for blended mesh instances

Improvements?

- in my opaque technique I set AlphaBlendEnable and ZWriteEnable

- in my blended technique I set AlphaBlendEnable, ZWriteEnable, SrcBlend, DestBlend, AlphaOp and AlphaArg

Since I also begin those techniques just once each frame, I don't think there's much to gain by doing these with SetRenderState in my code.

Although I'm seeing redundant state settings I can't explain yet...

BUT, for sure my texture sampler is the same for ALL materials, techniques and everything.

Wouldn't it then be better to set Texture, MinFilter, MagFilter, MipFilter and MaxAnisotropy just once each frame, using SetSamplerState in my code (not in the effect/HLSL)?

Just curious what you think about this.

Crealysm game & engine development: http://www.crealysm.com

Looking for a passionate, disciplined and structured producer? PM me


I always set states in code. Example: I might have a mesh shader that sometimes I want with alpha blending enabled, sometimes disabled. Sometimes it's alpha test. Sometimes depth writing. Sometimes I might want to change the cull direction. Otherwise the shader is identical in all cases. Why go with setting them in Effects and having a combinatorial explosion as a result? Doesn't make sense.

If you're worried about redundant states coming from Effects, then just attach an ID3DXEffectStateManager to your Effect and filter them in that before they even go to D3D.
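The filtering a state manager does can be sketched in isolation. The real ID3DXEffectStateManager is a COM interface with callbacks for render states, sampler states, textures and more; the sketch below (class name invented, types simplified, no D3D headers) models only the render-state path, with a plain counter standing in for the actual IDirect3DDevice9::SetRenderState call.

```cpp
#include <cstdint>
#include <unordered_map>

// Sketch of the filtering an ID3DXEffectStateManager implementation would do:
// remember the last value set for each render state and only forward changes.
// The counter stands in for the real device call so the logic is observable.
class RenderStateFilter {
public:
    explicit RenderStateFilter(int* forwardedCalls)
        : m_forwardedCalls(forwardedCalls) {}

    // Mirrors the shape of SetRenderState(D3DRENDERSTATETYPE, DWORD).
    void SetRenderState(uint32_t state, uint32_t value) {
        auto it = m_cache.find(state);
        if (it != m_cache.end() && it->second == value)
            return;                 // redundant: swallow it
        m_cache[state] = value;
        ++*m_forwardedCalls;        // the real device call would happen here
    }

private:
    std::unordered_map<uint32_t, uint32_t> m_cache;  // state -> last value set
    int* m_forwardedCalls;
};
```

A real implementation would derive from ID3DXEffectStateManager, hold the device pointer, and be attached via ID3DXEffect::SetStateManager; every callback would then run through a cache like this before touching the device.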

On the other hand, the option of not using Effects at all remains open and is what I personally favour. Effects is just a bunch of helper classes around the raw API, designed to make things easier for beginners. For real control you can't beat using the raw API directly.

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

Thanks, I'll have to draw out/ profile which states I actually need to set compared to what I set.

For now I'm staying with the d3dx effect framework, because I have a lot of other nice new things to do in the engine first :)

At the moment I already use the D3DXFX_DONOTSAVESTATE flag when executing an effect.

This way I keep control of the states myself.


You might also be interested in adding a small layer between your code and the API calls to allow you to monitor the way that the states are being modified. Really what you are after is to end up with the correct state being set prior to a draw call, and often your C++ engine design may allow the same state to be set multiple times in between draw calls.

In Hieroglyph 3 I use a template to create state monitors (see here for details: State Monitoring). These basically collect the state calls from any objects that have access to set a state, and then remember which states differ from the current API state. When it is time to draw, only the absolutely necessary state changes are issued. This is more efficient, since your C++ code can quickly check values instead of making a driver call (which can involve switching from user mode to kernel mode...).
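That monitoring idea can be sketched with a small template (the Hieroglyph 3 version is more elaborate; the names here are invented for illustration): each monitor tracks the value the API currently has against the value the engine wants, so the dirty check is a cheap CPU-side comparison.

```cpp
// Minimal sketch of a state monitor: tracks the desired value against the
// value last pushed to the API, so redundant changes are caught on the CPU.
template <typename TState>
class StateMonitor {
public:
    explicit StateMonitor(TState initial)
        : m_apiValue(initial), m_desired(initial) {}

    // Objects set what they want; nothing touches the API yet.
    void SetDesired(TState value) { m_desired = value; }

    bool IsDirty() const { return m_desired != m_apiValue; }

    // Called just before the draw call; returns true if the caller should
    // issue the real API call, and records that the API is now up to date.
    bool ApplyIfDirty() {
        if (!IsDirty()) return false;
        m_apiValue = m_desired;
        return true;
    }

private:
    TState m_apiValue;  // what the API currently has
    TState m_desired;   // what the engine last asked for
};
```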

Of course, as mhagain mentioned, this needs direct access to the API (i.e. no effects), but that is my preference too. You will have a better understanding of how things work if you move one step deeper, so I would encourage you to do so - even if you are just exploring :)

Hi.

I've been thinking about this and I have the following plan:

- create a d3d state machine class (and have an object of it within my main d3d rendering class)

- this class will have members keeping track of all current states

(culling mode, blending modes, filtering, etc etc.)

- the class gets a pointer to the d3d device and its member functions are the only place in my engine where states will be set on the device

- the member functions will be to set states, without having to think about the current states

- because the state machine object keeps track of all current states, it will decide (on the CPU) to set/ change the state or not

(thus saving redundant state setting, and perhaps improving the CPU/GPU balance as a bonus)
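A minimal sketch of that plan (class and member names are hypothetical, only two states are shown, and a counter stands in for the IDirect3DDevice9 pointer so the caching behaviour is visible without D3D):

```cpp
#include <cstdint>

// Sketch of the planned state machine: the one place in the engine that sets
// device states. Each setter checks the cached current value first and only
// issues the (stand-in) device call when the state really changes.
class StateMachine {
public:
    explicit StateMachine(int* deviceCalls) : m_deviceCalls(deviceCalls) {}

    void SetCullMode(uint32_t mode) {
        if (mode == m_cullMode) return;   // redundant, skip the device
        m_cullMode = mode;
        ++*m_deviceCalls;                 // real SetRenderState would go here
    }

    void SetAlphaBlendEnable(bool enable) {
        if (enable == m_alphaBlend) return;
        m_alphaBlend = enable;
        ++*m_deviceCalls;
    }

private:
    uint32_t m_cullMode   = 0;      // must match the device's startup state
    bool     m_alphaBlend = false;
    int*     m_deviceCalls;
};
```

One caveat: the cached members must start out matching the device's actual default states (or the cache should be flushed once at startup), otherwise the first 'redundant' skip may leave the device in the wrong state.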

This approach actually sounds fairly simple and straightforward.

Although there is something that makes me doubt. If I do this, I think I'll decrease the flexibility of my effects/shaders. If, for example, my effects are in the future exported from 3D modelling applications, they might contain needed states. I know this might sound like 'what if'/premature optimization and far away, but it's probably good to ask these questions now and not after implementation.

Any input is appreciated.


Anyone?


This topic is closed to new replies.
