[D3D10] Why choose not to use the effect framework?

I learned D3D10 using Frank D. Luna's book, which teaches the effect framework. I've seen several posts on the forum stating that one needn't use the effect framework, which helps explain why there are such functions as D3D10CompileShader() and ID3D10Device::CreateVertexShader(). I had wondered why these were provided, since the only way I'd seen to do things was through the effect framework.

But what I haven't seen is any explanation of the pros and cons of one approach over the other. Can anyone help me understand the benefits of each approach?

Thanks.
First, consider that the effect framework is fine for small engines or demos, but not for big commercial ones, because those are always pushing the hardware to its limits and want to save every cycle they can. Even if that's not always true in every pipeline stage, generally we try to do so. :)

1) The effect framework is DirectX-only, whereas plain HLSL is not, so in a multi-platform engine it will not be portable (the PS3 will not be friendly).
2) Most commercial engines implement material systems with combinable shaders (uber-shaders); that is to say, the rendering shader is a combination of small pieces of precompiled code. For example, you can have a material which renders bricks, and that very same material will have precompiled code paths for reflections, specular, and normal mapping, so you can have the same material but different behavior within it.
This makes artists happy, because they don't have to worry about choosing between tons of materials, and it makes your render-state changes happier too, since materials can share the same render states without sharing the same shader code. => This is not feasible with the effect framework, I think.
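To give a rough idea of what those precompiled code paths look like in practice (this is just a sketch, not from any particular engine; the entry point, profile, file name, and define names are all made up), each permutation is the same HLSL source compiled with different preprocessor defines:

#include <windows.h>
#include <d3d10.h>
#include <d3d10shader.h>

// Hypothetical sketch: compile one "uber" pixel shader source into different
// permutations of the same brick material by toggling a preprocessor define.
ID3D10Blob* CompileBrickPS(const char* src, size_t srcLen, bool normalMapping)
{
    D3D10_SHADER_MACRO defines[] =
    {
        { "USE_NORMAL_MAPPING", normalMapping ? "1" : "0" },
        { NULL, NULL }                        // terminator required by the API
    };

    ID3D10Blob* bytecode = NULL;
    ID3D10Blob* errors   = NULL;
    HRESULT hr = D3D10CompileShader(src, srcLen, "brick.psh",
                                    defines, NULL,       // no include handler
                                    "PSMain", "ps_4_0",  // entry point + profile
                                    0, &bytecode, &errors);
    if (FAILED(hr))
    {
        if (errors) { OutputDebugStringA((char*)errors->GetBufferPointer()); errors->Release(); }
        return NULL;
    }
    if (errors) errors->Release();
    return bytecode;    // caller creates the ID3D10PixelShader from this blob
}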
Probably the biggest problem is that it's binary-only, so if you want to extend it for other platforms or additional features you're SOL.
Well then, if you were not to use the effect framework, how would you go about compiling the shaders?

By specifying the entry point for each shader (vertex, pixel, geometry) and compiling it to bytecode?

And will that give the "same" bytecode as compiling the whole effect at once with the effect framework?
The Effects framework is supplied as source with D3D11, but for 10 it's binary only.

The problem is that it's an artificial construct on top of the real shader API. It's a wrapper that's designed to be used in a certain manner. If your code works in the same way, then happy days - you can use it and forget that the real API ever existed. If not then you're constantly fighting.

So the design introduces a certain level of inflexibility into your program. What if you want to give the user the ability to switch sampler states at runtime? What if you want to reuse the same vertex shader but have different pixel shaders? Or vice-versa? What if you want finer-grained control over how constant buffers are used and updated? What if the whole techniques and passes setup is not even remotely like the kind of code you need to write?

All of these can be done with Effects, but in doing them you're likely going to end up building another layer on top - a wrapper around a wrapper. Whereas if you go down to the real API you can get much simpler, cleaner and more maintainable code.
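For instance, with the raw API nothing stops you from pairing one vertex shader with whatever pixel shader or sampler you want at draw time. A minimal sketch, assuming your renderer already holds the device and the state objects (all the names here are placeholders, not part of any real framework):

#include <d3d10.h>

// Placeholders for whatever your renderer actually holds.
extern ID3D10Device*       g_device;
extern ID3D10VertexShader* g_vsShared;
extern ID3D10PixelShader*  g_psNormal;
extern ID3D10PixelShader*  g_psDebug;
extern ID3D10SamplerState* g_pointSampler;
extern ID3D10SamplerState* g_linearSampler;

void DrawWithVariant(bool debugView, bool pointFiltering, UINT vertexCount)
{
    g_device->VSSetShader(g_vsShared);                          // one VS, reused everywhere
    g_device->PSSetShader(debugView ? g_psDebug : g_psNormal);  // PS chosen per draw
    ID3D10SamplerState* sampler = pointFiltering ? g_pointSampler : g_linearSampler;
    g_device->PSSetSamplers(0, 1, &sampler);                    // swap sampler state at runtime
    g_device->Draw(vertexCount, 0);
}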


Yeah, I figured it was a wrapper, but I'm trying to work out how the wrapper works without having to read through the native C++ code, at which I'm not an expert.

I think I can complete my picture of what it's doing if I know the difference between separate shader bytecodes and the shader bytecode for a full effect.

I mean, if you compile each entry method for the pixel, vertex, geometry, hull and domain shaders separately, won't that give you unoptimized/duplicate data? If so, what is the effect framework doing to optimize that?

I've written a replacement for the effect framework, and now I have 5 different ShaderBytecode objects (PS, VS, GS, HS, DS). And if I have parameters (textures, ...), I have to set them on each of them individually.

In the effect framework I don't have to, because it's doing something under the hood, which is likely the same thing I'm doing.
But I'm not sure. It might be that, because it's a single shader bytecode object, those parameters are more optimized, occur only once, and are shared between the shaders.
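For what it's worth, here's roughly what I mean, as a C++ sketch sticking to the D3D10 stages (hull/domain are D3D11); the names are placeholders. A shared constant buffer or texture just gets bound separately to each stage that uses it:

#include <d3d10.h>

// Sketch with placeholder names: one constant buffer and one texture view,
// bound explicitly to every stage that declares them. There is no shared
// "effect parameter" underneath - each stage has its own bind points.
void BindSharedParameters(ID3D10Device* device,
                          ID3D10Buffer* perFrameCB,
                          ID3D10ShaderResourceView* diffuseSRV)
{
    device->VSSetConstantBuffers(0, 1, &perFrameCB);
    device->GSSetConstantBuffers(0, 1, &perFrameCB);
    device->PSSetConstantBuffers(0, 1, &perFrameCB);

    device->PSSetShaderResources(0, 1, &diffuseSRV);   // only the PS samples the texture
}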
The Effects framework is doing nothing special under the hood. All it can do is parse your "VertexShader =" or whatever line in your effect, pull out the profile and entry point, and send it through the regular compile step - so even Effects compiles everything separately, because that's just the way shaders are compiled. It hides this from you, for sure, but that's what it does. There's nothing in there that interacts with the driver on a different level; everything in Effects is code you can write yourself.
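In other words, the whole compile path amounts to roughly this (a sketch with error handling kept minimal; the file name, entry points and profiles are just examples):

#include <d3d10.h>
#include <d3d10shader.h>

// Sketch of the "regular compile step" Effects wraps: compile each entry point
// separately to bytecode, then create the shader objects from that bytecode.
HRESULT CreateShadersFromSource(ID3D10Device* device,
                                const char* hlslSource, size_t hlslLen,
                                ID3D10VertexShader** outVS,
                                ID3D10PixelShader** outPS)
{
    ID3D10Blob* vsCode = NULL;
    ID3D10Blob* psCode = NULL;

    HRESULT hr = D3D10CompileShader(hlslSource, hlslLen, "shader.hlsl",
                                    NULL, NULL, "VSMain", "vs_4_0", 0, &vsCode, NULL);
    if (FAILED(hr)) return hr;

    hr = D3D10CompileShader(hlslSource, hlslLen, "shader.hlsl",
                            NULL, NULL, "PSMain", "ps_4_0", 0, &psCode, NULL);
    if (FAILED(hr)) { vsCode->Release(); return hr; }

    hr = device->CreateVertexShader(vsCode->GetBufferPointer(), vsCode->GetBufferSize(), outVS);
    if (SUCCEEDED(hr))
        hr = device->CreatePixelShader(psCode->GetBufferPointer(), psCode->GetBufferSize(), outPS);

    vsCode->Release();
    psCode->Release();
    return hr;
}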

Take passes as another example. At the heart of it, a Pass in Effects is nothing more than a struct containing some shader and state objects. You've got an array of these structs, you loop through the array, set your state and make a draw call. And that's it.
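Something along these lines (a sketch of the idea, not the actual Effects internals; the struct and field names are made up):

#include <d3d10.h>

// Sketch of what a "pass" boils down to: shaders plus state objects.
struct Pass
{
    ID3D10VertexShader*      vs;
    ID3D10PixelShader*       ps;
    ID3D10BlendState*        blend;
    ID3D10DepthStencilState* depthStencil;
    ID3D10RasterizerState*   rasterizer;
};

// Loop over the passes, set the state, draw. That's the whole "technique".
void DrawTechnique(ID3D10Device* device, const Pass* passes, size_t passCount, UINT vertexCount)
{
    const float blendFactor[4] = { 0, 0, 0, 0 };
    for (size_t i = 0; i < passCount; ++i)
    {
        device->VSSetShader(passes[i].vs);
        device->PSSetShader(passes[i].ps);
        device->OMSetBlendState(passes[i].blend, blendFactor, 0xFFFFFFFF);
        device->OMSetDepthStencilState(passes[i].depthStencil, 0);
        device->RSSetState(passes[i].rasterizer);
        device->Draw(vertexCount, 0);
    }
}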


So basically, I've already completely implemented all that is required. Great.

