What's the shader data a Material class holds?

21 comments, last by Weton 11 years, 1 month ago
I've been cycling between Christer Ericson, Hodgman and L. Spiro posts, trying to wrap my head around the subject of "rendering systems".
I'm stuck right now on what exactly a material class does.
A material references a shader "resource", which means a combination of shader programs (VS, PS, etc.).
This shader resource a material holds: is it a single permutation of a shader, or can a material access all of the permutations of the shader it references?
Say I have a cube with a material. If my cube changes environment, so that the number of lights changes, do I just use another permutation on the material (the material holds permutation info)? Does the shader itself update its permutation info (so all materials with this shader will now use the new permutation)? Do I need to change the cube's material (to another material with the right permutation)? Or do I need to update the shader on the material (so the material gets a new shader)?
Also, when does a material find out which cbuffer slots to use? This is shader stuff; say you know which cbuffers a shader uses because you're using reflection at asset loading time. Materials come from models, right? Not from HLSL files... I fail to see when things get linked up (shader permutation compilation, model loading, environment lighting info being updated in shader cbuffers...).

Cannot comment on the cbuffer slot issue, but I'd recommend that the material holds a reference to a "base" shader, from which permutations can be built/requested, and the rendering system chooses, during runtime, the actual shader permutation based on the object's lighting environment.

This is how I do it (this is based on Hodgman's posts):

At asset build time I assign each material a set of base "shader resources" (one for each pass the material should be drawn in).

The material chooses the right permutation of each "shader resource" based on what it needs and then, using the correct shader permutation, it finds out which cbuffer/texture slots to use via shader reflection.

A material will contain a bitset where each bit specifies a shader feature that the material needs, and the Renderer will use that bitset to choose the right shader permutation every time the material is used. So you can change the material's appearance just by changing the bitset.

The number of lights or other external factors shouldn't modify the material, after all they're external factors.

So in my engine I have a class called Actor (basically a Model instance), that holds a pointer to a Model and vectors like position, rotation, etc.

The Actor also has a bitset, like the Materials, that will dynamically be updated based on the Actor/lights positions, so the Renderer will get the bitsets from Material/Actor/Geometry and other classes in order to choose the right shader permutation.
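For illustration, here's a minimal sketch of how that bitset merging might look (the class and feature names are made up, not taken from any actual engine):


// Hypothetical sketch: merge the feature bits from Material/Actor/Geometry and
// ask the base shader resource for the matching permutation.
#include <cstdint>
#include <unordered_map>

using FeatureBits = std::uint32_t;

enum ShaderFeature : FeatureBits
{
    Feature_NormalMap   = 1u << 0,  // requested by the Material
    Feature_AlphaTest   = 1u << 1,  // requested by the Material
    Feature_Skinning    = 1u << 2,  // requested by the Geometry
    Feature_PointLights = 1u << 3,  // requested by the Actor's lighting environment
};

struct ShaderProgram;               // one compiled permutation

struct ShaderResource               // the "base" shader a material references
{
    std::unordered_map<FeatureBits, ShaderProgram*> permutations;

    ShaderProgram* GetPermutation(FeatureBits bits)
    {
        auto it = permutations.find(bits);
        return it != permutations.end() ? it->second : nullptr; // or compile on demand
    }
};

// Called by the renderer right before drawing, once all bitsets are known.
ShaderProgram* SelectPermutation(ShaderResource& baseShader,
                                 FeatureBits materialBits,
                                 FeatureBits actorBits,
                                 FeatureBits geometryBits)
{
    return baseShader.GetPermutation(materialBits | actorBits | geometryBits);
}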

In my renderer, I don't even have a material class. A material is just a bunch of data (cbuffers, textures, shaders) and a bunch of commands that bind that data to the pipeline (PSSetConstantBuffers, etc).
I have classes for resources like cbuffers/textures/etc, and I also have a class called StateGroup, which can hold commands to set rendering states (which includes binding resources).
I can use StateGroup to represent a material, as well as other things, e.g.


StateGroup objectStates;   // binds cbuffer with world matrix
StateGroup materialStates; // binds shader, sets blend mode, binds textures, binds material cbuffer
StateGroup lightingStates; // binds cbuffer with light positions
StateGroup* states[3] = { &objectStates, &materialStates, &lightingStates };
Draw( mesh, states, 3 );
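To make the idea concrete, here's one possible (hypothetical) shape for a StateGroup; the actual implementation described above may well differ:


// A StateGroup as just an ordered list of state-setting commands that the
// renderer replays against the device context.
#include <vector>

struct RenderCommand
{
    enum class Type { BindShader, BindCBuffer, BindTexture, SetBlendState } type;
    unsigned    slot     = 0;       // destination slot, where applicable
    const void* resource = nullptr; // cbuffer/texture/shader/state object
};

class StateGroup
{
public:
    void Add(const RenderCommand& cmd)                 { m_commands.push_back(cmd); }
    const std::vector<RenderCommand>& Commands() const { return m_commands; }
private:
    std::vector<RenderCommand> m_commands;
};

// Draw(mesh, states, n) would walk each group in order, execute its commands
// (later groups can override earlier ones), and then issue the draw call.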

This shader resource a material holds: is it a single permutation of a shader, or can a material access all of the permutations of the shader it references?

I always reference a particular "technique", which internally may have many different permutations that can be chosen by the renderer right before each draw call.

Say I have a cube with a material. If my cube changes environment, so that the number of lights changes, do I just use another permutation on the material (the material holds permutation info)? Does the shader itself update its permutation info (so all materials with this shader will now use the new permutation)? Do I need to change the cube's material (to another material with the right permutation)? Or do I need to update the shader on the material (so the material gets a new shader)?

If the lighting environment has changed, I wouldn't make any changes to the material or the shader. The material references a particular shader, and that shader contains techniques for different lighting environments.
When drawing the cube, with this material/shader/lighting environment, the renderer can select the appropriate permutation at the last moment, when it has all this information available to it.

Also, when does a material find out which cbuffer slots to use? This is shader stuff; say you know which cbuffers a shader uses because you're using reflection at asset loading time. Materials come from models, right? Not from HLSL files... I fail to see when things get linked up (shader permutation compilation, model loading, environment lighting info being updated in shader cbuffers...).

I do all of this in the tools, during "data compilation" time.
First I compile the shaders, which tells me their cbuffer layouts (which variables are in which cbuffer structures, and which slots/stages each structure should be bound to).
Then I parse the artists' material descriptions (which for me, are in the COLLADA files), and use the above structure information to create byte-arrays of cbuffer data.
Then I create binding commands, to bind these structures to the appropriate slots/stages (e.g. bind cbuffer structure #1 to pixel shader slot #3).
Then I save these cbuffers and binding commands into a "material file", which contains links to other resource files (textures, shaders, etc) and contains StateGroups and cbuffers to be used as "materials".
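As a rough sketch, that reflection step in D3D11 could look something like this (the CBufferLayout struct is my own, error handling is omitted, and you'd run it once per stage bytecode, e.g. VS and PS):


#include <string>
#include <vector>
#include <d3d11shader.h>
#include <d3dcompiler.h>
#pragma comment(lib, "d3dcompiler.lib")

struct CBufferLayout
{
    std::string name;
    UINT        sizeInBytes = 0;
    UINT        bindSlot    = 0;   // e.g. "bind structure #1 to slot #3"
};

std::vector<CBufferLayout> ReflectCBuffers(const void* bytecode, size_t bytecodeSize)
{
    ID3D11ShaderReflection* reflector = nullptr;
    D3DReflect(bytecode, bytecodeSize, IID_ID3D11ShaderReflection,
               reinterpret_cast<void**>(&reflector));

    D3D11_SHADER_DESC shaderDesc = {};
    reflector->GetDesc(&shaderDesc);

    std::vector<CBufferLayout> layouts;
    for (UINT i = 0; i < shaderDesc.BoundResources; ++i)
    {
        D3D11_SHADER_INPUT_BIND_DESC bindDesc = {};
        reflector->GetResourceBindingDesc(i, &bindDesc);
        if (bindDesc.Type != D3D_SIT_CBUFFER)
            continue;

        // Look up the matching constant buffer to get its size (and variables, if needed).
        ID3D11ShaderReflectionConstantBuffer* cb =
            reflector->GetConstantBufferByName(bindDesc.Name);
        D3D11_SHADER_BUFFER_DESC bufferDesc = {};
        cb->GetDesc(&bufferDesc);

        CBufferLayout layout;
        layout.name        = bindDesc.Name;
        layout.sizeInBytes = bufferDesc.Size;
        layout.bindSlot    = bindDesc.BindPoint;
        layouts.push_back(layout);
    }
    reflector->Release();
    return layouts;
}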

In my renderer, the base material class is called Material and it contains a set of MaterialTechnique objects that describe particular ways that an object can be rendered (e.g. forward rendering, depth-only, deferred g-buffer pass, etc.). Each MaterialTechnique contains an ordered list of ShaderPass objects that totally define all inputs to the shader for any number of passes.

The ShaderPass is probably the most important class in the engine. It contains a handle to a shader program resource and provides methods to make bindings between shader input variables in the shader program and shader input values (vectors, matrices, constant buffers, vertex buffers, textures). This system allows the ShaderPass to persistently verify that all shader inputs match the corresponding variable types in the shader source code. The ShaderPass even contains all vertex data - it has bindings from shader vertex input variables to vertex buffer objects. This seemed like a rather bizarre design decision but it makes sense if you think about it (vertex buffer data layout is dependent on the shader).

The result of storing vertex data in the ShaderPass class is that classes like StaticMesh end up just being a pointer to a Material and an index buffer.

At render time, a ShaderPass and an index buffer are given to the renderer. The renderer iterates over the bindings contained in the ShaderPass for constants (uniforms), textures, and vertex attributes, and then submits those bindings to the graphics API. It then uses the index buffer to draw vertices from the bound buffers.

_______________________________________________________________

Most importantly, the bindings contained in a shader pass also indicate a usage enum (e.g. VERTEX_POSITION, LIGHT_POSITION, MODELVIEW_MATRIX, etc). This enum allows the shader writer to tag each shader input with a type of usage for that variable. If a binding is marked as a dynamic input, the renderer can optionally provide input values (constants/textures) based on scene state for the binding's usage. For instance, a LIGHT_POSITION shader input would cause the renderer to find the closest light to the object being rendered and submit its position to the rendering API. This system is really flexible and handles everything from model view and projection matrices to dynamic shadow and environment maps. Since many shader inputs depend on the dynamic scene state, this system defers input value binding until render time using automatically provided rendering state information.

I haven't really put much effort into a shader permutation system yet though... that's a future project.
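A condensed, hypothetical sketch of the layout described above (the member names are my own guesses at the idea, not the poster's actual code):


#include <vector>

enum class InputUsage { VertexPosition, ModelViewMatrix, ProjectionMatrix,
                        LightPosition, ShadowMap, MaterialConstant };

struct ShaderInputBinding
{
    int         location;   // uniform/cbuffer/attribute slot in the shader program
    InputUsage  usage;      // what the value means, so the renderer can supply it
    bool        isDynamic;  // true: renderer fills it in from scene state at draw time
    const void* value;      // static value (constant, texture, vertex buffer) otherwise
};

struct ShaderPass
{
    int                             programHandle; // compiled shader program resource
    std::vector<ShaderInputBinding> bindings;      // constants, textures, vertex buffers
};

struct MaterialTechnique   // one way of rendering: forward, depth-only, g-buffer, ...
{
    std::vector<ShaderPass> passes;
};

struct Material
{
    std::vector<MaterialTechnique> techniques;
};

struct StaticMesh          // as described: just a material plus an index buffer
{
    Material* material;
    int       indexBufferHandle;
};

// At render time, for each binding marked dynamic, the renderer looks at its usage
// and provides the value itself (e.g. the position of the closest light for
// InputUsage::LightPosition) before issuing the indexed draw.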

Do multiple-pass materials end up on the render queue as totally independent draw call/state group pairs? I mean, will they be sorted in such a way that one pass is executed, then lots of other objects' passes can be executed, and then the second pass can be executed later, is that right?

In Christer Ericson's scheme there's a material/pass key, so I assume all objects with the same material have their pass 1 executed in a row, then all pass 2 executed later, but then,

how do things like shadow map creation work? The shadow map is a global thing, right (not a material thing; objects with different materials will need shadows too)? So I don't see how that can work with Christer Ericson's "per material pass sorting", as objects with one material will be drawn before objects with a different material that also cast shadows, so material 1 objects will not be affected by shadows from material 2... get what I mean? I believe all objects that cast shadows, regardless of material, must be sorted so that the whole shadow-map creation pass is executed first. Is my logic correct?

You can consider a shadow map a whole independent scene view to be rendered (different camera, different render target, different culling), so to me it seems most logical that it would also have its own dedicated render queue. There would be higher-level logic in the renderer which renders the shadow map render queues first.

But yes, generally multipass materials would get broken down into independent drawcalls just like you describe.
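A minimal sketch of that "one queue per view" structure, with made-up names:


// Hypothetical sketch: each view (shadow map, main camera, ...) owns its own
// render queue, and shadow views are processed before the main view.
#include <vector>

struct DrawItem { /* sort key, state groups, draw call */ };

struct RenderView
{
    // camera, render target and culling results for this view
    std::vector<DrawItem> queue;
};

void ExecuteQueue(RenderView& view)
{
    // sort view.queue by its keys, then submit each DrawItem to the device
}

void RenderFrame(std::vector<RenderView>& shadowViews, RenderView& mainView)
{
    for (RenderView& shadow : shadowViews)   // shadow maps first...
        ExecuteQueue(shadow);
    ExecuteQueue(mainView);                  // ...so the main pass can sample them
}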

As you can tell, there is no real definition of a material; it's just a collection of data that describes how to draw your geometry. Everybody has their own way of implementing materials. For instance, mine is responsible for matching MaterialParameters (diffuseColor, specularColor, randomAttributes) to shader constants, and for binding textures. I have a unique material per shader, and per pass.
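As a small illustration of that parameter-matching idea (the names here are hypothetical, not the poster's code):


// Named material parameters matched to shader constant locations that were
// discovered via reflection when the shader was loaded.
#include <array>
#include <map>
#include <string>

using ParamValue = std::array<float, 4>;   // enough for colors and small vectors

struct MaterialInstance
{
    std::map<std::string, ParamValue> parameters;        // "diffuseColor", "specularColor", ...
    std::map<std::string, int>        constantLocation;  // filled in from shader reflection

    // At bind time: for each parameter, write its value to the matching constant location.
};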

Perception is when one imagination clashes with another

I do all of this in the tools, during "data compilation" time.
First I compile the shaders, which tells me their cbuffer layouts (which variables are in which cbuffer structures, and which slots/stages each structure should be bound to).
Then I parse the artists' material descriptions (which for me, are in the COLLADA files), and use the above structure information to create byte-arrays of cbuffer data.
Then I create binding commands, to bind these structures to the appropriate slots/stages (e.g. bind cbuffer structure #1 to pixel shader slot #3).
Then I save these cbuffers and binding commands into a "material file", which contains links to other resource files (textures, shaders, etc) and contains StateGroups and cbuffers to be used as "materials".

Are your binding commands immutable? You create them once and they never change/get updated (like a binding command changing the slot it binds to)?

considering something like this:


stategroup environment - bind lights cbuffer (needs current shader slot info)
stategroup camera - bind viewport, bind render target, bind view projection cbuffer (needs current shader slot info)
drawable{
    stategroup material - bind blend state, bind shaders, bind textures (needs its shader slot info), bind color cbuffer (needs its shader slot info), bind sampler (needs its shader slot info)
    state bind world cbuffer (needs current shader slot info)
}

My difficulty is managing the binding commands. Say I have 2 drawables with different materials, and each material has a different shader with a different cbuffer layout (consider different slot usage only...).

I can't hold binding commands for any of these groups (except the material group), because they change depending on the current shader/material bound, so I'd have to create the commands at runtime based on the drawable's material, which sounds terrible. So if I only updated the commands, it would look like:


drawable::draw( renderQueue, envLights, camera, whatever external thing dependent on the godam shader ){

 states = m_material.GetBindsForEnvLights(envLights);//that would update an existing bind command //or material.shader.Get.. // or material.CreateBindsForEnvLights(envLights),
 states += m_material.GetBindsForCamera(camera);
 states += m_material.GetItsOwnBinds();
 states += m_material.GetBindsForInstance(world);

renderQueue.submit( states, m_drawcall, m_drawableSortKey); //sortKey: camera, material...yupi
}

That's all I can think of...

Keep in mind that my current batch tests are also comparing the bind commands' addresses, which means duplicated bind commands will not be batched out... So I think I always need to use existing binds, but I also need to change them per "currently bound material", which doesn't make any sense...

If I update an existing bind command, it will update for all previous drawables already submitted to the queue, since it's the same object... which means I can't have, for example, 2 drawables with the same material but different world/light/camera cbuffer bindings submitted to the queue.

The last solution would be a different bind command for every possible combination: cameras x materials, lights x materials, drawable worlds x materials... unless I start comparing with operator== in the bind command batch tests...

The only thing I can think of is defining obligatory cbuffer layouts: "drawable", "camera", "environment", "other"...

That way I don't depend on the bound shader anymore... but I know it's lame. =_=

Am I way off? Perhaps I should go with the lame approach to start...

(Note that I'm relying on virtuals (instead of that sinister blob thing) and not doing anything data-driven, for simplicity.)

My difficulty is managing the binding commands. Say I have 2 drawables with different materials, and each material has a different shader with a different cbuffer layout (consider different slot usage only...).

(Are you using shader permutations? If not, you should consider starting to use them.)

Different materials simply use different permutations of the same shader. So make the cbuffer layouts the same in all permutations. (Check the 2nd paragraph of Hodgman's reply to that topic.)

Sure, you'll be binding some data that is not needed in some cases, but you won't have to deal with the complexity that a more dynamic system would introduce. Plus, the number of bound constant buffers will be the same, so go with the "lame" approach and only try to optimize if you find that it's hurting performance.
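For example, a "fixed slots" convention could look something like this (the slot assignments are just an illustration):


// Hypothetical illustration of the fixed-slot convention: every shader
// permutation declares its cbuffers on the same, agreed-upon registers.
enum ReservedCBufferSlot
{
    Slot_PerFrame    = 0,  // camera / view-projection data  (register b0 in HLSL)
    Slot_PerDrawable = 1,  // world matrix                    (register b1)
    Slot_Environment = 2,  // lights                          (register b2)
    Slot_Material    = 3,  // material constants              (register b3)
};

// Because every permutation uses these same slots, the camera/environment/drawable
// state groups can be built once with fixed slot numbers and reused unchanged,
// no matter which material or shader permutation ends up bound for a draw call.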

This topic is closed to new replies.
