
What's the shader data a Material class holds?

Old topic!

Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

22 replies to this topic

#1Icebone1000  Members

Posted 10 March 2013 - 10:07 AM

I've been cycling between Christer Ericson, Hodgman, and L. Spiro posts, trying to wrap my head around the subject of "rendering systems".
I'm stuck right now on what exactly a material class does.

A material references a shader "resource", which means a combination of shader programs (VS, PS, etc.).

Is the shader resource a material holds a single permutation of a shader, or can a material access all of the permutations of the shader it references?

Say I have a cube with a material, and my cube changes environment so that the number of lights changes. Do I just use another permutation on the material (the material holds permutation info)? Does the shader itself update its permutation info (so all materials with this shader now use the new permutation)? Do I need to change the cube's material (to another material with the right permutation)? Or do I need to update the shader on the material (so the material gets a new shader)?

Also, when does a material find out which cbuffer slots to use? This is shader stuff: say you know which cbuffers a shader uses because you're using reflection at asset loading time. Materials come from models, right? Not from HLSL files. I fail to see when things get linked up (shader permutation compilation, model loading, environment lighting info being updated in shader cbuffers...).

#2AgentC  Members

Posted 10 March 2013 - 01:03 PM

Cannot comment on the cbuffer slot issue, but I'd recommend that the material holds a reference to a "base" shader, from which permutations can be built/requested, and the rendering system chooses, during runtime, the actual shader permutation based on the object's lighting environment.

Every time you add a boolean member variable, God kills a kitten. Every time you create a Manager class, God kills a kitten. Every time you create a Singleton...

#3TiagoCosta  Members

Posted 10 March 2013 - 05:18 PM

This is how I do it (this is based on Hodgman's posts):

At asset build time I assign each material a set of base "shader resources" (one for each pass the material should be drawn in).

The material chooses the right permutation of each "shader resource" based on what it needs and then, using the correct shader permutation, finds out which cbuffer/texture slots to use via shader reflection.

A material will contain a bitset where each bit specifies a shader feature that the material needs, and the Renderer will use that bitset to choose the right shader permutation every time the material is used. So you can change the material's appearance by changing the bitset.

The number of lights or other external factors shouldn't modify the material, after all they're external factors.

So in my engine I have a class called Actor (basically a Model instance), that holds a pointer to a Model and vectors like position, rotation, etc.

The Actor also has a bitset, like the Materials, that will dynamically be updated based on the Actor/lights positions, so the Renderer will get the bitsets from Material/Actor/Geometry and other classes in order to choose the right shader permutation.
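The bitset scheme described above might be sketched roughly like this. This is a minimal illustration, not TiagoCosta's actual engine code: the feature names, the ShaderResource type, and the map-based permutation lookup are all assumptions.

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>

// Hypothetical feature bits -- names are illustrative only.
enum ShaderFeature : uint32_t {
    FEATURE_DIFFUSE_MAP = 1u << 0,
    FEATURE_NORMAL_MAP  = 1u << 1,
    FEATURE_SKINNING    = 1u << 2,
    FEATURE_POINT_LIGHT = 1u << 3,
};

struct ShaderResource {
    // Maps a combined feature bitset to a compiled permutation id
    // (e.g. a handle to shader bytecode).
    std::unordered_map<uint32_t, int> permutations;

    int Select(uint32_t materialBits, uint32_t actorBits) const {
        // OR together static (material) and dynamic (actor/lighting) features.
        uint32_t key = materialBits | actorBits;
        auto it = permutations.find(key);
        return it != permutations.end() ? it->second : -1; // -1: no such permutation
    }
};
```

The renderer would compute `materialBits | actorBits` fresh each time the material is used, so external factors like nearby lights never have to modify the material itself.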

Edited by TiagoCosta, 10 March 2013 - 05:21 PM.

#4Hodgman  Moderators

Posted 10 March 2013 - 11:53 PM

POPULAR

In my renderer, I don't even have a material class. A material is just a bunch of data (cbuffers, textures, shaders) and a bunch of commands that bind that data to the pipeline (PSSetConstantBuffers, etc).
I have classes for resources like cbuffers/textures/etc, and I also have a class called StateGroup, which can hold commands to set rendering states (which includes binding resources).
I can use StateGroup to represent a material, as well as other things, e.g.

StateGroup objectStates;   // binds cbuffer with world matrix
StateGroup materialStates; // binds shader, sets blend mode, binds textures, binds material cbuffer
StateGroup lightingStates; // binds cbuffer with light positions
StateGroup* states[3] = { &objectStates, &materialStates, &lightingStates };
Draw( mesh, states, 3 );

Is the shader resource a material holds a single permutation of a shader, or can a material access all of the permutations of the shader it references?

I always reference a particular "technique", which internally may have many different permutations that can be chosen by the renderer right before each draw call.

Say I have a cube with a material, and my cube changes environment so that the number of lights changes. Do I just use another permutation on the material (the material holds permutation info)? Does the shader itself update its permutation info (so all materials with this shader now use the new permutation)? Do I need to change the cube's material (to another material with the right permutation)? Or do I need to update the shader on the material (so the material gets a new shader)?

If the lighting environment has changed, I wouldn't make any changes to the material or the shader. The material references a particular shader, and that shader contains techniques for different lighting environments.
When drawing the cube, with this material/shader/lighting environment, the renderer can select the appropriate permutation at the last moment, when it has all this information available to it.

Also, when does a material find out which cbuffer slots to use? This is shader stuff: say you know which cbuffers a shader uses because you're using reflection at asset loading time. Materials come from models, right? Not from HLSL files. I fail to see when things get linked up (shader permutation compilation, model loading, environment lighting info being updated in shader cbuffers...).

I do all of this in the tools, during "data compilation" time.
First I compile the shaders, which tells me their cbuffer layouts (which variables are in which cbuffer structures, and which slots/stages each structure should be bound to).
Then I parse the artists' material descriptions (which for me are in the COLLADA files), and use the above structure information to create byte-arrays of cbuffer data.
Then I create binding commands, to bind these structures to the appropriate slots/stages (e.g. bind cbuffer structure #1 to pixel shader slot #3).
Then I save these cbuffers and binding commands into a "material file", which contains links to other resource files (textures, shaders, etc) and contains StateGroups and cbuffers to be used as "materials".
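The "byte-arrays of cbuffer data" step might be sketched like so. This is a hypothetical packer: CBufferLayout and its offsets map stand in for whatever the shader reflection step actually produced, and the variable names are made up.

```cpp
#include <array>
#include <cassert>
#include <cstring>
#include <map>
#include <string>
#include <vector>

// Stand-in for reflected layout info: total size plus byte offset per variable.
struct CBufferLayout {
    size_t sizeInBytes;
    std::map<std::string, size_t> offsets; // variable name -> byte offset
};

// Pack named float4 values from a material description into a raw byte array
// that can later be uploaded verbatim into the GPU constant buffer.
std::vector<unsigned char> PackCBuffer(
    const CBufferLayout& layout,
    const std::map<std::string, std::array<float, 4>>& values)
{
    std::vector<unsigned char> bytes(layout.sizeInBytes, 0);
    for (const auto& [name, value] : values) {
        auto it = layout.offsets.find(name);
        if (it == layout.offsets.end()) continue; // variable not in this cbuffer
        std::memcpy(bytes.data() + it->second, value.data(), sizeof(float) * 4);
    }
    return bytes;
}
```

The resulting byte array, together with a "bind to slot N" command, is exactly the kind of thing that can be serialized into a material file.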

Edited by Hodgman, 10 March 2013 - 11:54 PM.

#5Aressera  Members

Posted 11 March 2013 - 12:44 AM

In my renderer, the base material class is called Material and it contains a set of MaterialTechnique objects that describe particular ways that an object can be rendered (i.e. forward rendering, depth-only, deferred g-buffer pass, etc.). Each MaterialTechnique contains an ordered list of ShaderPass objects that totally define all inputs to the shader for any number of passes.

The result of storing vertex data in the ShaderPass class is that classes like StaticMesh end up just being a pointer to a Material and an index buffer.

At render time, a ShaderPass and an index buffer are given to the renderer. The renderer iterates over the bindings contained in the ShaderPass for constants (uniforms), textures, and vertex attributes, and then submits those bindings to the graphics API. It then uses the index buffer to draw vertices from the bound buffers.


Most importantly, the bindings contained in a shader pass also indicate a usage enum (i.e. VERTEX_POSITION, LIGHT_POSITION, MODELVIEW_MATRIX, etc). This enum allows the shader writer to tag each shader input with a type of usage for that variable. If a binding is marked as a dynamic input, the renderer can optionally provide input values (constants/textures) based on scene state for the binding's usage. For instance, a LIGHT_POSITION shader input would cause the renderer to find the closest light to the object being rendered and submit its position to the rendering API. This system is really flexible and handles everything from model view and projection matrices to dynamic shadow and environment maps. Since many shader inputs depend on the dynamic scene state, this system defers input value binding until render time using automatically provided rendering state information.
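The usage-enum mechanism described above might look roughly like this. The Usage values, Provider type, and Renderer are illustrative stand-ins, not the actual engine code: the renderer keeps one value provider per usage tag and fills in dynamic inputs at draw time.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <utility>
#include <vector>

// Hypothetical usage tags a shader author can put on inputs.
enum Usage { MODELVIEW_MATRIX, LIGHT_POSITION, USER_CONSTANT };

struct Binding { Usage usage; int slot; };

// One provider per usage, queried against current scene state at draw time.
using Provider = std::function<const void*()>;

struct Renderer {
    std::map<Usage, Provider> providers;
    // Stand-in for the real graphics-API submission: records (slot, data) pairs.
    std::vector<std::pair<int, const void*>> submitted;

    void BindPass(const std::vector<Binding>& bindings) {
        for (const Binding& b : bindings) {
            auto it = providers.find(b.usage);
            if (it != providers.end())
                submitted.push_back({b.slot, it->second()}); // dynamic input filled here
        }
    }
};
```

A LIGHT_POSITION provider, for example, would query the scene for the closest light to the object being rendered; bindings with no registered provider are left for the material's own data.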

I haven't really put much effort into a shader permutation system yet though... that's a future project.

#6Icebone1000  Members

Posted 11 March 2013 - 06:03 AM

Do multi-pass materials end up in the render queue as totally independent draw call/state group pairs? I mean, will they be sorted in a way that one pass is executed, then lots of other objects' passes are executed, then the second pass is executed later? Is that right?

In Christer Ericson's scheme there's a material/pass key, so I assume all objects with the same material have their pass 1 executed in a row, then all the pass 2s are executed later. But then,

how do things like shadow map creation work? The shadow map is a global thing, right (not a material thing; objects with different materials will need shadows too)? So I don't see how it can work with that Christer Ericson "per-material pass sorting": objects of one material will be drawn before objects with a different material that also cast shadows, so material 1's objects will not be affected by shadows from material 2's... get what I mean? I believe all objects that cast shadows, material independent, must be sorted so that the whole shadow map creation pass is executed first. Is my logic correct?

Edited by Icebone1000, 11 March 2013 - 06:04 AM.

#7AgentC  Members

Posted 11 March 2013 - 06:47 AM

You can consider a shadow map a whole independent scene view to be rendered (different camera, different rendertarget, different culling), so to me it seems most logical that it would also have its own dedicated renderqueue. There would be higher-level logic in the renderer which renders the shadow map renderqueues first.

But yes, generally multipass materials would get broken down into independent drawcalls just like you describe.
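The "shadow map as its own view" idea amounts to a tiny scheduling rule, sketched below. View and RenderOrder are illustrative names, not from any particular engine: each view owns its own queue, and shadow views are simply processed before camera views.

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Each scene view (shadow map, main camera, ...) owns its own render queue;
// here a view is reduced to a name plus a shadow-pass flag.
struct View {
    std::string name;
    bool isShadowPass;
};

// Return view names in execution order: all shadow views first, otherwise
// preserving submission order (stable sort).
std::vector<std::string> RenderOrder(std::vector<View> views) {
    std::stable_sort(views.begin(), views.end(),
                     [](const View& a, const View& b) {
                         return a.isShadowPass > b.isShadowPass; // shadows first
                     });
    std::vector<std::string> order;
    for (const View& v : views) order.push_back(v.name);
    return order;
}
```

This keeps Ericson-style per-material pass sorting intact within each view's queue, while guaranteeing every shadow map exists before any main-view material samples it.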


#8Seabolt  Members

Posted 11 March 2013 - 08:44 AM

As you can tell, there is no real definition of a material; it's just a collection of data that describes how to draw your geometry. Everybody has their own way of implementing the material. For instance, mine is responsible for matching MaterialParameters (diffuseColor, specularColor, randomAttributes) to shader constants, and for binding textures. I have a unique material per shader and per pass.

Perception is when one imagination clashes with another

#9Icebone1000  Members

Posted 22 March 2013 - 05:26 PM

I do all of this in the tools, during "data compilation" time.
First I compile the shaders, which tells me their cbuffer layouts (which variables are in which cbuffer structures, and which slots/stages each structure should be bound to).
Then I parse the artists' material descriptions (which for me are in the COLLADA files), and use the above structure information to create byte-arrays of cbuffer data.
Then I create binding commands, to bind these structures to the appropriate slots/stages (e.g. bind cbuffer structure #1 to pixel shader slot #3).
Then I save these cbuffers and binding commands into a "material file", which contains links to other resource files (textures, shaders, etc) and contains StateGroups and cbuffers to be used as "materials".

Are your binding commands immutable? You create them once and they never change/get updated (like a binding command changing the slot it binds to)?

Considering something like this:

stategroup environment - bind lights cbuffer (needs current shader slot info)
stategroup camera - bind viewport, bind render target, bind view-projection cbuffer (needs current shader slot info)
drawable {
    stategroup material - bind blend state, bind shaders, bind textures (needs its shader slot info), bind color cbuffer (needs its shader slot info), bind sampler (needs its shader slot info)
    state - bind world cbuffer (needs current shader slot info)
}

My difficulty is managing the binding commands. Say I have 2 drawables with different materials, and each material has a different shader with a different cbuffer layout (consider different slot usage only...).

I can't hold binding commands for any of these groups (except the material group), because they change depending on the current shader/material bound, so I'd have to create the commands at runtime based on the drawable's material, which sounds terrible. So if I only updated the commands, it would look like:

drawable::draw( renderQueue, envLights, camera, whatever external thing dependent on the godam shader ){

states = m_material.GetBindsForEnvLights(envLights);//that would update an existing bind command //or material.shader.Get.. // or material.CreateBindsForEnvLights(envLights),
states += m_material.GetBindsForCamera(camera);
states += m_material.GetItsOwnBinds();
states += m_material.GetBindsForInstance(world);

renderQueue.submit( states, m_drawcall, m_drawableSortKey); //sortKey: camera, material...yupi
}

That's what I can think of...

Keep in mind that my current batch tests also compare the bind commands' addresses, which means duplicated bind commands will not be batched out... So I think I need to always reuse existing binds, but I also need to change them per "material currently bound", which doesn't make any sense...

If I update an existing bind command, it will update for all previous drawables already submitted to the queue, since it's the same command... Which means I can't have, for example, 2 drawables with the same material but different world/light/camera cbuffer bindings submitted to the queue.

The last solution would be to have a different bind command for every possible combination: cameras x materials, lights x materials, drawable worlds x materials... unless I start to compare by operator== in the bind command batch tests...

The only thing I can think of is defining obligatory cbuffer layouts: "drawable", "camera", "environment", "other"...

That way I don't depend on the shader bound anymore... but I know it's lame. =_=

Am I too far off? Perhaps I should go lame for a start...

(Note that I'm relying on virtuals (instead of that sinister blob thing) and not doing anything data-driven, for simplicity.)

#10TiagoCosta  Members

Posted 22 March 2013 - 05:50 PM

My difficulty is managing the binding commands. Say I have 2 drawables with different materials, and each material has a different shader with a different cbuffer layout (consider different slot usage only...).

(Are you using shader permutations? If not, you should consider starting to use them.)

Different materials simply use different permutations of the same shader. So make the cbuffer layouts the same in all permutations. (Check the 2nd paragraph of Hodgman's reply to this topic.)

Sure, you'll be binding some data that is not needed in some cases, but you won't have to deal with the complexity that a more dynamic system would introduce. Plus, the number of bound constant buffers will be the same, so go with the "lame" approach and only try to optimize if you find that it's hurting performance.

#11Hodgman  Moderators

Posted 22 March 2013 - 11:14 PM

1) Are your binding commands immutable?
2) My difficulty is managing the binding commands. Say I have 2 drawables with different materials, and each material has a different shader with a different cbuffer layout (consider different slot usage only...).
2.a) bind view-projection cbuffer (needs current shader slot info)
2.b) bind textures (needs its shader slot info)
3) I can't hold binding commands for any of these groups (except the material group), because they change depending on the current shader/material bound.

1) Yes.

2) I choose to simply be explicit with my cbuffer slot assignments. E.g. across different shaders that will be used in the same stage in the same way (e.g. opaque/gbuffer rendering), I'll always assign their camera cbuffer to slot #1. I do this by defining the camera cbuffer in a header (using the HLSL register keyword), which is included into the various shaders.

2.a) If you've got the ability to enforce that every 'camera' cbuffer contains the same layout across shaders, then you've also got the ability to enforce the same register numbering scheme across shaders.
2.b) The textures that need to be bound and the shader that will be used by a material are both known at data-build-time, so assuming that the material is immutable, then there's no issue there.

[edit] If it's a runtime-generated texture (for post-processing, or a deferred shadow mask in a forward renderer, etc.), then I do the same thing as (2.a): I enforce that every shader specifies this texture binding on the same slot/register number, so that the same binding command can be used across shaders. [/edit]

3) If I were to support that case, then the cbuffer binding command would contain a string and a cbuffer data pointer, and the string would be converted to a slot # at draw time. This is pretty inefficient though, so I don't do this and instead do the above.
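The fixed-slot convention from point (2) might look like this on the C++ side. The slot names and numbers here are made up for illustration; in HLSL the same convention would live in a shared header using the register() keyword.

```cpp
#include <cassert>

// Hypothetical slot convention shared by every shader used in the same way
// (opaque/gbuffer rendering, etc.). In HLSL this would be mirrored by a
// common header: cbuffer Camera : register(b1) { ... };
enum CBufferSlot : int {
    SLOT_PER_OBJECT = 0, // world matrix, bone matrices
    SLOT_CAMERA     = 1, // view-projection matrix
    SLOT_LIGHTS     = 2, // light positions/colours
};

// Because the slot is fixed by convention rather than looked up per shader,
// the binding command can be built once and reused unchanged -- i.e. immutable.
struct BindCBufferCmd {
    int slot;
    const void* buffer;
};

BindCBufferCmd MakeCameraBind(const void* cameraCBuffer) {
    return { SLOT_CAMERA, cameraCBuffer };
}
```

With this convention, the camera/environment/per-object state groups no longer depend on which shader happens to be bound, which resolves the problem in point (3) without any draw-time name lookups.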

Edited by Hodgman, 23 March 2013 - 10:53 PM.

#12Icebone1000  Members

Posted 23 March 2013 - 08:07 AM

My difficulty is managing the binding commands. Say I have 2 drawables with different materials, and each material has a different shader with a different cbuffer layout (consider different slot usage only...).

(Are you using shader permutations? If not, you should consider starting to use them.)

Different materials simply use different permutations of the same shader. So make the cbuffer layouts the same in all permutations. (Check the 2nd paragraph of Hodgman's reply to this topic.)

Sure, you'll be binding some data that is not needed in some cases, but you won't have to deal with the complexity that a more dynamic system would introduce. Plus, the number of bound constant buffers will be the same, so go with the "lame" approach and only try to optimize if you find that it's hurting performance.

I meant materials with completely different shader effects, not just different permutations of the same effect.

In my understanding, you're not meant to use the same shader for all objects. Do you do something like different queues for different shaders?

Edited by Icebone1000, 23 March 2013 - 08:07 AM.

Posted 25 March 2013 - 01:24 PM

This is an interesting thread - I'm getting to my material system very soon - you guys keep mentioning the 'cbuffer' - what exactly is that?

Apologies for piggy backing this one but it might help me and others to understand some of the terminology. Also, what's a 'state group'?

Thanks

#14Icebone1000  Members

Posted 25 March 2013 - 07:11 PM

This is an interesting thread - I'm getting to my material system very soon - you guys keep mentioning the 'cbuffer' - what exactly is that?

Apologies for piggy backing this one but it might help me and others to understand some of the terminology. Also, what's a 'state group'?

Thanks

A cbuffer (constant buffer) is how the HLSL shader language handles variables (the ones that will be constant during the whole vertex/pixel/any shader program's processing, like world/view/projection matrices, color, anything you set for a specific vertex buffer and draw call). They're like a struct you create in your shader file, bound to a specific register "index". Check the DX documentation for more details.
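On the CPU side a cbuffer is usually mirrored by a plain struct whose layout matches HLSL's 16-byte packing rules, so the bytes can be copied straight into the GPU buffer (e.g. via Map or UpdateSubresource in D3D11). A minimal sketch, with the struct and its fields made up for illustration:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical C++ mirror of an HLSL cbuffer. HLSL packs cbuffer members into
// 16-byte registers, so the CPU-side struct must match that layout exactly.
struct alignas(16) PerObjectConstants {
    float world[16];     // float4x4 World;
    float color[4];      // float4   Color;
    float specularPower; // float    SpecularPower;
    float padding[3];    // explicit padding up to the next 16-byte boundary
};
```

Getting this packing wrong (e.g. forgetting the padding) is a classic source of garbage constants, which is one reason reflection-driven layouts, as discussed earlier in the thread, are attractive.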

A state group is specific to Hodgman's rendering system; it refers to a list of "pipeline state setters": commands that set things like cbuffers, blend state, shaders, textures, etc.

check this post:

http://www.gamedev.net/topic/605065-renderqueue-design-theory-and-implementation/#entry4828468

One can gather a great amount of info just by following Hodgman's footprints on these forums.

#15Hodgman  Moderators

Posted 25 March 2013 - 07:39 PM

If you're in OpenGL land, cbuffers are called UBOs (a buffer object that holds uniforms).

A state group is specific to Hodgman's rendering system; it refers to a list of "pipeline state setters": commands that set things like cbuffers, blend state, shaders, textures, etc.

Yeah, I use the concept in my engine (described above), but I picked it up from numerous other engines that also used the same idea.

Also, in D3D9/10 there are API-level "state blocks", which are similar (a container for certain Set<State> commands), but I don't use these.

Posted 26 March 2013 - 01:42 AM

Thanks guys..

A cbuffer (constant buffer) is how the HLSL shader language handles variables (the ones that will be constant during the whole vertex/pixel/any shader program's processing, like world/view/projection matrices, color, anything you set for a specific vertex buffer and draw call). They're like a struct you create in your shader file, bound to a specific register "index". Check the DX documentation for more details.

So where I just set things like the world-view matrix and view-projection matrix separately, a cbuffer means you can bundle all that up into one structure and send the whole thing to the shader - I see...

Also, in D3D9/10 there are API-level "state blocks", which are similar (a container for certain Set<State> commands), but I don't use these.

Thanks, got it now...  back to the story, gents

Posted 26 March 2013 - 08:54 AM

Cbuffers are sm4.0+ only aren't they? Meaning DX10+?

#18Hodgman  Moderators

Posted 26 March 2013 - 08:58 AM

Cbuffers are sm4.0+ only aren't they? Meaning DX10+?

Yes. On SM3, I emulate them, which is actually more efficient for me than using something like the Microsoft Effects framework to manage my constants...

#19Weton  Members

Posted 27 March 2013 - 08:06 AM

But how do you update the cbuffers if an object moves/is animated, etc.?

The cbuffer could be part of the object, but this would force all types of objects to have a common layout.

Or do you have some kind of offset or pointer array from which you copy the data into the buffers before you draw the objects? This would allow having different data layouts, but would cause lots of copying of data...

#20Hodgman  Moderators

Posted 27 March 2013 - 08:34 AM

If data is being generated by the engine and then input into a shader, then yes, the shader and the engine need to agree upon the layout of that data.

For a simple object, this might be some standard cbuffer that just contains a 'world' matrix - this is basically the object's 'render transform component'. For an animated object it might be a cbuffer containing an array of bone matrices.

You might have another standardized cbuffer that contains the view-proj matrix that's owned by the camera, etc.

Not every cbuffer layout needs to be standardized like this, just ones that contain data that's generated by the engine, like matrices.

Often material settings (colours, texture scales, etc.) can be set just once, by looking up a variable's location within a cbuffer by name. In my engine, I build these kinds of material cbuffers offline using a tool (which reads the values from an artist's DAE file and matches them up with the cbuffer layout written in the shader code), but you can also do this at runtime when necessary.

Edited by Hodgman, 27 March 2013 - 08:46 AM.
