Should Materials Contain Corresponding Shader Programs?

Started by
19 comments, last by Quat 12 years, 9 months ago
You've got a shader with certain user inputs:

cbuffer Basic : register(b0)
{
    float4 diffuse = float4(1,1,1,1);
    float4 ambient = float4(1,1,1,1);
}

You make a human-readable material asset of some sort to configure those inputs:

[Material]
Name = myMaterial
Shader = myShader
diffuse = { 0.8, 0.5, 0.5 }


Make a general-purpose runtime structure that can describe any kind of material settings:

struct CBuffer
{
    u32 slot; // register index, e.g. 0 for b0 ("register" itself is a C++ keyword)
    u32 size;
    void* data;
};

struct Material
{
    const char* material;
    const char* shader;
    u32 cbufferCount;
    CBuffer* cbuffers;
};

Here's what a hard-coded version of the compiled material asset file would look like:

float b0data[] = { 0.8f, 0.5f, 0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f };
CBuffer buffers[] = { { 0, 32, b0data } };
Material myMaterial = { "myMaterial", "myShader", 1, buffers };

Except instead of hard-coding them, you'd load that data from a file (and a tool would compile the earlier text files into these binary files).
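As a rough illustration of what reading that compiled data back in might look like, here's a minimal C++ sketch. The record layout ([u32 slot][u32 size][payload bytes]) and the names `LoadedCBuffer`/`readCBufferRecord` are assumptions made up for the example, not the poster's actual file format:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// One cbuffer's worth of compiled material data.
struct LoadedCBuffer
{
    uint32_t slot;              // register index, e.g. 0 for b0
    std::vector<uint8_t> data;  // raw bytes to upload into the GPU cbuffer
};

// Parses one [slot][size][payload] record from a flat blob.
// Returns the number of bytes consumed.
size_t readCBufferRecord(const uint8_t* blob, LoadedCBuffer& out)
{
    uint32_t size = 0;
    std::memcpy(&out.slot, blob, sizeof(uint32_t));
    std::memcpy(&size, blob + 4, sizeof(uint32_t));
    out.data.assign(blob + 8, blob + 8 + size);
    return 8 + size;
}
```

A loader would loop this over the file contents, then hand each record's slot and bytes to the graphics API when the material is bound.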

[quote]
Here's what a hard-coded version of the compiled material asset file would look like:

float b0data[] = { 0.8f, 0.5f, 0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f };
CBuffer buffers[] = { { 0, 32, b0data } };
Material myMaterial = { "myMaterial", "myShader", 1, buffers };

Except instead of hard-coding them, you'd load that data from a file (and a tool would compile the earlier text files into these binary files).
[/quote]


Even though there are separate Material and Shader classes, unless I misunderstand, the material and shader are still related in that you need to reflect on the shader's cbuffer to make a generic memory chunk in the Material class that mirrors it. So with this, how do you handle special rendering passes?

For example, let's say in the main rendering pass you use some fancy shader Shader="Fancy" and fill in the material properties it needs into the generic structure that mirrors the cbuffers. Now suppose in the water reflection pass, you want to use a basic shader Shader="Basic" because the fancy effects will go unnoticed in the distorted reflection.

The data chunk allocated for the Fancy material properties is different from the cbuffer format the Basic shader wants. So how does the engine handle this? Does it map as many properties as possible to a Basic material at runtime, or does each drawable item store multiple materials (one for each possible pass the engine supports)?

Also, for textures, do you just keep an array of texture/slot pairs in your material?
-----Quat
I do pretty much what Hodgman is doing... a material is just a set of input constants and resources for a shader program. The shader provides the cbuffer, name, location, and size for every constant and resource, acquired through reflection.
This way, adding a new material is just adding a new shader HLSL file to the right folder; the code doesn't know anything about normal maps, tangent spaces and so on... it just has inputs to set and shader programs to enable.

For special passes such as shadows or the "fancy shader pass :P" I have a callback interface passed in a structure I call "RenderContext". If the callback is null, the mesh uses its material to draw itself; if the callback isn't null, it calls renderMesh(this); on the callback interface to request a third-party render strategy to handle the state setup.

Stefano Casillo
TWITTER: KunosStefano
AssettoCorsa - netKar PRO - Kunos Simulazioni


[quote]
For special passes such as shadows or the "fancy shader pass :P" I have a callback interface passed in a structure I call "RenderContext". If the callback is null, the mesh uses its material to draw itself; if the callback isn't null, it calls renderMesh(this); on the callback interface to request a third-party render strategy to handle the state setup.
[/quote]


That might work for me. I thought of another idea just now. I have an effect file system right now, so for the Fancy shader, I could write a separate FancySimple permutation that only uses a subset of the shader parameters it needs. This way the generic material blob still mirrors the shader cbuffer.
-----Quat
A couple more questions on setting parameters in the cbuffers....


[quote]
Here's what a hard-coded version of the compiled material asset file would look like:

float b0data[] = { 0.8f, 0.5f, 0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f };
CBuffer buffers[] = { { 0, 32, b0data } };
Material myMaterial = { "myMaterial", "myShader", 1, buffers };
[/quote]
So creating the float b0data array like in the above requires knowledge of how the cbuffer is laid out, which means that if someone swapped two parameters in a cbuffer, it would break the material loading code. Like if you are reading

diffuse = { 0.8, 0.5, 0.5 }

from a [Material] file, then you need to know where that element is in the cbuffer. Or do you always match names, so the [Material] element name is the same as the constant buffer element name and you can match them up?


[quote]
We also make a map of parameter names -> offsets in the constant buffer, so that we can set the values of dynamic properties.
[/quote]

So this is similar to the effects framework's "get variable by name"? Like if you wanted to update the world-view-projection matrix you would do something like:

UINT offset = myMap["worldViewProj"];
// Update cbuffer value?
-----Quat

[quote]
So with this, how do you handle special rendering passes?

For example, let's say in the main rendering pass you use some fancy shader Shader="Fancy" and ... in the water reflection pass, you want to use a basic shader Shader="Basic" because the fancy effects will go unnoticed in the distorted reflection.

The data chunk allocated for the Fancy material properties is different from the cbuffer format the Basic shader wants. So how does the engine handle this? Does it map as many properties as possible to a Basic material at runtime, or does each drawable item store multiple materials (one for each possible pass the engine supports)?
[/quote]
I'd bundle up "fancy" and "basic" into one "shader"/"effect"/"whateveryouwanttocallit", which has multiple different "techniques"/"passes"/"programs" inside it.
There's different terminology for this -- I'll say that an Effect is an object that contains multiple Passes. A Pass is an object that contains a program for each stage of the pipeline (e.g. pixel shader and vertex shader).

The Effect itself would have a description of the cbuffers that it uses. So perhaps it's got:

cbuffer FancyParameters : register(b0) {...}
cbuffer BasicParameters : register(b1) {...}

If a material is configured to use this Effect, then it will create and bind both of those cbuffers. Then, no matter which Pass from the effect is actually chosen, it will have its parameters available to it.
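A minimal sketch of that Effect/Pass arrangement might look like the following. The names `Pass`, `CBufferDesc`, and `findPass` are made up for illustration; the key point is that the cbuffer descriptions belong to the Effect and are shared by every Pass:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// One pipeline configuration inside an Effect (e.g. "fancy" or "basic").
struct Pass { std::string name; /* shader stage handles would live here */ };

// Description of one cbuffer the Effect expects (register slot + byte size).
struct CBufferDesc { uint32_t slot; uint32_t size; };

struct Effect
{
    std::vector<CBufferDesc> cbuffers; // shared by all passes
    std::vector<Pass> passes;

    // Returns the pass index by name, or -1 if the effect has no such pass.
    int findPass(const std::string& name) const
    {
        for (size_t i = 0; i < passes.size(); ++i)
            if (passes[i].name == name) return (int)i;
        return -1;
    }
};
```

Because a material binds every cbuffer in `Effect::cbuffers` up front, switching from the "fancy" pass to the "basic" pass needs no re-allocation of material data.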
[quote]
Also, for textures, do you just keep an array of texture/slot pairs in your material?
[/quote]
Pretty much, yep.

[quote]
So creating the float b0data array like in the above requires knowledge of how the cbuffer is laid out. So this means if someone swapped two parameters in a cbuffer it would break the material loading code. Like if you are reading

diffuse = { 0.8, 0.5, 0.5 }

from a [Material] file, then you need to know where that element is in the cbuffer. Or do you always match names, so the [Material] element name is the same name as the constant buffer element name so that you can match them up?
[/quote]
The "[Material]" file is a text file that uses names. This file is then compiled into a binary file that uses register numbers (e.g. above, FancyParameters becomes "b0", while BasicParameters becomes "b1") and offsets. When compiling the material files, the shader is inspected to resolve names and determine buffer sizes (e.g. diffuse is at byte offset 0, ambient at byte offset 16, etc...).
The material file is dependent on the shader file, so if someone modifies the shader file, the build system will automatically re-build the binary material files using that shader.
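One way that name-to-offset compile step could look is sketched below, assuming float4 (16-byte) packing. `MaterialCompiler` and the hard-coded offsets are illustrative assumptions; a real tool would fill the `offsets` map from shader reflection:

```cpp
#include <cstdint>
#include <cstring>
#include <map>
#include <string>
#include <vector>

// Build-time helper: resolves parameter names from the [Material] text file
// into byte offsets within the compiled cbuffer blob.
struct MaterialCompiler
{
    std::map<std::string, size_t> offsets; // from reflecting the shader
    std::vector<uint8_t> blob;             // compiled cbuffer contents

    // Writes a named parameter's floats into the compiled blob.
    // Returns false if the shader has no such parameter (a build error).
    bool set(const std::string& name, const float* src, size_t count)
    {
        auto it = offsets.find(name);
        if (it == offsets.end())
            return false;
        std::memcpy(blob.data() + it->second, src, count * sizeof(float));
        return true;
    }
};
```

Because the lookup happens at build time, swapping two parameters in the cbuffer just changes the offsets the tool emits; the runtime loading code never has to know the names.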
Sorry for more questions. I am trying to work out an example to convert my system to this generic material approach but am finding a problem. The material text file doesn't specify every map. For example, it doesn't mention the shadow map. But the corresponding shader will have:

Texture2D ShadowMapNameWhatever;

But when I reflect on the shader, all I will know is the name, type, and slot #. But this is not enough to tell me that it expects a shadow map, so I have no idea what to bind for this slot. Do you use a semantic system for these kinds of parameters?

Texture2D ShadowMapNameWhatever : SHADOWMAP;

There are other parameters like this too. For example, shadow cascade interval positions, and shadow cascade light space transforms. The material doesn't care about these, but the shader does.

Granted, these are scene level parameters (not per object), so I guess I could make a special perFrame cbuffer, and handle that specially.
-----Quat

[quote]
Sorry for more questions. I am trying to work out an example to convert my system to this generic material approach but am finding a problem. The material text file doesn't specify every map. For example, it doesn't mention the shadow map. But the corresponding shader will have:

Texture2D ShadowMapNameWhatever;

But when I reflect on the shader, all I will know is the name, type, and slot #. But this is not enough to tell me that it expects a shadow map, so I have no idea what to bind for this slot. Do you use a semantic system for these kinds of parameters?

Texture2D ShadowMapNameWhatever : SHADOWMAP;

There are other parameters like this too. For example, shadow cascade interval positions, and shadow cascade light space transforms. The material doesn't care about these, but the shader does.

Granted, these are scene level parameters (not per object), so I guess I could make a special perFrame cbuffer, and handle that specially.
[/quote]


The Hieroglyph 3 engine uses what I call a parameter system to match data provided by various parts of the engine to the data that a shader is found to require through reflection. This essentially matches by name and type of parameter: any object can write to the parameter system, and during rendering the proper data is read out and bound dynamically. If you want to check out a working implementation, just pull the latest copy of the repository from here.

Things to watch out for in the future would be providing multithreading support and ensuring that access to the system only occurs at the proper times (i.e. no modifying of parameters during the actual rendering pass).
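A toy sketch of a name-and-type-keyed parameter system along those lines (this is not Hieroglyph 3's actual code; `ParamType`, `write`, and `read` are illustrative names):

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

// Coarse categories a reflected shader parameter can have.
enum class ParamType { Vector, Matrix, ShaderResource };

// Producers (camera, lights, objects) write values in by name and type;
// at render time the renderer reads out whatever the shader requires.
struct ParameterSystem
{
    std::map<std::pair<std::string, ParamType>, std::vector<float>> values;

    void write(const std::string& name, ParamType t, std::vector<float> v)
    {
        values[{name, t}] = std::move(v);
    }

    // Returns the stored value, or nullptr if nothing matches name AND type.
    const std::vector<float>* read(const std::string& name, ParamType t) const
    {
        auto it = values.find({name, t});
        return it == values.end() ? nullptr : &it->second;
    }
};
```

Matching on both name and type is what lets a vector parameter named like a matrix parameter fail cleanly instead of binding garbage.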
I had similar confusion, but after looking at the Horde3D engine it became a lot clearer to me how to manage materials and shaders and the contexts in which they're used.

Take a look here.

Basically every mesh should have a material associated with it, and that material contains a shader/effect. It can also contain a number of uniforms and samplers which map to uniforms and samplers in the shader. Your material file can be whatever format you like; I used XML, but you could use something simpler like Hodgman's human-readable material asset structure that he mentioned.

As for the shadow mapping, I found the simplest option was to put the sampler into a "common" shader file which can be included by all other shaders, or by the ones that need the shadow map. Then when doing the material pass I check if the shader for the current material has the shadow map sampler defined, and if it does I bind the shadow map that was generated during the lighting pass. It is a tiny bit hardcoded, but I don't see it really being a problem for the case of shadows and some other special-case things.
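That check might look something like the sketch below, where the set of sampler names is assumed to come from reflecting the material's shader, and "ShadowMap" plus the int texture handles are illustrative assumptions:

```cpp
#include <map>
#include <set>
#include <string>
#include <vector>

// Matches sampler names a shader declares against a table of engine-owned
// resources (shadow map, etc.). Returns the names that were matched; a real
// renderer would bind the corresponding handle to the sampler's slot instead.
std::vector<std::string> bindEngineTextures(
    const std::set<std::string>& reflectedSamplers,
    const std::map<std::string, int>& engineTextures) // name -> texture handle
{
    std::vector<std::string> bound;
    for (const std::string& name : reflectedSamplers)
        if (engineTextures.count(name))
            bound.push_back(name);
    return bound;
}
```

Material-supplied textures would be bound first from the material's texture/slot pairs, with this pass filling in only the engine-provided slots.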
I thought I'd add to this conversation a quick description of the way that I handle these problems in my rendering pipeline, because it's slightly different to what everyone else seems to be doing.

To start, I use a fragment stitcher which, as well as stitching fragments together, generates some of the final shader code, including all of the constant and sampler definitions for the final shader. So I already know what parameters the final shader will need.

The shader fragments form a hierarchy of how they are stitched together. All of the actual data for the parameters is stored in the shader fragments themselves, so to change some of the parameter data, you parent a fragment with a new fragment, and override the parameter data in this new fragment. This makes the hierarchy structure very useful.

The material is nothing more than a list of rendering layers. Layers are just a name (z-only, opaque, distortion, blur, translucent, whatever-you-want, etc.) and shader fragment pointer pair. Later the rendering pipeline will fetch all visible objects with a particular layer defined. The fragments for these objects are then stitched with lighting (or other) fragments that represent the current lighting conditions for the individual objects.

Lights have fragments that store the light instance's data as well as the shader code to perform the lighting. The camera has fragments that perform projection into screen space. Scene objects also have fragments that transform them, perform animation, or decompress/de-normalize the mesh (unrelated to the material fragments).

All of these fragments are dynamically stitched together as needed and the compiled shader is cached, for fast lookup in dynamic conditions, but any combination of fragments can be compiled at runtime if it is needed. When rendering, the shader parameter data is read from the fragment hierarchy and copied over to the gpu.
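Under one reading of that scheme, the parameter-override lookup through the fragment hierarchy might look like this sketch (the `Fragment` structure and float-only parameters are simplifying assumptions):

```cpp
#include <map>
#include <string>

// A shader fragment holding its own parameter data. Overriding a parameter
// means creating a child fragment whose lookup shadows the parent's value,
// because find() starts at the child and walks up the parent chain.
struct Fragment
{
    const Fragment* parent = nullptr;
    std::map<std::string, float> params;

    // Returns the nearest override of a parameter, or nullptr if unset.
    const float* find(const std::string& name) const
    {
        for (const Fragment* f = this; f != nullptr; f = f->parent)
        {
            auto it = f->params.find(name);
            if (it != f->params.end())
                return &it->second;
        }
        return nullptr;
    }
};
```

At render time, walking each reflected parameter through this lookup and memcpy-ing the result to the GPU would give the "read from the fragment hierarchy and copy over" step described above.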

Sorry about the rushed description, I've left out a lot of details :(

This topic is closed to new replies.
