
Should Materials Contain Corresponding Shader Programs?



#1 Quat   Members   -  Reputation: 403


Posted 19 July 2011 - 03:23 PM

I am working on a material system and am finding that I am coupling a material to various shader programs (its main shader program, but also others like a depth-only pass). So I have a base Material class and then derive from it--for example, NormalMappedMaterial will have additional properties like a normal map texture.

Then I assign a Material to a mesh, and do something like:

void Mesh::Draw()
{
// Draws using NormalMappedMaterial which has access to the normal map and NormalMappedShader
m_material->Draw( this->GeometryBuffers );
}

I'm not sure if this is considered bad design or not. I tried separating my shader programs from materials, but I can't really get nice polymorphic behavior. If a mesh has a base Material* pointing to a NormalMappedMaterial and a base ShaderProgram* pointing to a NormalMappedShader, I need to somehow get the normal map from the material and set it on the shader. However, at the base Material level, the material does not know it has a normal map, and at the base ShaderProgram level, it does not know it needs a normal map.
-----Quat


#2 Nanoha   Members   -  Reputation: 296


Posted 19 July 2011 - 04:40 PM

Take a look at how Ogre3d does it: http://www.ogre3d.org/docs/manual/manual_14.html You really need to use it to get what's going on, but just taking a look might give you some ideas. It supports things like inheritance. Materials "have" shaders. You could make a normal-mapped base material (which uses a certain normal mapping shader) and then inherit from it with different textures.

#3 MJP   Moderators   -  Reputation: 10545


Posted 20 July 2011 - 12:30 AM

You may want to re-evaluate whether you really gain anything by having an inheritance-based class structure representing a library of materials. I look at materials as just being pure data: they have some shaders, some textures, and some constants. An application doesn't really need to know or care what specific meaning those things have (except in a few rare cases); it just needs to know how to make sure all of the material's resources get properly bound to the GPU pipeline whenever something needs to be rendered with that material. This way, if you add some fancy new specular environment map, you don't need any special code for setting that map; you just make sure that texture (and all the others) gets bound to the right slot and let the shader do what it needs to with it.
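
A minimal sketch of that idea in C++ (illustrative names, not MJP's actual code):

#include <string>
#include <vector>

// A material is just data: a shader, some textures, some constants.
// No per-material-type classes; only the shader knows what a given slot "means".
struct TextureBinding
{
    unsigned slot;        // shader register to bind to
    std::string texture;  // handle/path of the texture resource
};

struct Material
{
    std::string shader;                    // which shader program to use
    std::vector<TextureBinding> textures;  // bound by slot, meaning opaque to the engine
    std::vector<float> constants;          // raw constant data, laid out as the shader expects
};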

#4 rdragon1   Crossbones+   -  Reputation: 1186


Posted 20 July 2011 - 01:13 AM

In my engine I've chosen to be a little more flexible and completely decoupled Materials and Shaders. The link between them is by parameter name (hash).

Materials are just a list of shader parameter values, so there's a parameter name and a value.
Shaders are shader code plus a list of supported parameters (by name).

A Mesh has a reference to a Shader and a Material. This is the "default" way the mesh gets drawn. Each Mesh also has a private Material that usually holds zero to a small number of parameters ("per-mesh overrides").

The renderer doesn't draw Meshes, it draws DrawItems. DrawItems reference a Mesh, two Materials, and a Shader. This lets gameplay code easily swap the shader or Material that's used to draw a Mesh (when you 'queue' the Mesh, just point at a different Shader, done) while still being able to pass on the material parameters. Very handy for doing shadow map passes without special support, and it also lets gameplay code swap the shader/material or just animate per-mesh material parameters. There are two Material references in the DrawItem: usually the one the Mesh referenced, and then the private Mesh Material that has the overrides.

There's another Material the engine manages; it gets applied before the other two (so the two DrawItem materials can override it) and contains the World, View, etc. matrices as well as other global stuff like Time.
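
A rough sketch of that arrangement (hypothetical names, not rdragon1's actual code):

#include <array>
#include <cstdint>
#include <unordered_map>

using ParamHash = std::uint32_t;        // hashed parameter name
using Float4    = std::array<float, 4>;

// A Material is just named parameter values (float4s only, for brevity).
struct Material
{
    std::unordered_map<ParamHash, Float4> params;
};

struct Shader;  // shader code + list of supported parameter names

struct Mesh
{
    Shader*   defaultShader;
    Material* defaultMaterial;
    Material  overrides;       // the small private "per-mesh overrides" material
};

// The renderer consumes DrawItems, not Meshes, so gameplay code can swap the
// shader for a pass (e.g. shadow maps) while the parameters still flow through.
struct DrawItem
{
    Mesh*     mesh;
    Shader*   shader;          // usually mesh->defaultShader, swappable per pass
    Material* materials[2];    // usually { mesh->defaultMaterial, &mesh->overrides }
};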


#5 Krohm   Crossbones+   -  Reputation: 3015


Posted 20 July 2011 - 07:33 AM

To elaborate a bit on what MJP wrote:
Let's start easy by assuming all the data is constant.
A generic shader will then need a bunch of FLOAT4, INT4, BOOLs (thinking in D3D9 terms) and textures to be bound. Ideally, you can think of it as a small uniform buffer to be fetched somehow (in D3D10 terms). Textures are slightly different as they work by pointer (by uint in GL) and might need special settings on the corresponding tex unit, but ideally all the shaders will consume resources from those pools.
Just ask the shader how many FLOAT4 registers it needs. Pull them out and send them to the card. Repeat for INT4 and BOOL, and similarly for textures. If you put the correct values in the correct slots when the shader object is built (hopefully you'll have more information available here), the whole procedure can be made opaque.

When it comes to D3D10 (or modern GL) you probably don't even need to keep track of the associated types, as long as the buffer layout is respected. Except for textures, which still go through a different route.
Things get a fair bit more complicated if those values are supposed to be dynamic.
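
In D3D9 terms, the pulling-and-sending might look like this (a sketch; the register counts would really come from the shader's metadata):

#include <d3d9.h>

// Flat pools of FLOAT4/INT4/BOOL values, filled from the material.
struct ConstantPools
{
    const float* float4s;   // numFloat4 * 4 floats
    const int*   int4s;     // numInt4 * 4 ints
    const BOOL*  bools;     // numBool BOOLs
    UINT numFloat4, numInt4, numBool;
};

// Ask the shader how many registers of each type it needs, then copy that
// many out of the pools and send them to the card.
void BindConstants(IDirect3DDevice9* dev, const ConstantPools& p)
{
    dev->SetPixelShaderConstantF(0, p.float4s, p.numFloat4);
    dev->SetPixelShaderConstantI(0, p.int4s,   p.numInt4);
    dev->SetPixelShaderConstantB(0, p.bools,   p.numBool);
}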

#6 Quat   Members   -  Reputation: 403


Posted 20 July 2011 - 11:39 AM

Thanks for the replies.

MJP
I look at materials as just being pure data: they have some shaders, some textures, and some constants. An application doesn't really need to know or care what specific meaning those things have (except in a few rare cases); it just needs to know how to make sure all of the material's resources get properly bound to the GPU pipeline whenever something needs to be rendered with that material.


I have a few questions about this. First, is your "material" a fat structure that has data members for every kind of parameter? Or a dynamic list of key/value pairs? Otherwise, without a base class and inheritance, how would this work? You might have DefaultMaterial, NormalMappedMaterial, GlassMaterial, etc., each having different properties. I'm assuming you have some object like RenderableObject that has a mesh and material.

Second, let's say you are rendering a mesh with some material. How do you bind the material values to the pipeline? Do you reflect on the shader to look at its parameter list? Do you use "annotations" for this? For example, reflect on the shader, find it has a texture parameter bound to slot s with annotation "NORMALMAP", then pick the normal map SRV from your material and bind it?

If so, this seems nice, but an expensive mapping to do at runtime. Also, I still wonder about my first question, if the Material struct is fat.

rdragon1
Materials are just a list of shader parameter values, so there's a parameter name and a value.
Shaders are shader code plus a list of supported parameters (by name).


So for the material class, it is like

class Material
{
    std::map<std::string, Resource*> params;
    ...
};

?

Just ask the shader how many FLOAT4 registers it needs. Pull them out and send them to the card. Repeat for INT4 and BOOL, and similarly for textures. If you put the correct values in the correct slots when the shader object is built (hopefully you'll have more information available here), the whole procedure can be made opaque.


So basically, at initialization, reflect on each shader and store some info about the parameters it takes. Then at runtime, loop over the material properties, bind them to the shader, and hope the material has everything it needs.

Now, how do you do the parameter matching from shader slot to material entry? I suppose you could use an annotation and match strings, but this sounds expensive to do at runtime?
-----Quat

#7 MJP   Moderators   -  Reputation: 10545


Posted 20 July 2011 - 01:58 PM

At work we have a material build pipeline where we reflect the shaders to get out any necessary info. Material parameters get put in their own constant buffer in the shader, so we just reflect the constant buffer to find out the proper offset for each individual material parameter. We then create a memory block matching the memory layout of that constant buffer, then at runtime we create a constant buffer with the appropriate size and just copy the data into it. That way setting parameters just becomes memcpy + binding a constant buffer. We also make a map of parameter names -> offsets in the constant buffer, so that we can set the values of dynamic properties. For textures we just reflect the index it needs to be bound at, and then at runtime we just bind the textures to those slots.

If you move all of the reflection and looking up constants/slots stuff to preprocessing, it all becomes very quick and efficient at runtime.
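
For reference, the build-time reflection step might look roughly like this with D3D11's reflection API (a sketch, not MJP's actual pipeline; the cbuffer name "MaterialParams" is illustrative):

#include <cstddef>
#include <d3d11shader.h>
#include <d3dcompiler.h>   // D3DReflect
#include <map>
#include <string>

// Build-time: reflect the material cbuffer to record its total size and the
// byte offset of each parameter, so runtime is just memcpy + bind.
struct MaterialLayout
{
    UINT sizeInBytes = 0;
    std::map<std::string, UINT> offsets;  // parameter name -> byte offset
};

MaterialLayout ReflectMaterialCBuffer(const void* bytecode, size_t length)
{
    MaterialLayout layout;
    ID3D11ShaderReflection* reflector = nullptr;
    if (FAILED(D3DReflect(bytecode, length, IID_ID3D11ShaderReflection,
                          (void**)&reflector)))
        return layout;

    ID3D11ShaderReflectionConstantBuffer* cb =
        reflector->GetConstantBufferByName("MaterialParams");
    D3D11_SHADER_BUFFER_DESC cbDesc;
    if (SUCCEEDED(cb->GetDesc(&cbDesc)))
    {
        layout.sizeInBytes = cbDesc.Size;
        for (UINT i = 0; i < cbDesc.Variables; ++i)
        {
            D3D11_SHADER_VARIABLE_DESC varDesc;
            cb->GetVariableByIndex(i)->GetDesc(&varDesc);
            layout.offsets[varDesc.Name] = varDesc.StartOffset;
        }
    }
    reflector->Release();
    return layout;
}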

#8 Quat   Members   -  Reputation: 403


Posted 20 July 2011 - 03:08 PM

We then create a memory block matching the memory layout of that constant buffer, then at runtime we create a constant buffer with the appropriate size and just copy the data into it.


So you have a custom struct matching the memory layout of the corresponding constant buffer?
-----Quat

#9 CornyKorn   Members   -  Reputation: 476


Posted 20 July 2011 - 04:33 PM


We then create a memory block matching the memory layout of that constant buffer, then at runtime we create a constant buffer with the appropriate size and just copy the data into it.


So you have a custom struct matching the memory layout of the corresponding constant buffer?


What it sounds like to me is they allocate memory and fill in the data using memcpy and offsets into the allocated memory based on the constant buffer layout they determined in the build pipeline. Would avoid creating custom structs... I believe.
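
Something like this, in other words (a sketch; the layout data would come from the build pipeline):

#include <cstring>
#include <map>
#include <string>
#include <vector>

// Recorded at build time: the cbuffer's total size and each parameter's
// byte offset. No per-material struct needed.
struct Layout
{
    unsigned sizeInBytes;
    std::map<std::string, unsigned> offsets;  // name -> byte offset
};

// One raw block per material instance, filled via memcpy at offsets.
void SetParam(const Layout& layout, std::vector<unsigned char>& block,
              const std::string& name, const void* value, size_t size)
{
    auto it = layout.offsets.find(name);
    if (it != layout.offsets.end())
        std::memcpy(block.data() + it->second, value, size);
}

// Usage: block.resize(layout.sizeInBytes); SetParam(layout, block, "diffuse", rgba, 16);
// The whole block is later copied into the GPU constant buffer in one go.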

#10 MJP   Moderators   -  Reputation: 10545


Posted 21 July 2011 - 01:42 AM


We then create a memory block matching the memory layout of that constant buffer, then at runtime we create a constant buffer with the appropriate size and just copy the data into it.


So you have a custom struct matching the memory layout of the corresponding constant buffer?


No, just a raw block of memory.

#11 Hodgman   Moderators   -  Reputation: 28452


Posted 21 July 2011 - 03:01 AM

You've got a shader with certain user inputs:
....
cbuffer Basic : register(b0)
{
  float4 diffuse = float4(1,1,1,1);
  float4 ambient = float4(1,1,1,1);
}
You make a human-readable material asset of some sort to configure those inputs:
[Material]
Name = myMaterial
Shader = myShader
diffuse = { 0.8, 0.5, 0.5 }

Make a general-purpose runtime structure that can describe any kind of material settings:
struct Material
{
  const char* name;
  const char* shader;
  u32 cbufferCount;
  CBuffer* cbuffers;
};
struct CBuffer
{
  u32 slot;   // cbuffer register index (b#) -- "register" is a reserved word in C++
  u32 size;
  void* data;
};
Here's what a hard-coded version of the compiled material asset file would look like:
float b0data[] = { 0.8f, 0.5f, 0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f };
CBuffer buffers[] = { { 0, 32, b0data } };
Material myMaterial = { "myMaterial", "myShader", 1, buffers };
Except instead of hard-coding them, you'd load that data from a file (and a tool would compile the earlier text files into these binary files).
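
Binding such a material at draw time can then stay generic, e.g. (a D3D11-flavoured sketch building on the structs above; the GPU-side buffers are assumed to be created elsewhere, one per material cbuffer):

#include <d3d11.h>

void BindMaterial(ID3D11DeviceContext* ctx, const Material& mat,
                  ID3D11Buffer** gpuBuffers)  // one GPU buffer per material cbuffer
{
    for (u32 i = 0; i < mat.cbufferCount; ++i)
    {
        const CBuffer& cb = mat.cbuffers[i];
        // Upload the material's raw block and bind it to the compiled-in slot.
        ctx->UpdateSubresource(gpuBuffers[i], 0, nullptr, cb.data, 0, 0);
        ctx->PSSetConstantBuffers(cb.slot, 1, &gpuBuffers[i]);
    }
}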

#12 Quat   Members   -  Reputation: 403


Posted 25 July 2011 - 10:45 AM

Here's what a hard-coded version of the compiled material asset file would look like:

float b0data[] = { 0.8f, 0.5f, 0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f };
CBuffer buffers[] = { { 0, 32, b0data } };
Material myMaterial = { "myMaterial", "myShader", 1, buffers };
Except instead of hard-coding them, you'd load that data from a file (and a tool would compile the earlier text files into these binary files).


Even though there are separate Material and Shader classes, unless I misunderstand, the material and shader are still related in that you need to reflect on the shader's cbuffer to make a generic memory chunk in the Material class that mirrors it. So with this, how do you handle special rendering passes?

For example, let's say in the main rendering pass you use some fancy shader Shader="Fancy" and fill in the material properties it needs into the generic structure that mirrors the cbuffers. Now suppose in the water reflection pass you want to use a basic shader Shader="Basic" because the fancy effects will go unnoticed in the distorted reflection.

The data chunk allocated for the Fancy material properties is different from the cbuffer format the Basic shader wants. So how does the engine handle this? Does it map as many properties as possible to a Basic material at runtime, or does each drawable item store multiple materials (one for each possible pass the engine supports)?

Also, for textures, do you just keep an array of texture/slot pairs in your material?
-----Quat

#13 kunos   Crossbones+   -  Reputation: 2203


Posted 25 July 2011 - 10:59 AM

I do pretty much what Hodgman is doing: a material is just a set of input constants and resources for a shader program. The shader provides the cbuffer, name, location and size for every constant and resource, acquired through reflection.
This way, adding a new material is just adding a new shader hlsl file to the right folder; the code doesn't know anything about normal maps, tangent spaces and so on. It just has inputs to set and shader programs to enable.

For special passes such as shadows or a "fancy shader pass :P" I have a callback interface passed in a structure I call "RenderContext". If the callback is null, the mesh uses its material to draw itself; if the callback isn't null, it calls "renderMesh(this);" on the callback interface to ask a third-party render strategy to handle the state setup.
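
A minimal sketch of that callback arrangement (hypothetical names, not kunos's actual code):

struct Mesh;

struct IRenderCallback
{
    virtual ~IRenderCallback() = default;
    virtual void renderMesh(Mesh* mesh) = 0;  // third-party render strategy
};

struct RenderContext
{
    IRenderCallback* callback = nullptr;  // null -> mesh draws with its own material
};

void Draw(Mesh* mesh, const RenderContext& ctx)
{
    if (ctx.callback)
        ctx.callback->renderMesh(mesh);   // e.g. a shadow pass sets its own state
    else
    {
        /* mesh draws itself using its own material */
    }
}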
Stefano Casillo
Lead Programmer
TWITTER: @KunosStefano
AssettoCorsa - netKar PRO - Kunos Simulazioni

#14 Quat   Members   -  Reputation: 403


Posted 25 July 2011 - 11:06 AM

For special passes such as shadows or a "fancy shader pass :P" I have a callback interface passed in a structure I call "RenderContext". If the callback is null, the mesh uses its material to draw itself; if the callback isn't null, it calls "renderMesh(this);" on the callback interface to ask a third-party render strategy to handle the state setup.


That might work for me. I thought of another idea just now. I have an effect file system right now, so for the Fancy shader I could write a separate FancySimple permutation that only uses a subset of the Fancy shader's parameters. This way the generic material blob still mirrors the shader cbuffer.
-----Quat

#15 Quat   Members   -  Reputation: 403


Posted 25 July 2011 - 04:52 PM

A couple more questions on setting parameters in the cbuffers....

Here's what a hard-coded version of the compiled material asset file would look like:
float b0data[] = { 0.8f, 0.5f, 0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f };
CBuffer buffers[] = { { 0, 32, b0data } };
Material myMaterial = { "myMaterial", "myShader", 1, buffers };

So creating the b0data array like in the above requires knowledge of how the cbuffer is laid out. So this means if someone swapped two parameters in a cbuffer it would break the material loading code. Like if you are reading
diffuse = { 0.8, 0.5, 0.5 } from a [Material] file, then you need to know where that element is in the cbuffer. Or do you always match names--so the [Material] element name is the same as the constant buffer element name, so that you can match them up?

We also make a map of parameter names -> offsets in the constant buffer, so that we can set the values of dynamic properties.


So this is similar to the effects framework's "get variable by name"? Like if you wanted to update the world-view-projection matrix you would do something like:

UINT offset = myMap["worldViewProj"];
// Update cbuffer value?
-----Quat

#16 Hodgman   Moderators   -  Reputation: 28452


Posted 25 July 2011 - 06:38 PM

So with this, how do you handle special rendering passes?

For example, let's say in the main rendering pass you use some fancy shader Shader="Fancy" and ... in the water reflection pass, you want to use a basic shader Shader="Basic" because the fancy effects will go unnoticed in the distorted reflection.

The data chunk allocated for the Fancy material properties is different from the cbuffer format the Basic shader wants. So how does the engine handle this? Does it map as many properties as possible to a Basic material at runtime, or does each drawable item store multiple materials (one for each possible pass the engine supports)?

I'd bundle up "fancy" and "basic" into one "shader"/"effect"/"whateveryouwanttocallit", which has multiple different "techniques"/"passes"/"programs" inside it.
There's different terminology for this -- I'll say that an Effect is an object that contains multiple Passes. A Pass is an object that contains a program for each stage of the pipeline (e.g. pixel shader and vertex shader).

The Effect itself would have a description of the cbuffers that it uses. So perhaps it's got:
cbuffer FancyParameters : register(b0) {...}
cbuffer BasicParameters : register(b1) {...}
If a material is configured to use this Effect, then it will create and bind both of those cbuffers. Then, no matter which Pass from the effect is actually chosen, it will have its parameters available to it.
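
A sketch of that terminology (illustrative names and types):

#include <vector>

using ShaderHandle = unsigned;  // stand-in for a real shader object

struct Pass            // one program per pipeline stage
{
    ShaderHandle vertexShader;
    ShaderHandle pixelShader;
};

struct CBufferDesc     // declared by the Effect, shared by all of its Passes
{
    unsigned slot;     // e.g. b0 for FancyParameters, b1 for BasicParameters
    unsigned size;     // bytes
};

struct Effect
{
    std::vector<Pass>        passes;    // e.g. { fancy, basic }
    std::vector<CBufferDesc> cbuffers;  // the material creates/binds all of these
};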

Also, for textures, do you just keep an array of texture/slot pairs in your material?

Pretty much, yep.

So creating the float* b0data like in the above requires knowledge of how the cbuffer is laid out. So this means if someone swapped two parameters in a cbuffer it would break the material loading code. Like if you are reading
diffuse = { 0.8, 0.5, 0.5 }from a [Material] file, then you need to know where that element is in the cbuffer. Or do you always match names--so the [Material] element name is the same name as the constant buffer element name so that you can match them up?

The "[Material]" file is a text file that uses names. This file is then compiled into a binary file that uses register numbers (e.g. above, FancyParameters becomes "b0", while BasicParameters becomes "b1"
) and offsets. When compiling the material files, the shader is inspected to resolve names and determine buffer sizes (e.g. diffuse is offset 0, ambient is offset 1, etc...)
The material file is dependent on the shader file, so if someone modifies the shader file, the build system will automatically re-build the binary material files using that shader.
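
Conceptually, the compile step is just a name-to-offset resolution pass over the text file; something like this sketch (illustrative types, not Hodgman's actual tool):

#include <cstring>
#include <map>
#include <string>
#include <vector>

// Reflected info for one shader cbuffer, produced by inspecting the shader.
struct ReflectedBuffer
{
    unsigned slot;                            // e.g. 0 for b0
    unsigned size;                            // bytes
    std::map<std::string, unsigned> offsets;  // name -> byte offset
};

// Compile one "name = value" pair from the [Material] text file into the
// binary blob for the right cbuffer. Because names are re-resolved here,
// reordering members in the shader can't silently break materials.
bool CompileParam(const std::vector<ReflectedBuffer>& buffers,
                  std::vector<std::vector<unsigned char>>& blobs,
                  const std::string& name, const float* value, size_t bytes)
{
    for (size_t b = 0; b < buffers.size(); ++b)
    {
        auto it = buffers[b].offsets.find(name);
        if (it == buffers[b].offsets.end())
            continue;
        blobs[b].resize(buffers[b].size);
        std::memcpy(blobs[b].data() + it->second, value, bytes);
        return true;
    }
    return false;  // unknown parameter: report a build error
}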

#17 Quat   Members   -  Reputation: 403


Posted 26 July 2011 - 12:28 PM

Sorry for more questions. I am trying to work out an example to convert my system to this generic material approach but am finding a problem. The material text file doesn't specify every map. For example, it doesn't mention the shadow map. But the corresponding shader will have:

Texture2D ShadowMapNameWhatever;

But when I reflect on the shader, all I will know is the name, type, and slot #. But this is not enough to tell me that it expects a shadow map, so I have no idea what to bind for this slot. Do you use a semantic system for these kinds of parameters?

Texture2D ShadowMapNameWhatever : SHADOWMAP;

There are other parameters like this too. For example, shadow cascade interval positions, and shadow cascade light space transforms. The material doesn't care about these, but the shader does.

Granted, these are scene level parameters (not per object), so I guess I could make a special perFrame cbuffer, and handle that specially.
-----Quat

#18 Jason Z   Crossbones+   -  Reputation: 4846


Posted 26 July 2011 - 03:34 PM

Sorry for more questions. I am trying to work out an example to convert my system to this generic material approach but am finding a problem. The material text file doesn't specify every map. For example, it doesn't mention the shadow map. But the corresponding shader will have:

Texture2D ShadowMapNameWhatever;

But when I reflect on the shader, all I will know is the name, type, and slot #. But this is not enough to tell me that it expects a shadow map, so I have no idea what to bind for this slot. Do you use a semantic system for these kinds of parameters?

Texture2D ShadowMapNameWhatever : SHADOWMAP;

There are other parameters like this too. For example, shadow cascade interval positions, and shadow cascade light space transforms. The material doesn't care about these, but the shader does.

Granted, these are scene level parameters (not per object), so I guess I could make a special perFrame cbuffer, and handle that specially.


The Hieroglyph 3 engine uses what I call a parameter system to match the data provided by various parts of the engine to the data that a shader is found (through reflection) to require. It essentially matches by parameter name and type: any object can write to the parameter system, and during rendering the proper data is read out and applied dynamically. If you want to check out a working implementation, just pull the latest copy of the repository from here.

Things to watch out for in the future would be providing for multithreading support and ensuring that the system is only accessed at the proper times (i.e. no modifying of parameters during the actual rendering pass).
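
The core of such a parameter system might be sketched as follows (an illustration, not Hieroglyph 3's actual code):

#include <map>
#include <string>
#include <tuple>

// Parameters are keyed by name *and* type, so a texture named "Shadow" and a
// matrix named "Shadow" can coexist without colliding.
enum class ParamType { Vector, Matrix, Texture };

struct ParamKey
{
    std::string name;
    ParamType   type;
    bool operator<(const ParamKey& o) const
    {
        return std::tie(name, type) < std::tie(o.name, o.type);
    }
};

class ParameterSystem
{
public:
    // Any subsystem (lights, camera, scene) writes values in...
    void Set(const ParamKey& key, const void* value) { m_values[key] = value; }

    // ...and the renderer reads them out when a shader's reflected
    // parameter list asks for the same name and type.
    const void* Get(const ParamKey& key) const
    {
        auto it = m_values.find(key);
        return it != m_values.end() ? it->second : nullptr;
    }

private:
    std::map<ParamKey, const void*> m_values;
};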

#19 Shael   Members   -  Reputation: 277


Posted 27 July 2011 - 01:51 AM

I had similar confusion but after looking at the Horde3D engine it became a lot clearer to me how to manage materials and shaders and the contexts to which they're used.

Take a look here.

Basically every mesh should have a material associated with it, and that material contains a shader/effect. It can also contain a number of uniforms and samplers which map to uniforms and samplers in the shader. Your material file can be whatever format you like; I used XML, but you could use something simpler, like the human-readable material asset structure Hodgman mentioned.


As for the shadow mapping, I found the simplest option was to put the sampler into a "common" shader file which can be included by all other shaders, or by the ones that need the shadow map. Then when doing the material pass I check if the shader for the current material has the shadow map sampler defined, and if it does I bind the shadow map that was generated during the lighting pass. It is a tiny bit hardcoded, but I don't see it really being a problem for the case of shadows and some other special-case things.
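
That check can be as small as this (a sketch; the engine calls and the sampler name are hypothetical):

// After reflecting the current material's shader, bind the shadow map only
// if the shared sampler from the "common" include is actually present.
const int slot = shader->FindSamplerSlot("shadowMap");   // from reflection data
if (slot >= 0)
    context->BindTexture(slot, lightingPass.shadowMap);  // generated earlier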

#20 thefries   Members   -  Reputation: 103


Posted 27 July 2011 - 11:03 AM

I thought I'd add to this conversation a quick description of the way that I handle these problems in my rendering pipeline - because it's slightly different to what everyone else seems to be doing.

To start, I use a fragment stitcher which, as well as stitching fragments together, generates some of the final shader code, including all of the constant and sampler definitions for the final shader. So I already know what parameters the final shader will need.

The shader fragments form a hierarchy of how they are stitched together. All of the actual data for the parameters is stored in the shader fragments themselves, so to change some of the parameter data, you parent a fragment with a new fragment, and override the parameter data in this new fragment. This makes the hierarchy structure very useful.

The material is nothing more than a list of rendering layers. Layers are just a pair of a name (z-only, opaque, distortion, blur, translucent, whatever-you-want, etc.) and a shader fragment pointer. Later the rendering pipeline will fetch all visible objects with a particular layer defined. The fragments for these objects are then stitched with lighting (or other) fragments that represent the current lighting conditions for the individual objects.

Lights have fragments that store the light instance's data as well as the shader code to perform the lighting. The camera has fragments that perform projection into screen space. Scene objects also have fragments that transform them, perform animation, or decompress/de-normalize the mesh (unrelated to the material fragments).

All of these fragments are dynamically stitched together as needed and the compiled shader is cached for fast lookup in dynamic conditions, but any combination of fragments can be compiled at runtime if it is needed. When rendering, the shader parameter data is read from the fragment hierarchy and copied over to the GPU.

Sorry about the rushed description, I've left out a lot of details :(
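
One way to picture the override-by-parenting idea described above (an illustrative sketch, not thefries' actual stitcher):

#include <string>
#include <unordered_map>

// A fragment resolves a parameter locally first, then falls back to its
// parent, so parenting a fragment overrides the data beneath it.
struct Fragment
{
    const Fragment* parent = nullptr;
    std::unordered_map<std::string, float> params;

    const float* Find(const std::string& name) const
    {
        auto it = params.find(name);
        if (it != params.end())
            return &it->second;
        return parent ? parent->Find(name) : nullptr;
    }
};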



