Should Materials Contain Corresponding Shader Programs?

Started by Quat. 19 comments, last by Quat 12 years, 8 months ago
I am working on a material system and am finding that I am coupling each material to various shader programs (its main shader program, but also others like a depth-only pass). So I have a base Material class and then derive from it; for example, NormalMappedMaterial has additional properties like a normal map texture.

Then I assign a Material to a mesh, and do something like:

void Mesh::Draw()
{
    // Draws using NormalMappedMaterial, which has access to the normal map and NormalMappedShader
    m_material->Draw( this->GeometryBuffers );
}

I'm not sure if this is considered bad design or not. I tried separating my shader programs from materials, but I can't really get nice polymorphic behavior. If a mesh has a base Material* pointing to a NormalMappedMaterial and a base ShaderPrograms* pointing to a NormalMappedShader, I need to somehow get the normal map from the material and set it to the shader. However, at the base Material level, the material does not know it has a normal map, and at the base ShaderProgram level, it does not know it needs a normal map.
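
To make it concrete, the coupled setup looks roughly like this (heavily simplified; the stub types at the top are just stand-ins so the sketch is self-contained):

#include <string>

// Minimal stand-ins for the real engine types (hypothetical, just for illustration).
class Texture {};
class GeometryBuffers { public: void Draw() {} };
class NormalMappedShader
{
public:
    void Bind() {}
    void SetTexture(const std::string&, Texture*) {}
};

// The coupled design: each concrete material knows its shader and how to bind itself.
class Material
{
public:
    virtual ~Material() {}
    virtual void Draw(GeometryBuffers* geometry) = 0;
};

class NormalMappedMaterial : public Material
{
public:
    NormalMappedMaterial(NormalMappedShader* shader, Texture* normalMap)
        : m_shader(shader), m_normalMap(normalMap) {}

    virtual void Draw(GeometryBuffers* geometry)
    {
        m_shader->Bind();
        m_shader->SetTexture("NormalMap", m_normalMap); // only this subclass knows about the normal map
        geometry->Draw();
    }

private:
    NormalMappedShader* m_shader;   // coupled to a specific shader program
    Texture*            m_normalMap;
};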
-----Quat
Take a look at how Ogre3d does it: http://www.ogre3d.org/docs/manual/manual_14.html You really need to use it to get what's going on, but just taking a look might give you some ideas. It supports things like inheritance. Materials "have" shaders. You could make a normal-mapped base material (which uses a certain normal mapping shader) and then inherit from it with different textures.


You may want to re-evaluate whether you really gain anything by having an inheritance-based class structure representing a library of materials. I look at materials as just being pure data: they have some shaders, some textures, and some constants. An application doesn't really need to know or care what specific meaning those things have (except in a few rare cases); it just needs to know how to make sure all of the material's resources get properly bound to the GPU pipeline whenever something needs to be rendered with that material. This way, if you add some fancy new specular environment map, you don't need any special code for setting that map; you just make sure that texture (and all the others) gets bound to the right slot and let the shader do what it needs to with it.
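
Roughly speaking, a material can then be nothing more than a plain data struct, something like this (heavily simplified; the names and handle types here are just illustrative):

#include <cstdint>
#include <vector>

// Hypothetical handles standing in for real GPU resources.
typedef std::uint32_t ShaderHandle;
typedef std::uint32_t TextureHandle;

struct TextureBinding
{
    std::uint32_t slot;      // which slot the shader expects this texture in
    TextureHandle texture;   // normal map, specular env map, etc.; the app doesn't care which
};

struct MaterialData              // pure data: shaders, textures, constants
{
    ShaderHandle                shader;     // plus a depth-only shader etc. if needed
    std::vector<TextureBinding> textures;   // bound by slot, meaning known only to the shader
    std::vector<std::uint8_t>   constants;  // raw constant data, copied into a constant buffer as-is
};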
In my engine I've chosen to be a little more flexible and completely decoupled Materials and Shaders. The link between them is by parameter name (hash)

Materials are just a list of shader parameter values, so there's a parameter name, and value
Shaders are shader code and a list of supported parameters (by name)

A Mesh has a reference to a Shader and a Material. This is the "default" way the mesh gets drawn. Each Mesh also has a private Material that usually holds zero to a small number of parameters ("per-mesh overrides")

The renderer doesn't draw Meshes, it draws DrawItems. DrawItems reference a Mesh, two Materials, and a Shader. This lets gameplay code easily swap the Shader or Material that's used to draw a Mesh (when you 'queue' the Mesh, just point at a different Shader, done) while still being able to pass on the material parameters. It's very handy for doing shadow map passes without special support, and it also lets gameplay code swap the shader/material or just animate per-mesh material parameters. There are two Material references in the DrawItem: usually the one the Mesh referenced, plus the private Mesh Material that has the overrides.

There's another Material the engine manages; it gets applied before the other two (and the two DrawItem materials can therefore override it) and contains the World, View, etc. matrices as well as other global stuff like Time.
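
In terms of data it boils down to something like this (just a sketch; the names are illustrative rather than the actual engine types):

#include <cstdint>
#include <map>
#include <vector>

typedef std::uint32_t NameHash;        // hash of the parameter name links Materials to Shader parameters

struct ParamValue
{
    std::vector<float> data;           // could also be a texture handle; simplified here
};

struct Material                        // just a bag of parameter values, keyed by name hash
{
    std::map<NameHash, ParamValue> params;
};

struct Shader                          // shader code plus the parameters it supports, by name hash
{
    std::vector<std::uint8_t> bytecode;
    std::vector<NameHash>     supportedParams;
};

struct Mesh;                           // geometry, plus its default Shader/Material and a private override Material

struct DrawItem                        // what the renderer actually consumes
{
    const Mesh*     mesh;
    const Shader*   shader;            // swap this when queuing to do e.g. a shadow map pass
    const Material* materials[2];      // usually the mesh's shared Material plus its per-mesh override
};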
To elaborate a bit on what MJP wrote:
Let's start easy by assuming all the data is constant.
A generic shader will then need a bunch of FLOAT4, INT4, and BOOL constants (thinking in D3D9 terms) plus textures to be bound. Ideally, you can think of it as a small uniform buffer to be fetched somehow (in D3D10 terms). Textures are slightly different as they work by pointer (by uint in GL) and might need special settings on the corresponding tex unit, but ideally all the shaders will consume resources from those pools.
Just ask the shader how many FLOAT4 registers it needs. Pull them out and send them to the card. Repeat for INT4 and BOOL, and similarly for textures. If you put the correct values in the correct slots when the shader object is built (hopefully you'll have more information available here), the whole procedure can be made opaque.

When it comes to D3D10 (or modern GL) you probably don't even need to keep track of the associated types, as long as the buffer layout is respected. Except for textures, which still go through a different route.
Things get a fair bit more complicated if those values are supposed to be dynamic.
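
In D3D9 terms the upload can then be as simple as something like this (a sketch only; it assumes the counts below were gathered when the shader object was built, and that each pool starts at register 0):

#include <d3d9.h>

// Filled in when the shader object is built (via reflection or offline tools); assumed here.
struct ShaderInfo
{
    UINT numFloat4;   // how many FLOAT4 registers the shader consumes
    UINT numInt4;
    UINT numBool;
    UINT numTextures;
};

// Pools of values the material has already laid out in register order (also an assumption).
struct MaterialPools
{
    const float*            float4Data;   // numFloat4 * 4 floats
    const int*              int4Data;     // numInt4 * 4 ints
    const BOOL*             boolData;     // numBool BOOLs
    IDirect3DBaseTexture9** textures;     // numTextures texture pointers
};

void BindMaterial(IDirect3DDevice9* device, const ShaderInfo& info, const MaterialPools& pools)
{
    // Pull the requested number of registers out of each pool and send them to the card.
    if (info.numFloat4) device->SetVertexShaderConstantF(0, pools.float4Data, info.numFloat4);
    if (info.numInt4)   device->SetVertexShaderConstantI(0, pools.int4Data,   info.numInt4);
    if (info.numBool)   device->SetVertexShaderConstantB(0, pools.boolData,   info.numBool);

    // Textures go through a different route: one call per sampler stage.
    for (UINT stage = 0; stage < info.numTextures; ++stage)
        device->SetTexture(stage, pools.textures[stage]);
}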

Previously "Krohm"

Thanks for the replies.


[quote name='MJP']
I look at materials as just being pure data: they have some shaders, some textures, and some constants. An application doesn't really need to know or care what specific meaning those things have (except in a few rare cases), it just needs to know how to make sure all of the materials resources get properly bound to the GPU pipeline whenever something needs to be rendered in that material.
[/quote]

I have a few questions about this. First, is your "material" a fat structure that has data members for every kind of parameter? Or a dynamic list of key/value pairs? Otherwise, without a base class and inheritance, how would this work? You might have DefaultMaterial, NormalMappedMaterial, GlassMaterial, etc., each having different properties. I'm assuming you have some object like RenderableObject that has a mesh and a material.

Second, let's say you are rendering a mesh with some material. How do you bind the material values to the pipeline? Do you reflect on the shader to look at its parameter list? Do you use "annotations" for this? For example, reflect on the shader, find it has a texture parameter bound to slot s with annotation "NORMALMAP", then pick the normal map SRV from your material and bind it?

If so, this seems nice, but like an expensive mapping to do at runtime. Also, I still wonder about my first question, whether the Material struct is fat.


[quote name='rdragon1']
Materials are just a list of shader parameter values, so there's a parameter name, and value
Shaders are shader code and a list of supported parameters (by name)
[/quote]

So the material class would be something like

class Material
{
    map<string, resource*> params;
    ...
};

?


[quote]
Just ask the shader how many FLOAT4 registers it needs. Pull them out and send them to the card. Repeat for INT4 and BOOL, and similarly for textures. If you put the correct values in the correct slots when the shader object is built (hopefully you'll have more information available here), the whole procedure can be made opaque.
[/quote]

So basically, at initialization, reflect on each shader and store some info about the parameters it takes. Then at runtime, loop over the material properties and bind them to the shader, and hope the material has everything it needs.

Now, how do you do the parameter matching from shader slot to material entry? I suppose you could use an annotation and match strings, but this sounds expensive to do at runtime?
-----Quat
At work we have a material build pipeline where we reflect the shaders to get out any necessary info. Material parameters get put in their own constant buffer in the shader, so we just reflect the constant buffer to find out the proper offset for each individual material parameter. We then create a memory block matching the memory layout of that constant buffer, then at runtime we create a constant buffer with the appropriate size and just copy the data into it. That way setting parameters just becomes memcpy + binding a constant buffer. We also make a map of parameter names -> offsets in the constant buffer, so that we can set the values of dynamic properties. For textures we just reflect the slot each one needs to be bound at, and then at runtime we bind the textures to those slots.

If you move all of the reflection and looking up constants/slots stuff to preprocessing, it all becomes very quick and efficient at runtime.
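
As a rough sketch of that build-time reflection step in D3D11 (not the actual pipeline code; the "MaterialParams" buffer name and the structs here are just illustrative):

#include <d3d11shader.h>
#include <d3dcompiler.h>
#include <map>
#include <string>
#pragma comment(lib, "d3dcompiler.lib")

// Build-time step: reflect the shader and record the material constant buffer layout.
struct MaterialLayout
{
    UINT size;                                  // total byte size of the constant buffer
    std::map<std::string, UINT> paramOffsets;   // parameter name -> byte offset into the buffer
};

bool ReflectMaterialLayout(const void* bytecode, size_t bytecodeSize, MaterialLayout& layout)
{
    ID3D11ShaderReflection* reflector = NULL;
    if (FAILED(D3DReflect(bytecode, bytecodeSize, IID_ID3D11ShaderReflection,
                          reinterpret_cast<void**>(&reflector))))
        return false;

    // "MaterialParams" is an assumed naming convention for the per-material constant buffer.
    ID3D11ShaderReflectionConstantBuffer* cb = reflector->GetConstantBufferByName("MaterialParams");
    D3D11_SHADER_BUFFER_DESC cbDesc;
    if (FAILED(cb->GetDesc(&cbDesc)))
    {
        reflector->Release();
        return false;
    }

    layout.size = cbDesc.Size;
    for (UINT i = 0; i < cbDesc.Variables; ++i)
    {
        D3D11_SHADER_VARIABLE_DESC varDesc;
        cb->GetVariableByIndex(i)->GetDesc(&varDesc);
        layout.paramOffsets[varDesc.Name] = varDesc.StartOffset;   // byte offset of this parameter
    }

    reflector->Release();
    return true;
}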

[quote name='MJP']
We then create a memory block matching the memory layout of that constant buffer, then at runtime we create a constant buffer with the appropriate size and just copy the data into it.
[/quote]


So you have a custom struct matching the memory layout of the corresponding constant buffer?
-----Quat

[quote name='MJP' timestamp='1311191928' post='4838125']
We then create a memory block matching the memory layout of that constant buffer, then at runtime we create a constant buffer with the appropriate size and just copy the data into it.
[/quote]
[quote name='Quat']
So you have a custom struct matching the memory layout of the corresponding constant buffer?
[/quote]

What it sounds like to me is that they allocate memory and fill in the data using memcpy, with offsets into the allocated memory based on the constant buffer layout they determined in the build pipeline. That would avoid creating custom structs, I believe.

[quote name='MJP' timestamp='1311191928' post='4838125']
We then create a memory block matching the memory layout of that constant buffer, then at runtime we create a constant buffer with the appropriate size and just copy the data into it.
[/quote]
[quote name='Quat']
So you have a custom struct matching the memory layout of the corresponding constant buffer?
[/quote]

No, just a raw block of memory.
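
Continuing the sketch from before, the runtime side just writes into that raw block at the reflected offsets and then uploads it in one go (this reuses the hypothetical MaterialLayout from the reflection sketch above):

#include <cstring>
#include <map>
#include <string>
#include <vector>

struct MaterialInstance
{
    const MaterialLayout*      layout;  // name -> byte offset map built at build time
    std::vector<unsigned char> data;    // raw memory block, layout->size bytes, matching the CB layout

    void SetParam(const std::string& name, const void* value, size_t valueSize)
    {
        std::map<std::string, UINT>::const_iterator it = layout->paramOffsets.find(name);
        if (it != layout->paramOffsets.end())
            std::memcpy(&data[it->second], value, valueSize);  // write at the reflected offset
    }
};

// Uploading is then a single copy into the constant buffer resource, e.g.:
// context->UpdateSubresource(materialCB, 0, NULL, &instance.data[0], 0, 0);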

This topic is closed to new replies.
