Material and Shader Systems


Hello,

I'm wondering if anyone has examples of how they handle material and shader systems, particularly the association of uniform/constant buffers with the correct slots. Currently I define a material file in XML (it could be JSON or anything else; it's just XML for now):
 


<MaterialFile>
	<Material name="BasicPBS" type="0">
	<!-- States-->
		<samplers>
			<sampler_0>PointWrap</sampler_0>
		</samplers>
		<rasterblock>CullCounterClockwise</rasterblock>
		<blendblock>Opaque</blendblock>
		
	<!-- Properties -->	
		<Properties>
			<color r="1" g="1" b="1" a="1"></color><!-- tint color -->
			
			<diffuse>
				<texture>SnookerBall.png</texture>
				<sampler>sampler_0</sampler>
			</diffuse>
			
			<specular>
				<value r="1" g="1" b="1" a="1"></value>
			</specular>
			
			<metalness>
				<value>0</value>
			</metalness>
			
			<roughness>
				<value>0.07</value>
			</roughness>
		</Properties> 
	</Material>
</MaterialFile>

The sampler, raster, and blend blocks are all common reusable blocks that a material's C++ side can point to based on the names in the XML; it is similar to XNA and DirectXTK:
https://github.com/Microsoft/DirectXTK/wiki/CommonStates

Common data such as the camera's view, projection, and viewProjection matrices can be stored in a single PerRender constant buffer, and the mesh's data (world, worldProjection, etc.) can be stored in a PerObject constant buffer that is set by a world/render manager each time it draws an object.
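For reference, here is a minimal sketch of what those two buffers might look like on the C++ side (the struct and member names are placeholders, not from any particular engine); each XMFLOAT4X4 is 64 bytes, so both structs already satisfy the 16-byte packing rules for constant buffers:

#include <DirectXMath.h>

// Updated once per frame; bound to a fixed slot (e.g. b0) for every draw.
struct PerRenderConstants
{
	DirectX::XMFLOAT4X4 view;
	DirectX::XMFLOAT4X4 projection;
	DirectX::XMFLOAT4X4 viewProjection;
};

// Updated per object by the world/render manager before each draw (e.g. b1).
struct PerObjectConstants
{
	DirectX::XMFLOAT4X4 world;
	DirectX::XMFLOAT4X4 worldViewProjection;
};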

The material will also obviously store a reference to the shader that has been loaded by the shader manager, reusing it if it's already loaded.

That leaves the material properties: how can they be handled in a dynamic way? Or do you have to explicitly create individual material classes that use Material as a base and override virtuals (yuck) to set the individual property types? For example, you can have materials that have a texture for diffuse but just plain values for roughness and metalness, or you can have one with all maps, which means there have to be permutations. How can that be handled in a clean way?

Thanks :)


Hello,

 

As you guessed, a material will not always use a single value for a material component (specular, roughness, etc.), nor will it always use a texture. The diffuse will not always use only one texture either; you'll sometimes want a complex combo of multiple textures with custom shader code. Think of a water material where you have multiple normal maps translating over each other, for example.

 

In my project I handled it the same way UE4 handles its materials, except that I have no node graph editor; it is entirely code based (just like you did with XML so far). So first of all, I define the parameters exposed by the material to the engine in a way similar to this:


<Material name="BasicPBS" type="0">
	<Parameters>
		<Parameter name="myDiffuseTexture1" type="Texture2D" ... />
		<Parameter name="myDiffuseTexture2" type="Texture2D" ... />
		<Parameter name="someConstantBuffer" type="float" ... />
	</Parameters>
</Material>

The engine then knows what every material expects as input, which makes things much easier (especially in DX12, where you have to create root signatures that match your shader inputs). It is also easy to dynamically generate the proper shader code for the input constant buffers from this information when compiling your shaders.
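As a rough illustration (all names hypothetical, and assuming the first couple of register slots are reserved for the common PerRender/PerObject buffers), generating that constant buffer code from the parsed parameter list can be as simple as string concatenation:

#include <string>
#include <vector>

// Hypothetical parsed form of one <Parameter> tag.
struct MaterialParameter
{
	std::string name;	// e.g. "myDiffuseTexture1"
	std::string type;	// e.g. "float", "float4", "Texture2D"
};

// Builds the HLSL that replaces an include marker in the shader template.
// Texture parameters become resource declarations; everything else is
// packed into one material constant buffer.
std::string GenerateParameterCode(const std::vector<MaterialParameter>& params)
{
	std::string textures;
	std::string cbuffer = "cbuffer MaterialParams : register(b2)\n{\n";
	int textureSlot = 0;
	for (const auto& p : params)
	{
		if (p.type == "Texture2D")
			textures += "Texture2D " + p.name + " : register(t" +
			            std::to_string(textureSlot++) + ");\n";
		else
			cbuffer += "\t" + p.type + " " + p.name + ";\n";
	}
	cbuffer += "};\n";
	return textures + cbuffer;
}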

 


struct VertexOutput 
{
	#include <VERTEX_OUTPUT_LAYOUT>
};

In my case it is a simple #include that is replaced by the engine at compile time, based on the parameter information in the XML file.

 

Obviously materials all have one thing in common: they all need to be aware of the camera projection and a few other things. So those constant buffers are assumed by the engine, and it's not necessary to specify them in the material code. You are free to add as much information about your parameters as you want; this is just a basic example.

 

Here's the important part: the way I define the material components is a bit different from yours. Instead of having only the option to specify a texture or a single value for the diffuse/normal/specular/roughness, each component is instead shader code. UE4 also does this, except through its node graph editor: you can do any shader operation within your materials. So my diffuse tag could, for example, look like this:


<diffuse>
	float someVar = whatever;
	...
	some more hlsl code
	...
	output.Albedo = Sample(myDiffuseTexture1, someSampler, input.TextureCoordinates) * Sample(myDiffuseTexture2, someSampler, input.TextureCoordinates);
</diffuse>

 

You are free to create your own intermediate language if you want this to be compatible with both HLSL and GLSL, but that is a lot of work. Another way would be to provide both versions of the code, but that is harder to maintain.

 

So at this point you would have everything you need to deal with it in your engine. Now, when you compile your material into a pixel shader, it's easy to have a "template" shader that inserts the shader code contained in your XML at the right place. Here's how that template could look:

 


//--------------------------------------------------------------------
//	Defines the material structure.
//--------------------------------------------------------------------
struct MaterialData
{
	float3	Albedo;
	float	Specular;
	float3	Normal;
	float	Roughness;
	float3	Emissive;
	float	OpacityMask;
};

//--------------------------------------------------------------------
// Entry point of the pixel shader.
//--------------------------------------------------------------------
PixelOutput PS(VertexOutput input)
{
	PixelOutput PIXEL_OUTPUT;
  
	MaterialData output;

	// generated code.
	#include <PIXEL_BODY>

	// check for opacity and discard the pixel if needed.
	if (output.OpacityMask < 1.0f)
	{
		discard;
	}

	// assign the render targets.
	COLOR_RENDER_TARGET = float4(output.Albedo, 1.0f);
	NORMAL_RENDER_TARGET = output.Normal;
	return PIXEL_OUTPUT;
}

In that template, the #include <PIXEL_BODY> is replaced by the shader code of the material components.
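In case it helps, that substitution can be a plain text replace performed before the source is handed to the shader compiler; a minimal sketch (the marker name follows the template above):

#include <string>

// Replaces every occurrence of `marker` (e.g. "#include <PIXEL_BODY>")
// in the shader template with the code generated from the material file.
std::string ExpandTemplate(std::string shaderTemplate,
                           const std::string& marker,
                           const std::string& generatedCode)
{
	for (size_t pos = shaderTemplate.find(marker);
	     pos != std::string::npos;
	     pos = shaderTemplate.find(marker, pos + generatedCode.size()))
	{
		shaderTemplate.replace(pos, marker.size(), generatedCode);
	}
	return shaderTemplate;
}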

And there we go: our materials can now be anything we want. Obviously we are forced to respect a standard in our XML file, for example assuming that we output to a variable named "output" which has a member called "Albedo", but that is the basic idea. I think this concept is very widely used, and you can extend it as much as you want. For example, I omitted the part where the render targets are entirely customizable as well. My materials are also not only pixel shader based; you are allowed to entirely modify what happens in the vertex shader part as well.

 

If you have more questions, don't hesitate to ask; I omitted a lot of information because it's hard to nail everything without writing an entire book about it.

Okay, that makes sense for the shader production side, but what is the method for actually passing the data to the correct parameter slots in the generated shader? Older DirectX APIs, like XNA's, let you set parameters by name, and I'm pretty sure GL does as well. But DX11+ uses constant buffers, and GL can use uniform blocks. With all the different permutations, how can the C++ side store the data and put it in a struct to pass without it being hard coded? Or is there a way to set each parameter individually, although that seems a bit less efficient than passing them all in one go.

Thanks for the help so far :D

56 minutes ago, Jemme said:

With all the different permutations, how can the C++ side store the data and put it in a struct to pass without it being hard coded? Or is there a way to set each parameter individually, although that seems a bit less efficient than passing them all in one go.

The trick is that you can create those "structs" that are used to update a constant buffer at runtime. If you think about it, the hard-coded structs used for this are really simple things: aligned pieces of contiguous memory, where each field is in reality an offset from the beginning address of that piece of memory. Knowing this, and knowing how the constant buffer layout of the shader looks, plus the field names, you can do the whole thing at runtime. Basically, you allocate enough memory to hold the CB; then, based on the order of the fields and their types, you calculate the offset for each one and store it in a list with its name. When you want to change a value in the buffer, you look up its name, add the offset to the beginning address of the memory chunk, and then write the value into the buffer at that position. You then "upload" this buffer to the GPU CB, and that's about it. What you have to be careful about is the alignment requirements, and if you use a tool to generate your shaders and their CBs, then it has to create a CB layout that respects the packing rules for CBs.
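To make that concrete, here is a minimal sketch of the idea (all names made up; a real version also has to follow the HLSL packing rules, e.g. a vector must not straddle a 16-byte boundary):

#include <cstring>
#include <string>
#include <unordered_map>
#include <vector>

// A CPU-side mirror of one constant buffer, built at runtime from the
// material file's parameter list (or from shader reflection data).
class ParameterBlock
{
public:
	// Registers a field at the current end of the block. The caller is
	// responsible for adding fields in correctly packed order.
	void AddField(const std::string& name, size_t sizeInBytes)
	{
		offsets[name] = data.size();
		data.resize(data.size() + sizeInBytes);
	}

	// Writes a value by name, e.g. Set("roughness", 0.07f).
	template <typename T>
	void Set(const std::string& name, const T& value)
	{
		std::memcpy(data.data() + offsets.at(name), &value, sizeof(T));
	}

	const void* Data() const { return data.data(); }
	size_t      Size() const { return data.size(); }

private:
	std::unordered_map<std::string, size_t> offsets;	// field name -> byte offset
	std::vector<unsigned char>              data;		// raw buffer contents
};

A 4x4 matrix field would be registered with AddField(name, 64), and the whole block is later uploaded to the GPU constant buffer in one go.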

Interesting idea, so allocate a block of bytes and set the byte data based on the values required; I never thought about it that way. So if, for example, it states Matrix4D, I can calculate that it requires 64 bytes and know its offset into the byte chunk based on the other data. I already have a binary IO class that handles byte data, so I can take stuff from that for the calculations. The structs can simply be labeled UserData, a bit like PhysX does for keeping a reference to your entities, and materials will usually have only one properties structure, so you can upload it to a single explicitly labeled slot that comes after the PerRender and PerObject data.

I will tell you how I managed it on my side; it's up to you to do exactly the same or to improve on it according to your needs.

First, the easy part. 

The common parameters that are not defined in the material file and that all materials use (camera projection, list of lights, engine tick, etc.) are handled in this way:

  • In the code, one or many structs that match the shader equivalents are defined.
  • One or many resources (constant buffers) are created at the initialization of the camera / scene / renderer with a size that matches those structs:
    • At the camera level for the projection/view/inverse matrices, etc.
    • At the scene level for the list of lights.
  • If dirty, the data is uploaded only once at the beginning of each frame; a simple memcpy of the struct is needed here (see the sketch after this list).
  • The same constant buffer resource is reused for every material shader and is always bound to the same slot (I reserve the first few slots for those common buffers).
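For what it's worth, a minimal D3D11 sketch of that dirty check and upload (names hypothetical; the buffer is assumed to have been created with D3D11_USAGE_DYNAMIC and CPU write access):

#include <cstring>
#include <d3d11.h>

// Uploads the CPU-side struct into the shared constant buffer once per
// frame, but only when something actually changed.
template <typename T>
void UploadIfDirty(ID3D11DeviceContext* context, ID3D11Buffer* buffer,
                   const T& cpuData, bool& dirty)
{
	if (!dirty)
		return;

	D3D11_MAPPED_SUBRESOURCE mapped = {};
	if (SUCCEEDED(context->Map(buffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
	{
		std::memcpy(mapped.pData, &cpuData, sizeof(T));
		context->Unmap(buffer, 0);
	}
	dirty = false;
}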

Secondly, the custom stuff.

  • A base abstract class is used to do all the common stuff (reading the XML, passing the generated shader code to the template's includes, and much more).
  • Each material inherits from that class.
  • All the custom parameters (those defined in a "Parameters" XML tag) are defined as one big constant buffer in the shader, while being careful to respect the 16-byte alignment rule (the actual shader code for that is generated dynamically by the base class when compiling the shader). The texture parameters are defined as a fixed-size array in the shader (the number of bound textures is known from the XML file).
  • All the custom parameters are also defined as one big struct in the code at the material level (omitting stuff that doesn't belong in a constant buffer, like textures, of course). Yes, I know that forces you to maintain both the struct and your material file :( But at least your constant buffers are now strongly typed in the code.
  • Custom parameters are allowed to have a default value defined in the XML, which is assigned to the struct and uploaded only once at the material's initialization. Textures can also have a default value, but there is nothing to upload there; they are only identified and will only be bound to the shader when it's about to be used. Having a default value on a texture parameter is basically the UE4 equivalent of setting the texture directly in the material while editing and not exposing it as a parameter.
  • The base material class has a "Bind" function which is used to be notified when the material is about to be used. I call this to make my material active before rendering a mesh. At that moment I bind my shaders and all their corresponding textures / constant buffers.
  • Each material has a SetParameters function that accepts the struct you defined earlier, which matches the shader's constant buffer. Calling this simply memcpys the entire struct and uploads it directly to the constant buffer resource (both match in size). See the sketch after this list.
  • For real custom parameters that don't rely on a default value, there is no magic: you will have to call "SetParameters" with your new data when you detect that the material is about to be used. Try to call it only when there are real changes in the data. If you don't wish to have all that hardcoded, this is where you have to start implementing some kind of scripting system, and another nightmare begins.
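A stripped-down sketch of how those last few bullets could look in code (all names hypothetical, D3D11 assumed):

#include <cstring>
#include <d3d11.h>

// Base class: owns the compiled shaders and the material constant buffer.
class Material
{
public:
	// Called right before rendering a mesh that uses this material:
	// binds the shaders, textures and the material constant buffer.
	virtual void Bind(ID3D11DeviceContext* context) = 0;

protected:
	ID3D11Buffer* materialConstants = nullptr;	// sized to match the shader cbuffer

	// Shared upload helper: memcpy the whole struct in one call.
	void Upload(ID3D11DeviceContext* context, const void* data, size_t size)
	{
		D3D11_MAPPED_SUBRESOURCE mapped = {};
		if (SUCCEEDED(context->Map(materialConstants, 0,
		                           D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
		{
			std::memcpy(mapped.pData, data, size);
			context->Unmap(materialConstants, 0);
		}
	}
};

// One derived class per material definition (not per asset).
class BasicPBSMaterial : public Material
{
public:
	// Strongly typed mirror of the shader's material constant buffer.
	struct Parameters
	{
		float color[4];
		float specular[4];
		float metalness;
		float roughness;
		float padding[2];	// keep the struct a multiple of 16 bytes
	};

	void SetParameters(ID3D11DeviceContext* context, const Parameters& p)
	{
		Upload(context, &p, sizeof(Parameters));
	}

	void Bind(ID3D11DeviceContext* context) override
	{
		// bind shaders and textures, then materialConstants to the slot
		// after the reserved PerRender/PerObject buffers...
	}
};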

I would recommend uploading your constant buffers as one big block instead of every member individually, as it's usually less expensive to do it all in one call. I lost the link, but I believe I've read in many places that it's generally not a good idea to partially update a constant buffer. XNA probably did this under the hood as well, in the form of letting you alter the struct on the CPU side as much as you want and uploading the constant buffer as one big block only when the shader is about to be used, if the data is dirty.

 

By the way, since shader switches are expensive, I only create a different material when the definition of the material actually changes. For example, I don't create different materials for brick, wood, and concrete if I know that they all run exactly the same shader code and have the same layout in general. It's better to just bind a new diffuse/normal texture and upload new parameters between draw calls to avoid switching shaders. So my materials are not actually named after what they look like (brick, concrete, wood) but after their functionality (basic bump-mapped material; bump-mapped material with specular and roughness maps; simple material with static specular and roughness).

Yeah, I am fully aware of the inefficiencies of per-parameter updates; I want to use constant buffers and uniform buffers, which is where this question originated. I have always used constant buffers in DX11 and recently used uniform blocks in OpenGL, just never for a full-blown system. For example, all my entities in the entity component system just used a default material, but it's now time to overhaul it into something actually dynamic. Unity's style of system is the use case I'd like, so nearly all materials will use a "permutation" of a standard shader and just pass their individual settings in, like you said, avoiding the need for "material types" like wood, etc.

So from here there are two possibilities:

Dynamic chunk allocation, with no additional materials derived from the base for the individual types,

or

Derive from the base material with hard-coded structs like you suggested, a bit similar to Ogre 2.0's HLMS system.

Thanks everyone, it's given me a lot to consider with how the system will progress :)

Glad you found the information useful. Give this thread some more time though; I am far from being one of the big brains of this forum, so I'm sure you'll get some clever input soon.

Yeah sure, any further ideas are welcome. Unity works in a similar way, with their "shader" files defining properties and some in-between code which compiles into variants for GLSL and HLSL.


https://unity3d.com/learn/tutorials/topics/graphics/gentle-introduction-shaders

I like how they describe properties in a way that makes it sound like they don't quite know how it works:

"The properties of your shader are somehow equivalent to the public fields in a C# script"

Their system seems to be dynamic because you can write a new shader with a variety of parameters, like you suggested, and it can be loaded by the inspector. That bit is easy; if anyone needs information on it, check Mike McShaffry's Game Coding Complete, page 790. I did it in WPF, and it probably works similarly with other UI libraries.

It does seem like Unity uses per-parameter setting:

https://docs.unity3d.com/ScriptReference/Material.SetInt.html

Which isn't great; it also means the Material class must store a list for each possible type (a list of ints, floats, vec4s, etc.) so that it can handle any of the parameter types it has to set. I'm definitely going towards the hard-coded material or the block allocation method.

Unless they keep the MaterialFile as a resource and just use the Material class as a pass-through to the shader; but even then it's not as good as submitting one block.
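For illustration, the kind of typed storage that a per-parameter API implies might look like this (just a sketch, assuming C++17 std::variant instead of separate lists per type):

#include <array>
#include <string>
#include <unordered_map>
#include <variant>

// One entry per parameter, whatever its type; Unity-style
// SetInt/SetFloat/SetVector calls all funnel into the same map.
using ParameterValue = std::variant<int, float, std::array<float, 4>>;

class MaterialParameters
{
public:
	void SetInt(const std::string& name, int v)     { values[name] = v; }
	void SetFloat(const std::string& name, float v) { values[name] = v; }
	void SetVector(const std::string& name, const std::array<float, 4>& v) { values[name] = v; }

private:
	std::unordered_map<std::string, ParameterValue> values;
};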
