Ideas on shader system

Started by
4 comments, last by obhi 13 years, 8 months ago
Most game engines allow shader variables to pass data from renderable objects to the renderer. Given that all uniform or persistent data needed by a particular shader can be grouped using constant buffers and texture buffers, what is the best approach for setting these variables automatically, so that if a shader is changed, zero code changes are needed to set up the variables again? And what is the best approach for mapping these variables?
The problems are the following:
1. Shader variables can be set (and hence constant buffers can be updated) at different levels: per frame, per object, and, if possible, some variables should persist over multiple frames.
2. The parameters can be stored in different types of buffers, constant or texture, depending on the rendering technique.

I wanted to address the first problem and tried to come up with a mapping technique that could be both efficient and robust. However, there are certain trade-offs. Briefly, I will lay down my system:

The ShaderParam class is meant to store a variable; it acts as a source of data for a certain type of variable. It can hold references (pointers) to other variables (non-shader types), or it can have its own buffer.

ShaderInstance is a class which will store the list of ShaderParams needed by the current shader for a particular Renderable; it will do the mapping. Some shader params will be obtained from the Renderable, others from the environment.

ShaderSystem will be the system to which the ShaderInstance will be submitted for rendering, and, prior to that, during a Prepare call. This class will store templates of the constant buffers that the shaders require (obtaining the info from the ShaderInstance::Prepare() call; certain templates will already have been registered by previous ShaderInstances). Only unique templates will be created, and the templates will be mapped to the rendering-API-specific constant buffers. The system will create a list of ShaderParam sources that map to each constant buffer and store these lists in the ShaderInstances; such a list will be called a ShaderParamGroup. During rendering, it will take the shader instances and use the mapping to update the constant buffers. To optimize this further, the shader params will store ticket values (state numbers) that change as soon as the params are updated; a constant buffer is updated only when this change is propagated to the ShaderParamGroup (by callbacks).
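The ticket (state-number) idea above could be sketched roughly as follows. This is a minimal illustration, not the actual engine code; the four-float payload and the single-source group are placeholder assumptions, labeled as such in the comments:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Hypothetical sketch: a param bumps a ticket on every write; a group
// re-uploads its buffer only when the ticket differs from the one it
// last saw. Payload size and layout are illustrative only.
struct ShaderParam {
    float data[4] = {0, 0, 0, 0};
    uint32_t ticket = 0;                     // bumped on every update
    void Set(const float* v) { std::memcpy(data, v, sizeof(data)); ++ticket; }
};

struct ShaderParamGroup {
    ShaderParam* source = nullptr;
    uint32_t seenTicket = 0;
    float constantBuffer[4] = {0, 0, 0, 0};  // stands in for the API buffer
    // Returns true only when an upload actually happened.
    bool Update() {
        if (source && source->ticket != seenTicket) {
            std::memcpy(constantBuffer, source->data, sizeof(constantBuffer));
            seenTicket = source->ticket;
            return true;
        }
        return false;
    }
};
```

With this shape, a frame that touches no params costs only integer compares, which is the point of the ticket scheme.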

I have come up with this idea, but I could use some criticism and inputs.

Thanks for reading,
obhi

What if everyone had a restart button behind their head ;P
I use a roughly similar method, but with some differences. I have a class (ParameterManager) to house all of the parameters, which is basically a mapping from the textual name of an object used in shaders (buffer, texture, matrices, vectors, etc.) to the actual object on the application side. Any class that has information to set in this ParameterManager can do so, and the data is put into the data structure for use later on.

Then when it is time to render, my shader classes have read out from the shader reflection API all of the objects, their types, and their names required for usage. Whatever is needed gets loaded into the appropriate buffers and set into the pipeline for rendering.

Since the parameter manager is a class (and is not static), this also allows me to use different 'ecosystems' of parameters for different domains within a rendered scene. For instance, when a rendering pass for shadow maps is executing, it doesn't need parameters that were set up for a reflection-map pass. Then you can do fancy stuff like making a default parameter system and allowing a 'parent' parameter manager to be linked to by child ones... The whole system is working out pretty well, and I think your description goes along the same lines.
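The parent/child lookup described above could be sketched like this. Class and method names here are illustrative assumptions, not the actual API from the post; the `const void*` value type simply stands in for whatever object handles the real system stores:

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical sketch of a non-static parameter manager with a parent
// fallback: a child (e.g. a shadow-map pass) resolves names locally
// first, then defers to its parent manager.
class ParameterManager {
public:
    explicit ParameterManager(ParameterManager* parent = nullptr)
        : m_parent(parent) {}

    void SetParameter(const std::string& name, const void* value) {
        m_params[name] = value;
    }

    // Local lookup first; fall back to the parent chain, else nullptr.
    const void* GetParameter(const std::string& name) const {
        auto it = m_params.find(name);
        if (it != m_params.end()) return it->second;
        return m_parent ? m_parent->GetParameter(name) : nullptr;
    }

private:
    ParameterManager* m_parent;
    std::map<std::string, const void*> m_params;
};
```

A shadow pass would then get its own child manager: its pass-specific matrices shadow nothing in the parent, while shared globals still resolve through the fallback.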
Quote:Original post by Jason Z
Since the parameter manager is a class (and is not static), this also allows me to use different 'ecosystems' of parameters for different domains within a rendered scene. For instance, when a rendering pass for shadow maps is executing, it doesn't need parameters that were set up for a reflection-map pass. Then you can do fancy stuff like making a default parameter system and allowing a 'parent' parameter manager to be linked to by child ones... The whole system is working out pretty well, and I think your description goes along the same lines.


That's interesting. I like the domain concept, although I presumed that my ShaderInstances will explicitly specify the params needed, and the related constant buffers will be updated accordingly. I would definitely like to share the constant buffers across, if that is possible. The template concept (basically a named list of variable-type/semantic pairs) allows that, so I will explore it a bit.

One thing I would like to know, however, is how much automation there is in this system. I have studied the RenderMonkey interface to come up with something that will require only scripts to set up the whole effect. The problem I am facing is with SkinnedMeshes: the transformations have to be set individually in a certain type of resource (be it an array of constant buffers, a texture buffer, a texture, or a simple buffer). This will require additional coding, which right now seems the only way to achieve it. I wanted to decouple this, but it seems OK anyway :)

Thanks for sharing :)
What if everyone had a restart button behind their head ;P
I do things a bit differently. I really didn't like handling all the mapping of shader variables to shader code, and found it made iterating through various ideas/prototypes cumbersome at best.

So I wrote a custom shader language in C++ using operator overloads. Then I could write the shader in C++, and directly access the appropriate variables in my C++ code without any explicit mappings.

For example, this was a very simple and unoptimized vertex shader I was playing around with the other day (I chose it because it compiles and runs, not because it does anything special).

VSOut VertexShader (VSIn in) {
	VSOut out;
	out.normal = in.normal;
	out.texCoord = in.texCoord;
	out.color = in.color;
	out.clrBlend = in.clrBlend;
	// get model and camera matrix
	float4x4 mm(&modelMatrix);
	float4x4 cvm(&camViewMatrix);
	float4 pos = in.position;
	pos = mul(mm, pos);
	pos = mul(cvm, pos);
	out.position = pos;
	return out;
}


In the example above, modelMatrix and camViewMatrix are matrices in my C++ code, and when I invoke the shader, the runtime copies their values for me. I can also embed constants directly into the shader code like:
float1 x(global_x_from_cpp_code);

and so the shader compiler can do things like loop unrolling or what have you. Because they're copied from my C++ code, any time I recompile the shader, any changes to the variable are propagated to the shader code. Works well for things that don't change often.

I wouldn't suggest going through all the hassle of doing what I did, but I thought I'd mention it because it is a different way of looking at the problem. It's not as flexible as the system you're describing, but it is really easy to use, and nearly impossible to mess up (if it compiles, you're set).
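The operator-overloading trick behind such an embedded shader language can be sketched in a toy form. This is not the poster's actual implementation; it just shows the core idea that each arithmetic operator on a wrapper type records an expression (here, as source text) instead of computing a value, so running the C++ "shader" emits shader code:

```cpp
#include <cassert>
#include <string>

// Toy expression-recording type: operators build up a source string.
// A real embedded shader language would build a typed expression tree
// and feed it to a code generator; the string is illustrative only.
struct float1 {
    std::string expr;
    explicit float1(std::string e) : expr(std::move(e)) {}
};

inline float1 operator*(const float1& a, const float1& b) {
    return float1("(" + a.expr + " * " + b.expr + ")");
}

inline float1 operator+(const float1& a, const float1& b) {
    return float1("(" + a.expr + " + " + b.expr + ")");
}
```

Because the overloads run as ordinary C++, host-side values and control flow can be baked into the generated code at the point of recording, which is exactly what makes the "recompile to propagate constants" behavior described above fall out for free.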
Quote:Original post by obhi
Quote:Original post by Jason Z
Since the parameter manager is a class (and is not static), this also allows me to use different 'ecosystems' of parameters for different domains within a rendered scene. For instance, when a rendering pass for shadow maps is executing, it doesn't need parameters that were set up for a reflection-map pass. Then you can do fancy stuff like making a default parameter system and allowing a 'parent' parameter manager to be linked to by child ones... The whole system is working out pretty well, and I think your description goes along the same lines.


That's interesting. I like the domain concept, although I presumed that my ShaderInstances will explicitly specify the params needed, and the related constant buffers will be updated accordingly. I would definitely like to share the constant buffers across, if that is possible. The template concept (basically a named list of variable-type/semantic pairs) allows that, so I will explore it a bit.

One thing I would like to know, however, is how much automation there is in this system. I have studied the RenderMonkey interface to come up with something that will require only scripts to set up the whole effect. The problem I am facing is with SkinnedMeshes: the transformations have to be set individually in a certain type of resource (be it an array of constant buffers, a texture buffer, a texture, or a simple buffer). This will require additional coding, which right now seems the only way to achieve it. I wanted to decouple this, but it seems OK anyway :)

Thanks for sharing :)

The system is 100% automatic - when I load a shader, all of its parameters are read out and stored in a list. At bind time, all the parameters are read out and used for whatever they need to be. I should also point out that the cbuffers themselves are parameters too - they are named and their references are stored in the parameter manager, and so can be shared across multiple instances of shaders without any problems.

With respect to skinned meshes, I have a mechanism for adding parameters to an Entity3D (which is my 'Object' class for a scene). This allows for writing a function like 'LoadMeshWithAnimation' which can create an Entity3D, then create an animation controller and attach it to the Entity3D. The controller can then set the appropriate transform matrices in the parameter manager that need to be set in order to render the skinned mesh. The system allows this type of additional behavior from Entity3D and from a MaterialDX11 class as well, so any special case parameters are very neatly included by the rendering system with minimal additional code on the application side.

This parameter manager functionality was actually part of my renderer up until a few weeks ago, when I split it up. That was done to facilitate multithreaded rendering with D3D11, but the changed design is significantly better than it was before: cleaner, the renderer is not as huge, and object-oriented handling of the parameters allows for re-entrant, recursive traversals of the scene graph, making the handling of mirrors or something like that quite simple.

Thanks Jason and Rayan for your replies.

Quote:
With respect to skinned meshes, I have a mechanism for adding parameters to an Entity3D (which is my 'Object' class for a scene). This allows for writing a function like 'LoadMeshWithAnimation' which can create an Entity3D, then create an animation controller and attach it to the Entity3D. The controller can then set the appropriate transform matrices in the parameter manager that need to be set in order to render the skinned mesh. The system allows this type of additional behavior from Entity3D and from a MaterialDX11 class as well, so any special case parameters are very neatly included by the rendering system with minimal additional code on the application side.


I get your point. But I am taking a slightly different approach to maintain consistency of parameters no matter what technique is used for rendering (as I mentioned with SkinnedMeshes). Clearly your system does that, but I have thought about it a lot and I am sticking to my concept. What I am doing is making ShaderParams heavily abstract, with only a Get function to which a destination buffer is passed so the contents of the parameter can be copied into it. This buffer is passed by an implementation of ShaderParamGroup, which can be anything from a texture, a simple buffer, a constant buffer, or a texture buffer. The ShaderParamGroup is set up using semantics in the effect script. This is quite a different approach from the traditional methods (where ShaderParams are typed), but I think this method is robust and conceptually clear (considering the GS, VS, or PS will only need simple data types to work on). It also decouples the problem I mentioned.
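The "heavily abstract, only a Get function" idea above could look something like this. The class and method names follow the post; everything else (the concrete Float4Param, the Size method) is an illustrative assumption added so the sketch compiles:

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>

// Sketch: a ShaderParam exposes only a byte-copying Get(), so the
// ShaderParamGroup owning the destination (constant buffer, texture
// buffer, plain buffer...) never needs to know the concrete type.
class ShaderParam {
public:
    virtual ~ShaderParam() = default;
    virtual void Get(void* dst) const = 0;  // copy contents into dst
    virtual std::size_t Size() const = 0;   // hypothetical helper for sizing
};

// Illustrative concrete param: a four-float value.
class Float4Param : public ShaderParam {
public:
    float v[4] = {0, 0, 0, 0};
    void Get(void* dst) const override { std::memcpy(dst, v, sizeof(v)); }
    std::size_t Size() const override { return sizeof(v); }
};
```

The group implementation then just walks its params and hands each one the right offset into whatever resource it manages, which is what keeps the pipeline stages seeing only simple data types.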
What if everyone had a restart button behind their head ;P

This topic is closed to new replies.
