Robust Shader Systems for a Game Engine (Advice & Discussion)


I am looking for some advice and to start a discussion on how to implement robust shader systems in my engine.

Right now I have shaders running, but they are extremely hard coded. I am looking for a way to remove that hard coding so that, in this case, my gameplay programmers can write shaders and have them work with the various objects in the game world without having to dive into the engine code.

In terms of structure right now I handle things like this:

[Image: class hierarchy diagram]

That should give a general idea of how I am set up. The problem is that inside my model class, all the parameters that get passed into the shader are hard coded.

In addition, I am working on porting things to an actual component-based system, rather than just having my root GameObject class called Component.


What members/methods do Camera and Model share? o_O

how to implement robust shader systems in my engine

I suggest iterative design. I spent quite a few months actually designing my shader system on paper. Wasted effort. You cannot think up a solution to a problem you don't have, much less understand.

What do you need for your actual project? Make this work. Nothing else. Then iterate. Hopefully by that time you'll have a better understanding of your needs and perhaps better machinery supporting you.

Anyway. The key point is strings (if you want to mess with uniform values) or opaque blobs to load into device registers (D3D9 slang) or uniform buffers (D3D10/GL slang). Those blobs come from the shader itself, the material, or the specific object. I got quite some mileage out of this.
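To make that concrete, here is a rough C++ sketch of the blob idea (ConstantBlob and UploadConstantBuffer are made-up names; the upload function is assumed to exist on the engine side):

#include <cstddef>
#include <cstdint>
#include <vector>

void UploadConstantBuffer(unsigned slot, const void* data, std::size_t size); // assumed engine function

struct ConstantBlob {
    unsigned slot;                     // D3D9 register index or D3D10/GL uniform-buffer binding
    std::vector<std::uint8_t> bytes;   // opaque data; only the shader knows the layout
};

// At draw time the renderer uploads whatever blobs the shader, the material
// and the object provide; it never interprets their contents.
void BindConstants(const std::vector<const ConstantBlob*>& blobs) {
    for (const ConstantBlob* b : blobs)
        UploadConstantBuffer(b->slot, b->bytes.data(), b->bytes.size());
}

The point is that the renderer stays ignorant of what is inside each blob, so new shader parameters never require touching engine code.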

Previously "Krohm"

NOTE: below are just some experiences and what I've found works for me; no real professional experience, I'm afraid - I'm doing this for the first time as well :)

I'm in a very similar situation right now, trying to work out an initial shader system that would be externally programmable, would be forward and backwards compatible (or at least open to extensibility), and would allow some form of "modularity" (e.g. adding planar tessellation to any flat surface shader at will). I strongly support what Krohm said about an iterative approach, as I, too, have found myself going back to the drawing board when trying to sketch out too much at a time. Here are a few things I've found make life easier for me, though:

1) My shaders are written in an easy-to-convert "pseudo-markup" that enforces fixed notation: a preprocessor converts the intermediate strings into GLSL based on what version is available. The shader cannot set its own version at all. It turns out that, if functionality doesn't get in the way (e.g. you don't need functions specific to a particular version), the conversion between GL 2.0 and 4.2 is relatively trivial. An important part of this is assuming that:

2) All input and output bindings are determined by the engine. You can't write to an equivalent of gl_FragData[0]; you can, however, write to gd_OutFragDiffuse. This standardizes mapping across all shaders. Granted, this isn't particularly flexible, although it could be made programmable as well, which would in turn make it more complex to manage.

3) "uniform blobs" as Krohm called them: my engine uses UBO's when available, stashing an entire state to the shader at once or if UBOs are not available, reverts to individual uniforms. I ended up writing a shader variable encapsulation class to automate this, though, as I want it to be programmable/extensible on the application end as well. Right now I only have a single transform state block (matrices, near/far planes, etc) that are handled as one chunk. This chuck is inserted into every shader as a UBO or only selectively based on what functionality the shader needs (determined by a simple preprocessor scan stage). It's up to the shader to know which inputs and outputs are bound. The trick is optimizing how this chunk is updated - it may need to be updated every draw call or it may not change throughout the entire frame. I haven't done this yet, but since the block is often-times unique to a shader, each block will obtain its own "state ID". The engine transform state has an ID as well, which is incremented each time it becomes dirty and the application performs a draw call: if the states don't match, the varblock's members are updated.

Next up is to add material varblocks, which follow a similar strict notational style, and to work out how to permute shaders to minimize shader count and branching. I'm considering writing a graph system for it to see which combinations are required, then using weighting to determine how much branching is warranted (e.g. if something like 2-3% of textures use an opacity map, then it doesn't make sense to add opacity as a branch, which would affect every other texture as well; if 50% do, though, then a branch makes more sense) and then batch-compiling them.

The one thing I haven't quite figured out is 3D texture coordinates, as all my in-engine streams only use U and V components. I'm overlooking that for now, though, as I'm not using 3D textures anywhere.

What members/methods do Camera and Model share? o_O

Position, Rotation, etc., but like I said, I am also moving to a more component-based design, away from the system I have now. Component is a bloated class right now.

how to implement robust shader systems in my engine

I suggest iterative design. I spent quite a few months actually designing my shader system on paper. Wasted effort. You cannot think up a solution to a problem you don't have, much less understand.

What do you need for your actual project? Make this work. Nothing else. Then iterate. Hopefully by that time you'll have a better understanding of your needs and perhaps better machinery supporting you.

Anyway. The key point is strings (if you want to mess with uniform values) or opaque blobs to load into device registers (D3D9 slang) or uniform buffers (D3D10/GL slang). Those blobs come from the shader itself, the material, or the specific object. I got quite some mileage out of this.

Normally I would 100% agree, but since this is a school project, where the engine design is what matters, not the end result of the game, I need to consider these things.

I'd recommend taking a multi-layered approach. First, decouple your constants from your shaders. Chances are most of them share a number of constants. Be consistent in your naming.

Next, look at a rapid prototyping approach. There was a pretty good article by promit on this a while ago, but the link appears to be dead. The idea is to scan a shader for constants and link them up to a provider class. In my implementation, I search a class for public properties (C#, though you can do this in any language) and link each one with the associated constant in the shader file. This frees you from a lot of boilerplate code and lets you spend your time tweaking a shader as you see fit.

Once your shaders are finalized, you can write concrete classes that avoid the reflection and other overhead of the rapid prototyping approach.
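As a rough illustration, a C++ stand-in for the reflection step might look like this (C++ has no built-in reflection, so this sketch fakes it with registered getters; all names here are made up):

#include <cstddef>
#include <functional>
#include <string>
#include <unordered_map>
#include <utility>

// The provider registers named getters, and every constant found while
// scanning the shader source is matched against a getter with the same name.
class ConstantProvider {
public:
    using Getter = std::function<const void*(std::size_t& sizeOut)>;

    void Register(const std::string& name, Getter getter) {
        getters_[name] = std::move(getter);
    }

    // Called once per constant declared in the shader; returns false if the
    // provider has no matching property.
    bool TryBind(const std::string& constantName, const void*& data, std::size_t& size) const {
        auto it = getters_.find(constantName);
        if (it == getters_.end())
            return false;
        data = it->second(size);
        return true;
    }

private:
    std::unordered_map<std::string, Getter> getters_;
};

In C# the Register calls disappear because the property list comes straight from reflection, which is what makes the prototyping loop so fast.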

One concept that I think is particularly useful is thinking about uniform/attribute/texture variable usages. Since the shader writer knows what the semantic usage of each input variable is, they can tag each variable with a usage enum value which specifies the semantic purpose of that variable (vertex position, normal, color, diffuse map, environment map, shadow map, model-view matrix, etc...). This could be done in some XML shader metadata format or something, accompanying the shader source.

At render time, the renderer looks at the input variables for a shader. If any of these inputs are not set (with constant uniform/texture values), the renderer can choose to provide the information needed from the current scene rendering state. For instance, tagging a shader variable as the model-view matrix tells the renderer that it needs to provide the proper transformation matrix for the transformation stack currently being rendered. This prevents you from having to define certain hard-coded variable names as representing those semantic usages (ugh, fragile!). The shader's metadata provides this information, linking variable usages to scene data.

This technique can also be used to bind shaders to meshes/textures that are not otherwise related. The mesh specifies that it has a buffer of vertex positions, buffer of UVs, colors, etc, each with a usage enum. At render time, the renderer matches up the vertex buffers with the correct semantically-tagged shader attribute variables for those usages.

In addition, each usage enum object also contains an index value, indicating that it refers to the i'th usage of that type. This allows things like multiple lights, multiple texture coordinates, or anything else that can be enumerated (probably not model-view matrices!). A shader for 2 point light sources would provide input variables for the usages (light position - 0, 1) and (light color - 0, 1). The renderer (which knows about the light sources in view) can then find the two most important lights and provide them to the shader automatically!
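A rough sketch of what that binding loop could look like (the engine types and SetUniform overloads here are stand-ins, not from any real renderer):

#include <string>
#include <vector>

// Minimal stand-ins for engine types (assumed, not from the post above):
struct Matrix4 { float m[16]; };
struct Vector3 { float x, y, z; };
struct Light   { Vector3 position, color; };
struct SceneState {
    Matrix4 modelView;
    std::vector<Light> lights;      // sorted so the most important lights come first
};

void SetUniform(const std::string& name, const Matrix4& value);  // assumed shader-side setters
void SetUniform(const std::string& name, const Vector3& value);

enum class Usage { Position, Normal, TexCoord, ModelViewMatrix, LightPosition, LightColor, DiffuseMap };

struct ShaderInput {
    std::string variableName;       // name of the variable in the shader source
    Usage usage;                    // semantic meaning, taken from the shader's metadata
    int index;                      // i'th usage of that type, e.g. light 0 or light 1
    bool boundByMaterial;           // true if the material already supplied a constant value
};

// Any input the material did not set is filled in from the scene state.
void BindSceneInputs(const std::vector<ShaderInput>& inputs, const SceneState& scene) {
    for (const ShaderInput& in : inputs) {
        if (in.boundByMaterial) continue;
        switch (in.usage) {
        case Usage::ModelViewMatrix: SetUniform(in.variableName, scene.modelView);                 break;
        case Usage::LightPosition:   SetUniform(in.variableName, scene.lights[in.index].position); break;
        case Usage::LightColor:      SetUniform(in.variableName, scene.lights[in.index].color);    break;
        default: break;             // vertex-attribute usages are matched against mesh buffers instead
        }
    }
}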

What members/methods do Camera and Model share? o_O

Orientation (rotation, scale, and position) and a child/parenting system.
I am not the original poster, but he mentioned the orientation part as well, and he is 100% correct.
If you have one, the same child/parenting system would also be shared by these 2 objects.


Here is one common implementation:
CEntity

  • Has a COrientation

CActor : public CEntity

  • Has a list of children and a single parent, all of type CActor *.

C3dModelInstance : public CActor
CCamera : public CActor

CEntity and CActor can be merged into one class (often done) as well.
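In code, a bare-bones version of that hierarchy might look something like this (the math types are just placeholders):

#include <vector>

// Assumed math types, not part of the hierarchy above:
struct Vector3    { float x, y, z; };
struct Quaternion { float x, y, z, w; };

class COrientation {
public:
    Vector3    position{};
    Vector3    scale{1.0f, 1.0f, 1.0f};
    Quaternion rotation{0.0f, 0.0f, 0.0f, 1.0f};
};

class CEntity {                      // anything that exists in the world
public:
    COrientation orientation;
};

class CActor : public CEntity {      // adds the child/parent relationship
public:
    CActor*              parent = nullptr;
    std::vector<CActor*> children;
};

class C3dModelInstance : public CActor { /* geometry, materials, ... */ };
class CCamera          : public CActor { /* projection and view helpers, ... */ };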

I see a lot of people making camera classes where the position and rotation are directly part of the camera class itself, as if the camera exists in the game world in some magically different way from everything else. This is a mistake, and the topic poster is 100% correct as far as the high-level view of this part of his design is concerned. With proper design, models and cameras should share the same low-level features as every single other object in the scene (except certain types of particles).

L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid

I haven't tried to write one in several years. One method I remember was to have all the shaders use the same names for common variables, like the world matrix, e.g.

Mat44 worldMatrix;

and to have all the other shaders use the same variable names. Then you can create a common set of parameters that you upload to the shaders, e.g. world, view and projection matrices; textures including diffuse, bump, AO and normal maps; and an array of light positions, depending on the type of render you want to implement. I used to use a scene graph:

SceneNode
- ReferenceFrame

SceneObject : public SceneNode
- Geometry
- Material List
each material has a shader, so there is a parameter set in the material class for each shader type

Then, when I render, I have the object subsets sorted by material type so that I can set a shader and draw all geometry using that shader at the same time; this reduces the number of state changes. I also recommend sorting by texture.
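A minimal sketch of that sort (DrawItem and the ID fields are made-up names):

#include <algorithm>
#include <vector>

struct SceneObject;                  // engine-side type, assumed

// Order the draw list by shader first and texture second, so consecutive
// draws share as much state as possible.
struct DrawItem {
    unsigned           shaderId;
    unsigned           textureId;
    const SceneObject* object;
};

void SortForMinimalStateChanges(std::vector<DrawItem>& items) {
    std::sort(items.begin(), items.end(),
              [](const DrawItem& a, const DrawItem& b) {
                  if (a.shaderId != b.shaderId) return a.shaderId < b.shaderId;
                  return a.textureId < b.textureId;
              });
}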

Now, I suppose you want to create custom parameters. For example, if you did not want to hard code a specular power variable, there would be more than one way of implementing this. I would probably unify the material creation and shader creation activity. So a custom material would have a shader predefined, and then some kind of metadata file (perhaps XML) with extra properties: the type of each property (i.e. float, integer, matrix, texture, etc.) and the associated name of the resource or value of the variable. You might also need some kind of transformed light vector (i.e. the light vector in view space); for these I would just add a rigorous set of methods to the light object so that you can obtain the transformed light vector needed, and then use a common naming convention (i.e. lightViewInv) for defining the transformation required.

So anything that depends on other objects in the world (lights, reflective surfaces) would have predefined accessors, and anything like a custom variable would have a metadata object with the name of the variable in the shader and the actual resource name. You could then keep a list of metadata objects and simply run them through a loop to retrieve the information from the engine.
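A rough sketch of that loop (Shader, Engine and the setter names here are placeholders, not a real API):

#include <string>
#include <vector>

// Assumed engine/shader interfaces, not from the post above:
struct Texture;
struct Shader {
    void SetFloat(const std::string& name, float value);
    void SetTexture(const std::string& name, Texture* texture);
};
struct Engine {
    Texture* FindTexture(const std::string& resourceName);
};

// One record per custom parameter, loaded from the material's metadata file.
struct ParamMeta {
    enum class Type { Float, Int, Matrix, Texture } type;
    std::string shaderName;          // variable name inside the shader
    std::string resourceOrValue;     // resource name, or the literal value as text
};

// A single loop pushes every custom parameter to the shader; nothing is hard coded.
void ApplyCustomParams(const std::vector<ParamMeta>& params, Shader& shader, Engine& engine) {
    for (const ParamMeta& p : params) {
        switch (p.type) {
        case ParamMeta::Type::Texture:
            shader.SetTexture(p.shaderName, engine.FindTexture(p.resourceOrValue));
            break;
        case ParamMeta::Type::Float:
            shader.SetFloat(p.shaderName, std::stof(p.resourceOrValue));
            break;
        default:
            break;                   // ints, matrices, transformed light vectors, etc. handled the same way
        }
    }
}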

