Managing Materials / Shaders, Their Inputs, and Different Types of Geometry.

Started by
4 comments, last by Krohm 10 years, 7 months ago

I wanted to have this great "material" system where I could create a simple description of a material in a text file and slap it on just about any geometry I wanted. Different types of geometry are things like static meshes, animated meshes, particles, instanced geometry, ribbon-trails, etc.

These different types of geometry require different vertex shaders, maybe tessellation or geometry shaders as well, and have inputs to go with those shaders such as bone orientations for animated meshes.

So in my head it made sense to create a Geometry class whose purpose is to contribute the vertex-processing part of the shaders, handle the vertex-processing inputs, and control how the primitives are submitted to the GPU.

Then, to go with it, I created a Shading class that supplied the fragment-processing part of the shaders and the inputs for the fragment processing, and controlled which "bucket" the draw call should go in (Transparent, Forward Opaque, Deferred, Glowing, Distortion, etc.).

There were agreed-upon inputs/outputs between shading stages (Normal, Position, TextureCoordinate, etc.).

This turned out horribly, or at least the way I handled it did.

  • You couldn't have shading that required special vertex shading.
  • You had to write special shaders for instanced geometry anyway if it was to have any variation in shading, thus defeating the purpose.
  • Particles were a mess because a particle's behavior influences both the vertex- and fragment-processing ends, which ultimately defeated the purpose of separating geometry and shading.
  • Forward lighting became problematic because it needed multiple shaders for different light types, while other kinds of shading, like deferred, didn't need to know about the lights at all.

So the system is pretty much broken and I need to replace it with a better way of doing things. So how do you handle minimizing your shader rewriting? How do you pair your "materials" with your "geometry"?

I tried to search around for related information but I wasn't entirely sure what to search for so if I've missed a great thread post me a link.

The two most common methods are stitching and permutation.

Stitching
The shader is broken into sections, each in a separate file.
Each file has only one task for the shader, such as Blinn lighting or Oren-Nayar lighting.
Based on flags passed to the shader manager, a final shader is created by loading each necessary file and stitching them together where needed.
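As a rough sketch of the stitching approach (the chunk table and StitchShader helper are hypothetical; a real engine would load the chunks from separate shader files on disk):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Hypothetical shader-chunk table; in practice each entry would be the
// contents of a separate file (e.g. "blinn.frag", "orennayar.frag").
static const std::map<std::string, std::string> kChunks = {
    {"common",    "// shared declarations\n"},
    {"blinn",     "// Blinn-Phong lighting\n"},
    {"orennayar", "// Oren-Nayar lighting\n"},
    {"fog",       "// fog blending\n"},
};

// Stitch a final shader from the chunks named by `flags`, in order.
// "common" is always emitted first; unknown flags are skipped.
std::string StitchShader(const std::vector<std::string>& flags) {
    std::string out = kChunks.at("common");
    for (const auto& f : flags) {
        auto it = kChunks.find(f);
        if (it != kChunks.end()) out += it->second;
    }
    return out;
}
```

The shader manager would hash or sort the flag list and cache the compiled result so the same combination is only stitched and compiled once.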


Permutation
A single file holds all code with #ifdef’s to allow parts to be included or excluded.
The same flags are passed, but instead of deciding which files to load, they are simply used to create a list of #define’s at the top of the file. The preprocessor then strips away the areas that are not needed for that version of the shader.
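The host side of the permutation approach amounts to prepending one #define per flag before handing the uber-shader source to the compiler. A minimal sketch (MakePermutation is a hypothetical helper):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Build one permutation of an uber-shader: prepend a #define per flag
// so the shader preprocessor strips the unused #ifdef'd regions.
std::string MakePermutation(const std::string& uberSource,
                            const std::vector<std::string>& defines) {
    std::string out;
    for (const auto& d : defines)
        out += "#define " + d + "\n";
    return out + uberSource;
}
```

For example, MakePermutation(source, {"USE_FOG", "USE_NORMAL_MAP"}) yields the same source with those two defines on top, enabling the corresponding #ifdef blocks.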


L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid

One nice way to solve that nowadays is to use static branching. You write your code as if you actually had branches, but those are removed by the driver. E.g., for your forward shader you'd do something like

for (..; .. < STATIC_point_light_count; ..)
    Intensity += ShadeByPointLight(...);

for (..; .. < STATIC_spot_light_count; ..)
    Intensity += ShadeBySpotLight(...);

for (..; .. < STATIC_area_light_count; ..)
    Intensity += ShadeByAreaLight(...);

and let the compiler deal with it the best way.

It might cache shaders for common permutations, it might just fall back to dynamic branching if it doesn't matter, etc.

This can be slower, as dynamically branching shaders tend to use more registers and store temporaries that may not be needed, but it can also be faster if it saves the driver from switching between 1000 permutations and lets it use just one with a coherent dynamic branch.

I'm afraid this is the only good reason for which deferred shading currently makes sense: it's really the only viable alternative.

FYI, I got a bit further than you: I had some instancing and some automatic light management. Unfortunately, my renderer, being still on D3D9, ran out of temps (edit: uniforms) really fast, and it turned out to be very brittle. Even if those issues could be solved by moving to Shader Profile 4, I'm having trouble considering it robust enough for general use.

From my understanding of your Geometry class, I'd say I didn't have anything like it, albeit in the last iteration I had a thing I dubbed the "Vertex Value Affector stage", which still needs quite a bit more work on my side.

So, as a final statement, I'd say give up on it and go with the methods noted above.

edit: my fault really

Previously "Krohm"

Unfortunately, my renderer being still on D3D9 ran out of temps really fast and it turned out to be very brittle.

Shader Model 3 should allow you to use something like 32 temp registers (in contrast to around 12 for SM2), maybe that helps ;)

Sorry I meant uniforms.

Previously "Krohm"

