Node-based shader graph system for a deferred renderer

I am wondering how a shader/material system based on node blocks would work in a deferred renderer. Since in deferred rendering we use just one shader to render the screen quad, I see it as an ubershader automatically generated from all the custom shaders the user creates; based on the current pixel's material id, you choose the right shader function to take over the lighting. The thing is, if you have, let's say, 100 custom shaders created in the node-based shader editor, having a switch in the main shader function to choose the right entry point might be a little too much. I have seen that Unreal 4 does node-based materials, so what do you guys think about this topic?

I'm not sure what you're asking, really; a node-based shader graph is just a way to abstract the authoring of a shader. You don't execute the nodes, you execute the shader that is generated from the nodes in your graph.

For example, in your post you say:

"Since in deferred rendering we can use just a shader to render the screen quad."

Using a shader graph is exactly the same; the shader is simply generated from the graph rather than hand-written.
n!
Yes, you just said what I said. I was wondering what happens when you have hundreds of custom shader functions compiled into that ubershader and called using branching in the main shader function. Are there any ways to split this into some sort of multipass deferred rendering approach? And what are your thoughts on Unreal 4's node-based material system? I'm curious about implementation details.

No, I didn't say what you've just said.

Shader graphs do not, in general, spit out an uber-shader (they can, if that's how you implement it). You have different graphs for different things, and so you have different shaders.

I'm also not understanding your query about deferred shading; shader graphs can be used fine with any rendering approach. And you don't really use 'multi-pass' in deferred rendering, so I'm not sure what you're asking there, sorry. Perhaps you mean something else?

As for the U4 shading system, it's nice, but they have a lot of experienced developers working on it. The best you can do is look through their documentation; they aren't doing anything particularly special there (no offence meant to U4, it's just a [nice] shader graph!). Perhaps ask about something specific you're interested in?

As for implementation details, it's quite a big system for such a broad query. You would have a 'graph node' object with inputs and outputs, and the ability to connect them together. Then you walk the graph, say 'ah, this node warps the xy coords', and spit out the code for doing that. You'd also need code to manage the variables used in the shader, and the inputs (such as textures), etc. This is all done in the editor, with the final game just shipping the generated shaders.
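To make that concrete, here is a rough sketch of what a generator might emit for a tiny graph (a texcoord node feeding a 'warp xy coords' node feeding a texture sample). All the names and the warp formula are invented for illustration; the point is that each node contributes a statement and the generator manages the intermediate variable names:

Texture2D diffuseTexture : register(t0);     // bound via the graph's texture input management
SamplerState linearSampler : register(s0);

float4 main_ps(float2 texCoord : TEXCOORD0) : SV_Target
{
    float2 node0 = texCoord;                                     // emitted by the TexCoordNode
    float2 node1 = node0 + sin(node0.yx * 10.0) * 0.05;          // emitted by the WarpNode (warps the xy coords)
    float4 node2 = diffuseTexture.Sample(linearSampler, node1);  // emitted by the TextureSampleNode
    return node2;                                                // emitted by the OutputNode
}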

n!

I was talking about a graph that generates an ubershader from its nodes at authoring time, of course, based on shader code snippets or blocks. What I was saying is: when you have, for example, 100 shader graphs, each exposing several blocks connected in various ways with various constant values and data, you need to generate an ubershader, be it as one source file or several that get included. With 100 combinations there will be 100 automatically generated functions calling the blocks' shader code. So the end code will be big, and to choose the function for each pixel based on the material id you need a switch with 100 cases, so you can pick the right entry point to execute for that pixel.

Example:

float3 phong(....)
{ .... }

float fresnel(....)
{ .... }

float4 mtl001_main_ps(....)
{ float3 c = phong(....); .... fresnel(); .... return color; }

float4 mtl002_main_ps(....)
{ float3 c = phong(....); .... fresnel(); .... some other operations .... return color; }

// ...... and let's say 98 more auto-generated shader graph functions

float4 main_ps()
{
    int mtl_id = getMaterialId(IN.texcoord);
    switch (mtl_id)
    {
    case 1: return mtl001_main_ps(IN);
    case 2: return mtl002_main_ps(IN);
    // ....... and the next 98 shader function calls ....
    }
}

So, how can one handle a lot of shader functions without having that huge switch in the shader?

Hiya,

With a node-based shader system, you would not typically spit out one shader containing all permutations; you would generate one shader per graph.

Let's take as an example two basic shader graphs. One outputs a colored surface, the second outputs a textured surface.

Graph one looks like this (in a pseudo-representation):

VariableNode( "color", inputType:color)

OutputNode( color: "color" )

Graph two looks like:

TextureNode( type:2d, "diffuseTexture" )

VertexNode( "texCoords", inputType:vec2 )

OutputNode( color, { "diffuseTexture", "texCoords" } )

So here, two shaders will be generated, each with a single vertex and pixel shader. It is then up to you to provide a way of mapping the appropriate inputs, just as you must with a standard shading system.

So, each shader does not carry multiple permutations inside. Varying output is gained by changing the variables, not by selecting a different sub-shader. Each shader graph has only one logical path; it is just the variables that change.

In your last post, you wouldn't have mtl001_main_ps() and mtl002_main_ps(); you would have two different shaders, and the material just specifies which shader to use.
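For illustration, here is roughly what those two generated pixel shaders could look like in HLSL (the register assignments and names are invented for this sketch):

cbuffer MaterialParams : register(b0)
{
    float4 materialColor;   // fed by the VariableNode("color")
};

// Generated from graph one: colored surface
float4 graph1_ps() : SV_Target
{
    return materialColor;
}

Texture2D diffuseTexture : register(t0);    // from the TextureNode("diffuseTexture")
SamplerState linearSampler : register(s0);

// Generated from graph two: textured surface
float4 graph2_ps(float2 texCoords : TEXCOORD0) : SV_Target   // from the VertexNode("texCoords")
{
    return diffuseTexture.Sample(linearSampler, texCoords);
}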

n!

OK, you have to understand that I need just one single ubershader, because I am doing deferred shading over a quad. I don't need many shaders; you can't have more than one shader for that pass, only multiple entry-point functions called by the actual main shader function for each material id encountered. You can have a similar setup for various lighting formulas: call the lighting function based on the material id found at the current pixel (texel in our case, since we're drawing a screen quad).

In a node-based "shader" code generation system you can have a plethora of "shaders" with various operations inside them, and call the right function for a specific material id, but there is only one hardware shader which has all this code in it. As for your saying it's not deferred stuff: from what I understand, you're talking about a more fixed setup where the code is the same and only the material params differ from pixel to pixel. I'm not talking about that approach; I want total freedom in writing the pixel shader for each material id, and since we're using just one hardware shader to light/process that screen quad, I am trying to find a way to have "soft" shader code called by material id: many generated shader functions inside the big ubershader.

Anyway, I've figured it out now. It seems Unreal 4 is doing what I was describing (from a whitepaper I was just reading), and they hit a limitation on branching to the right shading function when the number of shading functions generated by a material graph is high, and also when the bound texture count is high (for when they do layered materials). FYI, read this: http://www.unrealengine.com/files/downloads/2013SiggraphPresentationsNotes.pdf , it's interesting.

You do not need a single uber-shader for deferred rendering, and yes, I'm already aware of the SIGGRAPH presentation you linked.

With a deferred renderer you will have multiple shaders; perhaps you could point out the bit in the document you think contradicts this?

You will have various shaders for your geometry rendering, and typically a shader for each light type you require, which retrieves its parameters from the gbuffer. The shaders used by your geometry, in turn, output the correct values into the gbuffer for the lighting of that pixel.
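As a rough sketch of that split (the gbuffer layout here is an assumption for the example, with world position stored directly to keep it simple), a point-light pass might look like:

Texture2D gbufferAlbedo   : register(t0);
Texture2D gbufferNormal   : register(t1);
Texture2D gbufferPosition : register(t2);   // world-space position, stored directly for simplicity
SamplerState pointSampler : register(s0);

cbuffer LightParams : register(b0)
{
    float3 lightPosWS;
    float  lightRadius;
    float3 lightColor;
};

float4 pointLight_ps(float2 uv : TEXCOORD0) : SV_Target
{
    float3 albedo = gbufferAlbedo.Sample(pointSampler, uv).rgb;
    float3 normal = normalize(gbufferNormal.Sample(pointSampler, uv).xyz * 2.0 - 1.0);
    float3 posWS  = gbufferPosition.Sample(pointSampler, uv).xyz;

    float3 toLight = lightPosWS - posWS;
    float  atten   = saturate(1.0 - length(toLight) / lightRadius);
    float  ndotl   = saturate(dot(normal, normalize(toLight)));

    return float4(albedo * lightColor * ndotl * atten, 1.0);   // blended additively, one draw per light
}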

If you are describing their material layering, that is a single shader that links two material graphs and interpolates between them based on a variable.

(edit) - Some implementations *will* have a material id (in the gbuffer) as you say; however, the number of permutations of this value will be very, very small, usually as few as two (metal and non-metal, perhaps with translucency). There will never be the large number of permutations you seem to be suggesting, since your gbuffer should contain most of the information necessary to achieve the different surface lighting results, and all your shaders do not get combined into one.
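To illustrate how small that branch stays (the encoding and the shading math here are simplified assumptions, not from any particular engine), a two-way material id in the lighting shader is just:

// Assumed encoding: material id packed into a gbuffer channel, 0 = non-metal, 1 = metal
float3 shade(float3 albedo, float3 normal, float3 lightDir, float3 viewDir, uint materialId)
{
    float ndotl = saturate(dot(normal, lightDir));
    float spec  = pow(saturate(dot(normal, normalize(lightDir + viewDir))), 32.0);

    if (materialId == 1)
        return albedo * spec * ndotl;                  // metal: albedo tints the specular, no diffuse
    return albedo * ndotl + 0.04 * spec * ndotl;       // non-metal: diffuse plus a small untinted specular
}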

n!

When you are composing the image from your various gbuffer components, you have one single shader, and that shader would contain those sub-shaders for the various materials. Of course you have multiple shaders to prepare your gbuffer, or other render targets for lights and such, but for the final composition pass you have a single shader.

I was suggesting taking that shader, with its 2-3 lighting solutions selected by material id, and generating it from a node-based graph. Nowadays you usually have several lighting options chosen by that material id, hardcoded in the shader code; what I am saying is that one could generate that code from a graph, to get totally custom lighting solutions based on a user-created graph (with simple ops: mul, add, div, uv rotator, etc.). This is done easily with forward rendering, since you can set a shader per object and render it with that custom generated shader, but in deferred you can only use one shader to compose the scene from the gbuffer, so you need to squeeze all of the custom shader code into the one hardware shader used on the screen quad that composes the scene.

Anyway, after some more thinking I've come to a conclusion: it's interesting to use this approach, which is almost like a RenderMan solution (full control over your shader code from a graph) in deferred, but one might be slowed down when choosing the right shader path based on material id, since you need to do that branching on the current pixel's material id. I'll get on with it, and if I remember this topic I'll post some screens and a post mortem :). Thanks for the brainstorming.

