AaronMK

Shading graphs and how they map to implementation in a game engine.

I have seen that many rendering packages expose shading graphs to artists. I hope I am using the term right; I am referring to the graphs that typically have a shader as a node, with inputs and outputs that can be chained together to create a final effect.

How are these integrated into a game engine in practice? How do they map to the implementation of the actual shaders that get compiled and run? It seems like these could get complex very quickly: either the number of shaders compiled to support all the permutations artists might create goes through the roof, or there is a lot of CPU/GPU ping-ponging and synchronization, waiting for the output of one stage of the chain to be ready for the next.

Thanks for any insight.


When a compiler compiles a high-level language like C or HLSL, it usually creates a tree-like structure of operations and their inputs that would look remarkably similar to the graphs you're talking about. In a sense, these graph-based languages are actually just duping the "programmer" into doing part of the compiler's job. For big programs, graph-based programming typically becomes unwieldy rather quickly and mixes poorly with stateful execution. However, shader programs typically perform a very focused function with a limited set of (conceptually) immutable state -- this makes them very suitable for graph-based programming, especially combined with the opportunity for real-time, visual feedback at each node, which makes experimentation and discovery intuitive even for those who are not classical programmers.

 

Once you have a graph, either assembled by a human or by a compiler, the task is simply to translate the nodes and connections into a series of linear instructions. This is easy because the graph defines the dependencies clearly, which leads to the order in which the operations must be performed. This is all compiled and run on the GPU as a shader, with no back-and-forth between the CPU and GPU between stages.
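As a minimal sketch of that linearization step (in Python, with a hypothetical graph and made-up node names), a topological traversal of the dependency edges yields an instruction order in which every node comes after its inputs:

```python
# Sketch: linearizing a shader graph by topological sort.
# The graph and its node names are invented for illustration.

def linearize(graph):
    """graph maps each node to the list of nodes it depends on.
    Returns an order in which every node appears after its inputs."""
    order, visited = [], set()

    def visit(node):
        if node in visited:
            return
        visited.add(node)
        for dep in graph[node]:
            visit(dep)          # emit inputs before the node that uses them
        order.append(node)

    for node in graph:
        visit(node)
    return order

# A tiny example graph: the final color depends on a lit term,
# which depends on a sampled texture and a normal.
shader_graph = {
    "uv":     [],
    "albedo": ["uv"],            # texture sample
    "normal": [],
    "lit":    ["albedo", "normal"],
    "output": ["lit"],
}

print(linearize(shader_graph))   # dependencies always precede their users
```

The resulting flat list is what gets translated into straight-line shader code and compiled once, which is why no CPU/GPU round-trips are needed between "stages" of the graph.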

 

Often, graph-based languages offer high-level nodes which represent common graph patterns -- say a Phong shading node -- really, these fancy nodes are just like function calls in an ordinary, text-based language. They're a vehicle to manage complexity and leverage code-reuse.
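To sketch that equivalence (Python, with invented node types and shader function names): a code generator can expand each high-level node directly into a call to a library function in the emitted shader source, exactly like a function call in a text-based language.

```python
# Hypothetical sketch: each node type emits one line of shader-like code.
# A high-level "phong" node expands to a single function call -- the same
# complexity-management and code-reuse vehicle as functions in text code.

NODE_EMITTERS = {
    "texture": lambda out, ins: f"float4 {out} = Sample({ins[0]});",
    "phong":   lambda out, ins: f"float4 {out} = Phong({ins[0]}, {ins[1]});",
}

def emit(node_type, output_name, input_names):
    """Emit one generated-code line for a node of the given type."""
    return NODE_EMITTERS[node_type](output_name, input_names)

print(emit("texture", "albedo", ["uv"]))
print(emit("phong", "color", ["albedo", "normal"]))
```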

 

Modularity and linking has traditionally been a problem with shaders, and over the years people have created bespoke macro-systems to create permutations, generated shader code at runtime, and other approaches. D3D 11.2 will offer dynamic linking of shader code to address this same problem -- lack of such linking was an inconvenience before, but it becomes an outright necessity as shader complexity and hardware capability grows.
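One of those bespoke macro-systems can be sketched like this (Python; the feature names are invented): each on/off combination of preprocessor defines becomes one compiled shader permutation, which is how the variant count explodes.

```python
# Sketch of macro-driven shader permutations. Feature names are
# hypothetical; each on/off combination yields one variant to compile.
from itertools import product

FEATURES = ["NORMAL_MAP", "SHADOWS", "FOG"]

def permutations():
    """Yield the #define block for every feature combination."""
    for bits in product([0, 1], repeat=len(FEATURES)):
        defines = [f"#define {name} {bit}" for name, bit in zip(FEATURES, bits)]
        yield "\n".join(defines)

variants = list(permutations())
print(len(variants))   # 2**3 = 8 variants from only 3 toggles
```

The exponential growth here is exactly why runtime linking of precompiled shader pieces is attractive as complexity grows.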


Different nodes in the graph correspond to GPU computation via shaders.

It works like post-processing: you apply some effect and put the result into the surface of a render-target texture.

Then you make that texture the input for the next pass, which applies another effect.

A graph works the same way: once the primary nodes are done, the next nodes can be processed, and the process repeats.

The output from the previous node is the input for the next node -- textures, for example.
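The ping-pong pattern described above can be sketched in Python, with lists standing in for render-target surfaces and placeholder functions standing in for the effects (all names here are invented for illustration):

```python
# Sketch of ping-pong rendering: each pass reads one "surface" and writes
# the other, then the roles swap. Lists stand in for render-target
# textures; the effect functions are arbitrary placeholders.

def blur(pixels):      # placeholder effect
    return [p * 0.5 for p in pixels]

def brighten(pixels):  # placeholder effect
    return [p + 1.0 for p in pixels]

def run_passes(scene, passes):
    src = scene
    dst = [0.0] * len(scene)    # second surface, reused every other pass
    for effect in passes:
        dst = effect(src)       # previous output is this pass's input
        src, dst = dst, src     # swap render targets for the next pass
    return src

print(run_passes([2.0, 4.0], [blur, brighten]))
```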

Edited by DigiHunter

