Not from a programming language/architectural point of view, but from a theoretical one - if it just automagically existed, what would it do...
I don't have time to dig up the exact references, but I remember reading that someone suggested complex RenderMan shaders (arguably where we want real-time shaders to end up) could be broken down into a lot of relatively simple passes... and the final RenderMan shader executed as a series of 10...20...50...N passes.
That's a nice idea, but in that context probably isn't real-time. Yet.
I do also remember reading that John Carmack cited the previous talk as, in retrospect, a defining statement:
There was an important paper that came out at SIGGRAPH a few years ago by someone at SGI.
He presented one real-time renderer and he presented something that showed the decomposition of Renderman shaders into multi-pass stuff that required floating-point and pixel stuff.
It was amusing because I remember people completely discounting that paper, which I think is going to be looked back at as one of the most seminal things in interactive graphics.
People were saying the Renderman shader was ridiculous - it took 500 passes to do this simple shader. People just hit this number - 500 passes, and clicked it out of their brain as not relevant.
But a pixel in Doom 3 may have 80 textures combined on to it.
- One of the many sources for this quote
Modern interactive graphics on consumer hardware is heading in that direction - in one form or another, many engines build up an image over multiple passes. It might not be the 20...50 (etc.) passes required for a RenderMan shader, but it's showing signs of going that way.
So, there are two areas that multi-pass rendering seems to revolve around - at least as far as my conceptual idea goes.
- Compose the initial scene rendering
This stage scales with the number of lights used in a scene. It is possible with shaders to perform multiple lighting calculations per pass, but that's more of an implementation detail than the theory. For each light we calculate its contribution to the scene - especially the shadow information. I see shadows as one of the most important parts of a modern interactive graphics engine.
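To make the idea concrete, here's a minimal sketch of per-light additive accumulation in plain Python - one "pass" per light, summed into an accumulation buffer. The light dictionary fields and the diffuse-only `shade()` helper are illustrative assumptions, not real engine or API code:

```python
def shade(pixel_albedo, light):
    # Hypothetical diffuse-only contribution from a single light.
    return tuple(c * light["intensity"] * light["color"][i]
                 for i, c in enumerate(pixel_albedo))

def render_pixel(pixel_albedo, lights):
    frame = [0.0, 0.0, 0.0]                   # accumulation buffer, starts black
    for light in lights:                      # one additive pass per light
        shadow = light.get("shadow", 1.0)     # 0 = fully shadowed, 1 = fully lit
        contribution = shade(pixel_albedo, light)
        for i in range(3):
            frame[i] += shadow * contribution[i]   # additive blend into the frame
    return frame
```

In hardware this is just additive frame-buffer blending, with the shadow term typically coming from a stencil or shadow-map test - which is also why being able to accumulate values above 1.0 matters, as discussed below.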
- Perform post processing effects
Now this stage could be optional in some respects, but for my conceptual graphics engine I'd have to demand high-dynamic-range post-processing. Maybe not the overdone "new lens flare" bloom effects, but definitely a higher dynamic range of colours adjusted to a correct exposure/balance. With the power available to us in consumer hardware, I see this as a basic forward-looking feature. HDRI rendering, in both "true" and "fake" implementations, works on the basis of multiple passes.
So you'd end up with a set of initial passes that composite an HDRI image together. The ability to store HDR values is a very important concept for the multi-pass additive lighting (which I used in my D3D9 article here). The engine would then take this HDR image, split out the LDR and HDR values, perform the necessary filters on them, measure luminance and determine the adjustments required, then composite it all back together to give you a final image.
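The "measure luminance and adjust exposure" step can be sketched like this - a log-average luminance estimate followed by a simple L/(1+L) compression curve. The `key` constant and helper names are illustrative assumptions for the sketch, not the engine's actual code:

```python
import math

def luminance(rgb):
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b   # Rec. 709 luma weights

def tone_map(hdr_pixels, key=0.18, eps=1e-4):
    # 1. Measure scene luminance. The log average stops a handful of very
    #    bright pixels from dominating the exposure estimate.
    log_avg = math.exp(sum(math.log(eps + luminance(p)) for p in hdr_pixels)
                       / len(hdr_pixels))
    # 2. Scale the image to the chosen "key" exposure, then compress with
    #    L/(1+L) so arbitrarily bright HDR values map smoothly into [0, 1).
    ldr = []
    for p in hdr_pixels:
        scaled = tuple(key / log_avg * c for c in p)
        ldr.append(tuple(c / (1.0 + c) for c in scaled))
    return ldr
```

In a real engine each of these steps would itself be a full-screen pass (downsample to measure luminance, then a final combine) - which is exactly the point: simple individual passes, composited into something much richer.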
Each of the involved steps is relatively simple, but the result as a whole would be phenomenal. Such is the power of a multi-pass approach.
I've run out of time tonight, but if anyone wants me to cover the actual rendering process with a couple of diagrams send me a private message or leave a comment below [grin]
I hope that was of interest!