I have read the original thread again; there were things I had forgotten. I have also re-read my own questions from back then - some of them are still unanswered...
I totally agree that everything is possible when the "shaders", implemented as plugins, can hook into many points of the art pipeline: preprocessing geometry, even having many shaders share the same geometry pieces on disk (per component, if their requirements intersect), changing the rendering process and pipeline completely - to the point of interacting with the HSR process, adding new geometry to the scene (and to physics), and adding new properties/GUI/code to the editor. Then, indeed, everything is possible.
I can't evaluate the complexity of such a system - I have never seen or heard of one (even in action), nor have I ever come close to designing one.
It is just way above my current design level... :(
But such a plugin system is far from what the term "shader DLL" suggests, imo. This is a clarification for all the people who dive into this approach without realizing that they are writing mini engine plugins, not shaders... ;)
And, by the way (I have been away from this forum for a long time lately) - has anybody successfully implemented such a system as you described, and used it with success? I just don't know of any, and I would be very interested to hear that it has been built by somebody else and works OK...
Where are all the fans who participated in the famous "Material/Shader..." thread? Only python_regious is here.
I'd like to hear what they have done; it would be an interesting validation of this design.
Now, my concerns, in summary:
Each system is born around some idea, and that idea both limits the resulting system and gives it wings at the same time.
The idea of pluggable render features is to describe pieces of space abstractly, then resolve that description into a set of registered interfaces that can create/render the thing.
On a primitive level, we can construct some shading techniques and later combine them into more complex effects (let's forget about grass/light shafts for a little while).
So, my concerns:
1. We have fixed shaders/shading procedures -> abstract effect decomposition means more shaders, and at the very lowest level more ps/vs combinations to execute an effect.
Take Diffuse + Specular + Bump, for example - if they are distinct effects, combining them leads to more than one pass.
Of course we could put all of that into a single shader - well... that's more work, creating all those combinations. And that is a thing almost every current engine tries to avoid (and succeeds in avoiding).
The example is quite plain, but the point is: building shaders from text, then compiling the result depending on what we want from the shader, gives a better ps/vs decomposition than composing from already-compiled shaders.
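To make this concrete, here is a minimal sketch of the text-composition approach I mean (everything in it is hypothetical, just an illustration): each effect aspect contributes a source fragment, and one combined pixel shader is compiled per requested combination, so Diffuse + Specular + Bump still costs a single pass:

    #include <string>
    #include <vector>

    // One fragment per effect aspect; each snippet accumulates into 'color'.
    struct ShaderFragment {
        std::string name;
        std::string code;   // HLSL-like snippet, e.g. "color += ComputeSpecular(i);"
    };

    // Concatenate the requested fragments into one pixel shader body;
    // the resulting text is then compiled once at runtime.
    std::string ComposePixelShader(const std::vector<ShaderFragment>& parts)
    {
        std::string src = "float4 main(PSInput i) : COLOR {\n"
                          "    float4 color = 0;\n";
        for (const ShaderFragment& f : parts)
            src += "    // " + f.name + "\n    " + f.code + "\n";
        src += "    return color;\n}\n";
        return src;   // one combination -> one compile -> one pass
    }

Composing Diffuse + Specular + Bump this way yields a single ps, whereas decomposing into three already-compiled shaders means three passes, or hand-writing a merged shader for every combination.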
2. How do shaders adapt to changing conditions - lights, fog, scene lighting (day <-> night can change the lighting model a bit), changing visual modes (night vision, etc.)?
I read the explanation about RPASS_PRELIGHT. But how does that pass execute the actual lighting, and who picks the right set of shaders -
the same decomposition logic? The lighting conditions can vary a lot - from a single shadowmapped directional plus 3-4 point lights, to many point+spot lights and
some diffuse cbm-s at night.
Again, I see either many passes or a lot of work to implement that in a pluggable system.
A text-composing system handles that quite nicely and easily - a sketch follows.
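Again, purely a hypothetical sketch of what I mean by "nicely and easily": the text composer can key a cache of compiled shaders on the current light configuration, so a new mix of lights costs one recompile instead of a multi-pass fallback (ComposeLightingSource and CompileFromText are made-up stand-ins):

    #include <map>
    #include <string>
    #include <tuple>

    // Hypothetical description of the lights affecting one draw call.
    struct LightConfig {
        int  numDirectional, numPoint, numSpot;
        bool shadowMapped, nightVision;
        bool operator<(const LightConfig& o) const {
            return std::tie(numDirectional, numPoint, numSpot, shadowMapped, nightVision)
                 < std::tie(o.numDirectional, o.numPoint, o.numSpot, o.shadowMapped, o.nightVision);
        }
    };

    typedef unsigned ShaderHandle;
    std::string  ComposeLightingSource(const LightConfig& cfg); // builds the shader text
    ShaderHandle CompileFromText(const std::string& src);       // runtime compile

    std::map<LightConfig, ShaderHandle> g_cache;

    ShaderHandle GetLightingShader(const LightConfig& cfg)
    {
        auto it = g_cache.find(cfg);
        if (it != g_cache.end())
            return it->second;                      // already compiled for this mix
        ShaderHandle h = CompileFromText(ComposeLightingSource(cfg));
        g_cache[cfg] = h;
        return h;
    }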
3. Next, how is a single piece of geometry (a single piece visually, not in memory) rendered with multiple shaders - whether due to lighting conditions or user-specified parameters
like the object being on fire, damaged, transparent, etc. - i.e. gameplay changes requiring a shader change?
How is the geometry shared between shaders?
Because at heart this system relies on every single pluggable rendering feature (aka "shader") prebuilding its geometry for itself.
But geometry needs to be rendered by many shaders - how do they share it, or don't they?
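What I would expect - purely a guess at how sharing could work, all names hypothetical - is a single owner for the vertex data plus per-shader bindings onto it, instead of each plugin prebuilding its own copy:

    #include <memory>
    #include <vector>

    struct VertexBuffer { /* the GPU-side vertex data, owned once */ };

    // Each shader declares which vertex components it needs from the mesh.
    struct ShaderBinding {
        unsigned shaderId;
        unsigned componentMask;   // e.g. POSITION | NORMAL | TANGENT
        bool     active;          // toggled by gameplay: on fire, damaged...
    };

    struct SharedMesh {
        std::shared_ptr<VertexBuffer> vertices;  // one copy in memory
        std::vector<ShaderBinding>    bindings;  // many shaders, same geometry
    };

A gameplay change ("on fire") then just flips which bindings are active; nothing about the geometry itself gets rebuilt.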
4. You said that adding new functionality like grass, light shafts and water is possible, right? How do your plugins interact with the physics and gameplay side
of the application - what interface is there to allow that?
Objects falling into water can produce waves around them - is that possible with this approach?
And how are objects managed (fading when far away, switching LODs, being recreated when they come near the viewer, etc.)?
I saw only caching based on visibility, and possibly a shader precaching the needed LOD in its cache-filling procedure. But a game engine needs more, way more.
I suppose there are functions that monitor each "shader" instance and let it adapt to the scene...? Like invalidating particles every N milliseconds, for example,
and killing particles that have not been in view for M milliseconds...
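The kind of interface I am asking about might look something like this (a hypothetical sketch, not something from the original thread): the gameplay/physics side broadcasts events, and the render-feature plugins subscribe to the ones they care about:

    struct Vec3 { float x, y, z; };

    // Hypothetical bridge between gameplay/physics and render plugins.
    struct IWorldEvents {
        virtual ~IWorldEvents() {}
        // The water plugin would spawn a wave here.
        virtual void OnObjectEnteredWater(int objectId, const Vec3& pos, float speed) = 0;
        // Periodic hook for LOD switching, fading, particle expiry, etc.
        virtual void OnSceneTick(float dtMs, const Vec3& viewerPos) = 0;
    };

    // The engine would keep the subscribed plugins and call them per frame:
    //     for (IWorldEvents* p : plugins) p->OnSceneTick(dt, viewerPos);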
5. How can artists preview the results of their work? They create some geometry, assign some properties and want to see the result - how is that done?
How fast will it be, and how accurately can they tune things?
Because if we are talking about competitiveness, this is where it lies.
6. How precisely can we control the quality of the shading system on different hardware? For example, switching off gloss/reflectivity on the terrain, but keeping it on the tanks, on an 8500?
The logic here is that we tune shaders based on our knowledge of how the landscape and the tanks should look (the tanks being quite important in this example). I mean, this is a decision
based on our game world - and we don't want the shader-resolving system to dictate the fallbacks; our artists should.
How (easily) can this be done in your system - maybe some practical approach exists? A sketch of what I mean follows.
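As an illustration of the artist-driven control I mean (hypothetical, not a proposal for your system): a per-material-class quality table, authored by artists for each hardware tier, which the shader resolver must obey instead of choosing fallbacks on its own:

    #include <map>
    #include <string>
    #include <utility>

    enum class HwTier { Low, R8500, High };   // made-up tiers

    struct QualityOverride { bool gloss; bool reflectivity; };

    // Authored by artists per material class and hardware tier:
    // on an 8500 the terrain drops gloss/reflection, the tanks keep both.
    std::map<std::pair<std::string, HwTier>, QualityOverride> g_overrides = {
        { { "terrain", HwTier::R8500 }, { false, false } },
        { { "tank",    HwTier::R8500 }, { true,  true  } },
    };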
7. The effect description is given from outside, and once per object - how can we tune our shaders to accommodate *very* different shading scenarios -
sunlight outside, and projected sunbeams inside buildings, for example? Only the SG knows where to apply each - how will that knowledge result in proper
shading? This is more of a pipeline example, because it involves inside/outside volumes, projection textures and some logic in the renderer, possibly custom
for a given game.
8. How are RTs shared between shaders? Because the shaders (again, because of the Idea :)) have a very egocentric view of the resources (unless this is solved
by a system above them). I'm interested in who allocates the global shadowmap for PSM shadow mapping, for example, and who computes the matrix for it.
It will probably later be used by many, many shaders at will, right?
A shadowmap RPASS_ is called, but which of all these shaders will compute the matrix? Or is that the tools' responsibility? If so, how can we introduce a new shadowmap
technique, or even more than one, with conditional use of the best of them (based on the view), for example?
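The arrangement I would guess at (again a hypothetical sketch) is a resource broker sitting above the shaders: one place allocates the shadowmap RT, the active shadow technique publishes its matrix once per frame, and all the other shaders only look it up by name:

    #include <map>
    #include <string>

    struct Matrix4      { float m[16]; };
    struct RenderTarget { int width, height; /* GPU handle... */ };

    // Owned by the renderer, not by any single shader plugin.
    class SharedRTManager {
        std::map<std::string, RenderTarget> targets_;
        std::map<std::string, Matrix4>      matrices_;
    public:
        RenderTarget& Acquire(const std::string& name, int w, int h) {
            auto it = targets_.find(name);
            if (it == targets_.end())
                it = targets_.emplace(name, RenderTarget{ w, h }).first;
            return it->second;   // every caller gets the same RT
        }
        // The active technique (PSM today, something better tomorrow,
        // or the best of several per view) publishes the matrix...
        void Publish(const std::string& name, const Matrix4& m) { matrices_[name] = m; }
        // ...and the many consuming shaders only read it.
        const Matrix4* Find(const std::string& name) const {
            auto it = matrices_.find(name);
            return it == matrices_.end() ? nullptr : &it->second;
        }
    };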
9. Shadow volumes - who builds them, and how? Can they be preprocessed and stored with the data...? Same concern as above, except that shadow
volumes need far more data to be preprocessed and stored (for static lighting that uses them), and can involve some really heavy processing over whole pieces of the level
- how is that connected to the shaders/rendering features?
And how would your system adapt to a scheme where we don't render a cbm for every object, but instead use the nearest precomputed cbm, with many of them
spread throughout the level (HL2-style)?
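The HL2-style scheme I mean, as a hypothetical sketch: cubemaps are baked at hand-placed probe points at level-build time, and at runtime each object simply binds the nearest one instead of rendering its own:

    #include <vector>

    struct Vec3 { float x, y, z; };

    struct CubemapProbe {
        Vec3     pos;
        unsigned texture;   // precomputed at level-build time
    };

    // A linear scan is enough for illustration; a real level would go
    // through the visibility structure / BSP leaves instead.
    unsigned NearestCubemap(const std::vector<CubemapProbe>& probes, const Vec3& p)
    {
        float best = 1e30f;
        unsigned tex = 0;
        for (const CubemapProbe& c : probes) {
            float dx = c.pos.x - p.x, dy = c.pos.y - p.y, dz = c.pos.z - p.z;
            float d = dx*dx + dy*dy + dz*dz;
            if (d < best) { best = d; tex = c.texture; }
        }
        return tex;
    }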
10. Can these plugins also be plugged into the editor and the art pipeline, to process/create streaming content? If they can, what is the basic idea of the interface, and how do
they share that data (or communicate with each other to form the final data layout of the level)?
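For instance, the tool-side half of such a plugin might look something like this - purely hypothetical, this is exactly what I am asking about:

    #include <string>
    #include <vector>

    // Hypothetical editor/pipeline-side plugin interface.
    struct ILevelBuildPlugin {
        virtual ~ILevelBuildPlugin() {}
        // Names of the data chunks this plugin needs from other plugins.
        virtual std::vector<std::string> InputChunks() const = 0;
        // Consume those inputs, emit this plugin's chunk of the level layout.
        virtual std::vector<unsigned char> BuildChunk(
            const std::vector<std::vector<unsigned char>>& inputs) = 0;
    };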
11. You said in the original thread: "By avoiding to touch the core, you also avoid breaking other parts of your code as you add new features. You don't need knowledge about the engine
internals either, everything runs over standarized interfaces."
So - can you or can't you change the rendering pipeline as radically as introducing predicated rendering? And what are these standardized interfaces that allow you
to do so?
12. Shaders are dropped from use by the system if they can't be rendered on the current hardware. But some shaders have multiple passes, like the shader used in the
reflective/refractive water example. A valid fallback could be to drop just the refraction, for example. How do we do that in a system where the whole shader gets
dropped, together with all its passes (because it is registered in the system as a single piece)?
It is more work to provide shaders that can fall back on just one or two aspects of the top-level shader.
I mean - a text-based shader system can solve that quite naturally, as sketched below.
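What "naturally" means here, again as a hypothetical sketch: validate pass by pass against the hardware caps and keep whatever subset survives, instead of the all-or-nothing drop:

    #include <vector>

    struct HwCaps { int psVersion; };

    struct Pass {
        int  requiredPsVersion;
        bool optional;   // e.g. refraction: nice to have, droppable
    };

    // Returns the passes that actually run on this hardware; the whole
    // shader is rejected only when a *required* pass is unsupported.
    bool SelectPasses(const std::vector<Pass>& passes, const HwCaps& caps,
                      std::vector<Pass>& out)
    {
        for (const Pass& p : passes) {
            if (p.requiredPsVersion <= caps.psVersion)
                out.push_back(p);
            else if (!p.optional)
                return false;   // mandatory pass missing -> drop the shader
        }
        return true;   // e.g. the water keeps reflection, loses refraction
    }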
So, I just want to clarify this system, because it seems to be quite a bit more complex and versatile than what was shown in the original Material/Shader thread, if it can override the rendering pipeline like you said.
Thanks, if you even read this to the end :)