shader system implementation

Quote:Original post by Zemedelec
I was talking about the shader-into-DLL approach,

Me too. The plugin shader concept is fully able to do what you described.

Quote:Original post by Zemedelec
As for DLL updating as is, a rant:
Software that does that and has a plugin structure has serious requirements driving it - it is often developed by large teams that create/update parts independently.

And games aren't?

Quote:
And for other concerns, like stability: none of that applies to the typical game scenario; games, imho, are not such big software.

Excuse me, but this made me laugh! Look, the software industry, and the game industry even more so, is a very competitive market. Budgets are becoming larger every year, and the code complexity of games increases. But the timescale to finish a game gets shorter and shorter. The market forces this.

In order to survive, you need to find innovative solutions to increase both quality and productivity. Of course, you could code a game the "old school way", with hardcoded and manually optimized render paths for each effect and each target hardware. It would probably even take less time than creating a flexible plugin-based approach, if starting the latter from scratch. So you sell your game, and everything is nice and fine - until a year or two later, when you need to release the next game. Unfortunately, hardware has changed a lot, so you need to completely rewrite your hardcoded engine: new effects, new hardware features, shifted bottlenecks, new shader structures. If, however, you invested the time into the plugin system, updating your engine to the newest standards is a breeze, without even sacrificing backwards compatibility for older hardware.

Reusability is the key word. Today's and tomorrow's development must target reusable frameworks that are easily extendable and scalable over time, maybe even by completely different development teams. Think third-party licensing, for example. With your hardcoded solutions, you won't go anywhere in the future, especially not from a financial point of view. Until you have adjusted your hardcoded shader paths to the new hardware requirements, your competitor has updated a few DLLs (or static libs) and is already selling his brand new eye-candy-laden game.

Quote:
If we target system complexity, that could be a good point to start from... :)

Don't underestimate the complexity of a modern game.

Quote:
My personal decision would be not to trade the design/implementation time for a pluggable system that can update everything, for the sake of uploading 1.5M more. I would go for the stable old (KISS) solution... but it's just me, yes.

It's not about file size, it's about competitiveness and scalability.

Quote:
Can't see how this is more predictable and easier to develop for a typical, not shader-intensive game.

What time are you living in? Typical not-shader-intensive games? All new 3D engines targeting DX9/10 cards, Xbox 360, PS3, etc. practically drown in shaders! The time of not-shader-intensive games is long over.

Quote:
For me, it's like having a great many render paths and being unable to guarantee the visual quality in any one of them. Maybe it is suitable for other software, but for games - I can't agree.

I think I have the better argument here: I have actually had such a system running on a commercial basis for a couple of years now :) And it works very well. No, it might not be for a game - but our software has very similar requirements to a very high-end game on the graphical side. In fact, you could probably turn the application into a game quite easily, if you have the artwork, change the interface and add AI.

The current system might not be what we would like to see in the medium-term future (as I mentioned above, we're looking more into meta shaders), but we will certainly not go back to the stone age of hardcoded render path graphics development.

My suggestion: just try it out before bashing it. You might be surprised at the extreme flexibility it can offer you.
Quote:Original post by Yann L
Me too. The plugin shader concept is fully able to do what you described.


So, DLL-based shaders are able to interact with the engine to the point of adding new features to the world, like grass, light shafts, etc.?
Or did I miss something?

Quote:Original post by Yann L
And games aren't?


Most games aren't. *Some* of the licensable *engines* are. But their business model simply requires it, hands down.

Quote:Original post by Yann L
In order to survive, you need to find innovative solutions to increase both quality and productivity. Of course, you could code a game the "old school way", with hardcoded and manually optimized render paths for each effect and each target hardware. It would probably even take less time than creating a flexible plugin-based approach, if starting the latter from scratch. So you sell your game, and everything is nice and fine - until a year or two later, when you need to release the next game. Unfortunately, hardware has changed a lot, so you need to completely rewrite your hardcoded engine: new effects, new hardware features, shifted bottlenecks, new shader structures. If, however, you invested the time into the plugin system, updating your engine to the newest standards is a breeze, without even sacrificing backwards compatibility for older hardware.


Emotions aside - where did I say that "my" approach hardcodes anything anywhere...?
I suggested (above on this page) that it is enough for a shader system to be data-driven, without becoming plugin-based and code-driven. It can adapt to different hardware and different scene requirements quite nicely. It can declare & use unique resources like its own textures quite well.
Currently my implementation cannot declare new, complex vertex declarations where things are packed in unusual ways, but I can't see how your system will do that either - creating very special geometry like grass/clouds/particles, where the vertices have a very special format beyond (texcoordN, colorY, ...).

Quote:Original post by Yann L
Reusability is the key word. Today's and tomorrow's development must target reusable frameworks that are easily extendable and scalable over time, maybe even by completely different development teams. Think third-party licensing, for example. With your hardcoded solutions, you won't go anywhere in the future, especially not from a financial point of view. Until you have adjusted your hardcoded shader paths to the new hardware requirements, your competitor has updated a few DLLs (or static libs) and is already selling his brand new eye-candy-laden game.


Again, I fail to see where I presented my solution as hardcoded... :)
And, by the way - have you seen some of the licensable engines that have leaked? Those of them that are based on games have quite a lot of source that is neither plugin-based nor very clean.

Quote:Original post by Yann L
Don't underestimate the complexity of a modern game.


I don't. The complexity of games is high, but it is distributed very widely across many, very different components.
As for code size, it rarely comes even close to the complexity of modeling packages, for example.


Quote:Original post by Yann L
What time are you living in? Typical not-shader-intensive games? All new 3D engines targeting DX9/10 cards, Xbox 360, PS3, etc. practically drown in shaders! The time of not-shader-intensive games is long over.


I'm talking about something like 30-40 shaders, covering new and old hardware (not counting the new consoles, which I have yet to see).
Such a quantity of shaders is quite enough for a very large number of games, with a reasonably fixed lighting scheme. And I know games that used fewer and still look amazing - take World of Warcraft, for example.

Quote:Original post by Yann L
I think I have the better argument here: I have actually had such a system running on a commercial basis for a couple of years now :) And it works very well. No, it might not be for a game - but our software has very similar requirements to a very high-end game on the graphical side. In fact, you could probably turn the application into a game quite easily, if you have the artwork, change the interface and add AI.


I never said I don't believe in the creation/existence of such a system, nor that it is flawed in some way. I personally tried to design and implement a similar system, maybe 2 years ago. Now I'm not quite sure that it's worth the effort and iterations to fine-tune it.

Quote:Original post by Yann L
My suggestion: just try it out before bashing it. You might be surprised at the extreme flexibility it can offer you.


I'm thinking about enhancing our system now, but in a slightly different way.
My arguments against such a system?
- People who hypothetically licensed our engine can add new shaders/effects quite easily - plugging in something new is straightforward. Touching the source is needed only when new vertex declarations must be introduced.
- I need to make it easy to develop, and thus competitive, by building a strong and efficient art pipeline: making artists happy, letting them touch and modify shaders at will, and making it as intuitive as possible. An artist can work wonders even with just blending states; a programmer can rarely make something beautiful, even with SM3.

So the main research area was opening the shaders up to the art pipeline and offloading the pixels to the people they belong to - the artists... :)
The UE3 shader editor is a good example of the new trend.

P.S.: One thing I forgot to comment on - you said "shifted bottlenecks, new features".
HW generations don't change that quickly, and adapting to them is best done in the core engine itself - I can't imagine how such a tiny little thing as a shader (in the end, the shading scheme of a surface) can adapt itself to something like predicated rendering, for example. It would require (a) redesigning some subsystems of the rendering engine, or (b) designing the system already knowing what the future will bring. I.e. new generations always tend to cause rewriting/redesigning, as reality shows with many, many examples.
Quote:Original post by Basiror
Now let's say you simply render your scene with pure vertex arrays and basic OpenGL lighting, so the shader above fulfills your needs
...
however, the author wants to use dot3 bump mapping, so he has to tell the renderer what he needs


Try one more example - grass that is represented in-game as a 2D array containing density/type, where you (possibly) need to pack that into small, packed vertices that hold things like the vertex offset from the center of the grass quad, or sin/cos values packed into the .w components of positions for particles.
How will these be described?
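To make the packing concrete, roughly the kind of vertex I mean (just a sketch, the field names are invented):

// Sketch of a packed grass vertex (hypothetical layout): the whole quad shares
// one anchor point, each corner stores only a small quantized offset, and the
// sin/cos animation phase goes into the otherwise unused .w component.
struct GrassVertex
{
    float         anchorX, anchorY, anchorZ;  // center of the grass quad
    float         phase;                      // sin/cos phase packed into .w
    unsigned char offsetX, offsetZ;           // corner offset, quantized to 8 bits
    unsigned char density;                    // density value from the 2D array
    unsigned char type;                       // grass type from the 2D array
};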

Quote:Original post by Basiror
the TBN needs to be calculated ....


The TBN isn't something you'll want to calculate at load time. It causes vertex splits, making the mesh (slightly) less optimal - i.e. after TBN computation you would want to re-optimize the mesh, which makes TBN computation a preprocess step.

Even more important: developing for consoles and building streaming engines means load-in-place resources - read directly into memory, without any precious time for *any* preprocessing.

So if a design forces you to recompute at load time such obvious preprocess-stage things as the TBN, it is not such a good design.
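Just to illustrate what I mean by preprocessing and load-in-place (a sketch only, the layout is hypothetical):

#include <cstddef>
#include <cstdio>
#include <vector>

// The tangent frame is computed once in the asset pipeline and stored with the
// vertex, so the runtime never recomputes the TBN or re-splits vertices at load time.
struct PreprocessedVertex
{
    float px, py, pz;      // position
    float nx, ny, nz;      // normal
    float tx, ty, tz, tw;  // tangent, handedness in .w to rebuild the bitangent
    float u, v;            // texture coordinates
};

// The file is laid out exactly as the vertex buffer expects it, so loading is
// one straight read with no per-vertex processing.
std::vector<PreprocessedVertex> loadVertices(const char* path, std::size_t count)
{
    std::vector<PreprocessedVertex> verts(count);
    if (FILE* f = std::fopen(path, "rb"))
    {
        std::fread(verts.data(), sizeof(PreprocessedVertex), count, f);
        std::fclose(f);
    }
    return verts;
}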

P.S.: Also, it is good to separate the shader from the direct description of its instances - texture names, which will be unique for each client of that shader, must reside somewhere else...

P.S.2: And think about wanting to render one mesh with many shaders (at one time, and maybe changing during gameplay) - what will you load, how will you process that mesh, and how many instances will you create?
Quote:Original post by Zemedelec
So, DLL-based shaders are able to interact with the engine to the point of adding new features to the world, like grass, light shafts, etc.?
Or did I miss something?

No offense, but I would really suggest you try to understand the system we're talking about before discussing its supposed shortcomings.

To answer your question: of course the system can add these effects - that's the whole idea of a plugin system! All effects in our current engine - light shafts, grass, procedural vegetation, parametric terrain, water, fire, clouds, atmosphere, halos, billboards, fractals, and so on - are exclusively rendered through plugin shaders. I even added several raytracing modules as shaders (for a test, because they were horribly slow ;), even though the underlying rendering model is completely different.

You seem to think that the plugin architecture merely mimics a kind of .FX file in code. Well, that would be rather stupid, wouldn't it? Instead, it contains pluggable micro render cores.

As I said before: a shader is more than just a piece of GLSL or HLSL code. It's a system that describes the visual appearance of an object or effect. A shader can generate geometry and modify it. It can read from, create and destroy light sources. It can apply animation, LOD systems, or evaluate procedural and fractal geometry.
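To give a rough idea of the shape of such a micro render core - this is only a sketch for illustration, not our actual interface, and every name in it is made up:

// Hypothetical plugin-side interface. Each plugin DLL would export a factory
// returning one of these "micro render cores".
struct EffectDescription;   // abstract description of the requested effect
struct HardwareCaps;        // capabilities of the current GPU
class  GeometryCache;       // engine-side geometry/vertex cache
class  SceneObject;
class  RenderContext;

class IShaderPlugin
{
public:
    virtual ~IShaderPlugin() {}

    // Negotiation with the engine: can this plugin resolve the requested
    // effect on the current hardware, and at what priority?
    virtual bool canRender(const EffectDescription& effect,
                           const HardwareCaps& caps) const = 0;

    // Called when the engine caches an object using this shader: the plugin
    // may generate or convert geometry, create light sources, set up LOD, etc.
    virtual void fillCache(GeometryCache& cache, const SceneObject& object) = 0;

    // Called once per frame for each pass this plugin has requested.
    virtual void renderPass(RenderContext& context, int passId) = 0;
};

// Exported C factory, so the engine can instantiate the plugin without
// knowing its concrete type.
extern "C" IShaderPlugin* createShaderPlugin();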

I think we're really talking about two completely different systems here.

Quote:Original post by Zemedelec
I suggested (above on this page) that it is enough for a shader system to be data-driven, without becoming plugin-based and code-driven. It can adapt to different hardware and different scene requirements quite nicely. It can declare & use unique resources like its own textures quite well.
Currently my implementation cannot declare new, complex vertex declarations where things are packed in unusual ways, but I can't see how your system will do that either - creating very special geometry like grass/clouds/particles, where the vertices have a very special format beyond (texcoordN, colorY, ...).

It goes far beyond the vertex format. This is just a minor detail, and of course a plugin-based approach can generate and convert between any vertex formats you can imagine. We even use it to decompress large amounts of vertex data through zlib on the fly, within a shader! Try to do that with a data-driven approach...
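The zlib call itself is nothing exotic, by the way - the cache-fill step of such a plugin could do something roughly like this (a sketch; the surrounding buffer handling is invented):

#include <zlib.h>
#include <vector>

// Inflate a zlib-packed vertex block while filling the geometry cache.
// 'packed' comes from the resource file; 'rawSize' is assumed to be stored
// next to it in the file header (hypothetical layout).
std::vector<unsigned char> decompressVertexBlock(const std::vector<unsigned char>& packed,
                                                 uLongf rawSize)
{
    std::vector<unsigned char> raw(rawSize);
    uLongf destLen = rawSize;
    if (uncompress(raw.data(), &destLen, packed.data(),
                   static_cast<uLong>(packed.size())) != Z_OK)
        raw.clear();                // decompression failed - the caller falls back
    else
        raw.resize(destLen);
    return raw;
}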

Maybe I don't really understand what you're doing either, so please correct me if I'm wrong, but your system sounds a lot like a Quake 3-style engine to me. Sure, that works. But is it ready for the future? Nope.

I agree that a full plugin system is a poor choice for a beginner, as the complexity to implement the framework is overwhelming. But for an advanced amateur (and of course for the professional developer), this will definitely pay off. It becomes more and more difficult for small businesses or indie game developers to keep up with technical developments in the hardware sector. A plugin based system can make this much, much easier.

Quote:
Such a quantity of shaders is quite enough for a very large number of games, with a reasonably fixed lighting scheme. And I know games that used fewer and still look amazing - take World of Warcraft, for example.

We seem to have a different definition of "amazing" ;)

Quote:
HW generations don't change that quickly, and adapting to them is best done in the core engine itself

No, it isn't. That's pretty much the worst approach there is.

Quote:
- I can't imagine how such a tiny little thing as a shader (in the end, the shading scheme of a surface) can adapt itself to something like predicated rendering, for example.

As I said, please read up on the system again before making incorrect assumptions. We are not talking about a simple surface description here!

Quote:
It would require (a) redesigning some subsystems of the rendering engine,

That's exactly what the micro render cores do. Divide and conquer - you add features as they come in. Oh look, I read about this new displacement mapping shader in a research paper a few days ago. I would just write it as a plugin, compile it to a DLL, and copy it into my engine's plugin directory. And voilà, that's it. Even if that shader completely modifies the standard render pipeline - because in my approach, there is no standard pipeline!

By not touching the core, you also avoid breaking other parts of your code as you add new features. You don't need knowledge of the engine internals either; everything runs over standardized interfaces. So new effects (even those that would require a substantial modification of the render pipeline in your system) can be added without hassle, by several different people, or be contributed by third parties.
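Mechanically, the "copy it into the plugin directory" part really is that simple - scanning the directory at startup can look roughly like this (a Win32 sketch with error handling stripped; the exported factory name is invented):

#include <windows.h>
#include <string>
#include <vector>

class IShaderPlugin;                                // the plugin-side interface
typedef IShaderPlugin* (*CreatePluginFn)();

// Load every DLL in the plugin directory that exports the (hypothetical)
// createShaderPlugin factory, and collect the micro render cores it creates.
std::vector<IShaderPlugin*> loadShaderPlugins(const std::string& directory)
{
    std::vector<IShaderPlugin*> plugins;
    WIN32_FIND_DATAA findData;
    HANDLE find = FindFirstFileA((directory + "\\*.dll").c_str(), &findData);
    if (find == INVALID_HANDLE_VALUE)
        return plugins;
    do
    {
        HMODULE module = LoadLibraryA((directory + "\\" + findData.cFileName).c_str());
        if (!module)
            continue;
        CreatePluginFn create =
            (CreatePluginFn)GetProcAddress(module, "createShaderPlugin");
        if (create)
            plugins.push_back(create());            // register the micro render core
        else
            FreeLibrary(module);                    // not one of our plugins
    } while (FindNextFileA(find, &findData));
    FindClose(find);
    return plugins;
}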

So, plugins are a perfect middle way between old-school inflexible pipelines and the complete abstraction of the rendering system into meta shaders. Once we have well-working meta shaders (we will probably need hardware-supported JIT compilers for that), we can just trash the plugin approach. And I'll be happy about it, because the system does in fact have several drawbacks. Just not the ones you were thinking of :)
Lol, I hate this: after about a year of development I'm close to completing my next incarnation of my shader system, and I am being/have been convinced I've taken the wrong course of action. Again. Yay! [wink]
If at first you don't succeed, redefine success.
@Zemedelec:

OK, let's say you have a terrain mesh with 4 texture layers and a density map for grass.

In the shader description you could add the density description:

shader someterrainshader
{
    grassdensity "xyz.raw"   // just a perlin noise with several octaves and an
                             // exponential filter applied

    cvar "gfx_drawgrass 1";  // at load time a cvar test is performed to see if
                             // grass rendering is enabled at all
}
Now, as I mentioned in one of my earlier posts here, the engine has to offer an API so the shader DLLs can interact with the object hierarchy and maybe add new objects on demand.


In a preprocess the shader would be called with the terrain batch: it retrieves the grass density .raw file, sets up a new mesh or renderable object, and places all the information about the grass quads (or however you render your grass) into this renderable object.

I think a common way to render grass is to create a VBO plus some density factor for scaling the quads, so if the density is too low it simply renders a zero-sized quad, which is skipped by the OpenGL or D3D implementation.

So the renderable object would look like this:

renderable grass
{
    static VBO id;   // a single set of quad representations for all grass batches
    density map
}

And this renderable object is put into the hierarchy (scene graph/octree).


That way you keep it open for the modder to define the appearance of the grass, the quad density per grass batch, and so on.

As you see, you describe the appearance of a pure terrain batch.

And that's the advantage of this system: there is no need to store this information anywhere in a map file; the preprocess can quickly be performed at load time.

The only thing that takes some time is the implementation of an API that allows your DLLs to interact with the engine core to set up or manipulate existing objects.
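Such an API could stay fairly small - roughly something along these lines (just a sketch, all names invented):

#include <cstddef>

// Engine-side services handed to a shader DLL, so the DLL never needs to know
// the engine internals (hypothetical interface).
struct Renderable;    // engine-side renderable object (VBO id, density map, ...)
struct SceneNode;

class IEngineServices
{
public:
    virtual ~IEngineServices() {}

    // Resource access, e.g. the grassdensity "xyz.raw" file from the shader script.
    virtual const void* loadRawFile(const char* name, std::size_t* sizeOut) = 0;

    // Console variable test, e.g. "gfx_drawgrass".
    virtual int getCVarInt(const char* name) = 0;

    // Create a renderable and insert it into the scenegraph/octree, so the
    // grass batches are culled and rendered like any other object.
    virtual Renderable* createRenderable() = 0;
    virtual void attachToSceneGraph(SceneNode* parent, Renderable* renderable) = 0;
};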


There's a streaming concept with delayed evaluation; I know this from Scheme:
- what it does is evaluate an expression at runtime to supply the caller with the desired data/information.

That's comparable to this shader plugin for grass: you only create the information if the caller (the renderer) needs it
(see the cvar included in the example above).


http://www.8ung.at/basiror/theironcross.html
Quote:Original post by Yann L
So, plugins are a perfect middle way between old-school inflexible pipelines and the complete abstraction of the rendering system into meta shaders. Once we have well-working meta shaders (we will probably need hardware-supported JIT compilers for that), we can just trash the plugin approach. And I'll be happy about it, because the system does in fact have several drawbacks. Just not the ones you were thinking of :)


What will your meta shaders look like?
I have read the "Material/Shader Implementation" thread, but I still can't understand how the shaders can control how many passes are needed, what to do in each pass, how to blend the passes together, and how to render a depth map for shadows or render to the stencil buffer for stencil-buffer shadows.

I modified the implementation a bit, but I can't figure out how to create several passes and how to implement shadow maps.

Besides that, I really like the idea; I will like it more when my implementation of it works better ;)
Quote:Original post by Basiror
shader someterrainshader
{
    grassdensity "xyz.raw"   // just a perlin noise with several octaves and an
                             // exponential filter applied

    cvar "gfx_drawgrass 1";  // at load time a cvar test is performed to see if
                             // grass rendering is enabled at all
}

First of all - why is *some* grass density placed in the shader to start with?
This is per-scene data.
I'm not trying to be anal, but you keep specifying local parameters in the shader definitions all the time... :)

Quote:Original post by Basiror
Now, as I mentioned in one of my earlier posts here, the engine has to offer an API so the shader DLLs can interact with the object hierarchy and maybe add new objects on demand.

Yes, I understand that without that, "shaders" can't do much to change the world.
But, AFAIR, they have just an Init/Shutdown interface and a rendering interface.
Grass needs some more, you know:
- a way to spawn nodes and create them at runtime while we move around.
- a way to destroy old nodes that are far away.
- a way to expose the grass for some interaction - adding new grass during gameplay, altering the grass with physics, etc.
So it must expose quite a fat interface for interaction - something like the sketch below. Grass typically isn't something static and prebuilt (it can be, obviously, but then it's just *some* grass); it has complex management that also depends on the viewer position. Which is far, far away from a "shader", and maybe a bit far from "visual appearance" too, since, you know, walking over grass can emit specific sounds, for example... :)
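Roughly the amount of interface I would expect a grass feature to need, beyond Init/Shutdown/Render (a sketch only, every name invented):

// Callbacks a grass "shader" would need for streaming, LOD and gameplay
// interaction (hypothetical interface).
class IGrassFeature
{
public:
    virtual ~IGrassFeature() {}

    // Streaming around the viewer: create nearby patches, retire distant ones.
    virtual void createPatchesAround(const float viewerPos[3]) = 0;
    virtual void destroyPatchesBeyond(float distance) = 0;

    // Gameplay/physics interaction: bend blades, add or remove grass at runtime.
    virtual void applyForce(const float position[3], float radius, float strength) = 0;
    virtual void addGrass(const float position[3], float density) = 0;

    // Hooks driven by the scene graph: quality level, per-frame update.
    virtual void setQualityLevel(int level) = 0;
    virtual void update(float deltaSeconds, const float viewerPos[3]) = 0;
};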


Quote:Original post by Basiror
And this renderable object is put into the hierarchy (scene graph/octree).

And that's the advantage of this system: there is no need to store this information anywhere in a map file;
the preprocess can quickly be performed at load time.

The more you preprocess at load time, the further away you get from consoles.
I said it before, but I will say it again - if your system design forces you to do at load time things that could have been preprocessed offline, then you must have a good reason to do so, a very good reason.
Doom 3, for example, does that to be more artist/level-designer friendly, and its load times are ridiculous at times, but bearable (for the PC only).
They have their reason.
Do you?

Quote:Original post by Basiror
There's a streaming concept with delayed evaluation; I know this from Scheme:
- what it does is evaluate an expression at runtime to supply the caller with the desired data/information.
That's comparable to this shader plugin for grass: you only create the information if the caller (the renderer) needs it
(see the cvar included in the example above).


The thing I most fail to understand is this: a grass/shader/tiny-little-locally-plugged thingy can't know better than the whole scene graph when to render, when to preprocess, when to switch geometry LOD or shader LOD, how to control its quality, etc.
So the SG must post messages to the grass plugin for that plugin to react correctly and tune itself to the needs of the renderer.
That's quite an interface, I'll say. Show me an example of it - I can't find even a bit of it in the original and subsequent threads...
Because I'm very interested.
Quote:Original post by Yann L


I have read the original thread again; there were things I had forgotten. I have read my questions from back then too - some of them are still unanswered...

I totally agree that everything is possible when the "shaders", implemented as plugins, can plug into many points of the art pipeline: to the point of preprocessing geometry, even many shaders sharing the same geometry pieces on disk (per component, if their requirements intersect), changing the rendering process and pipeline completely - to the point of interacting with the HSR process - adding new geometry to the scene (and to the physics), and adding new properties/GUI/code to the editor. Then everything is possible.
I can't evaluate the complexity of such a system - I have never seen or heard of one (even in action), nor have I more than barely thought about designing one.
It is just way above my current design level... :(

But such a plugin system is far from "shader DLL" terminology, imo. This is a clarification for all the people who dive into this approach without knowing that they are making mini engine plugins, not shaders... ;)

And, by the way (I have been away from this forum for a long time lately) - has anybody successfully implemented such a system as you describe, and used it with success? I just don't know of any, and I am very interested to hear that it has been built by somebody else and works OK...
Where are all the fans who participated in the famous "Material/Shader..." thread? Only python_regious is here.
I'd like to hear what they have done; it would be interesting validation of this design.


Now, my concerns, in summary:

Each system is born around some idea, and that idea both limits the resulting system and gives it wings at the same time.
The idea of pluggable render features is to describe pieces of space abstractly, then resolve that description into a set of registered interfaces that can create/render the thing.
At a primitive level, we can construct some shading techniques and later combine them for more complex effects (let's forget about grass/light shafts for a little while).

So, my concerns:

1. We have fixed shaders/shading procedures -> abstract effect decomposition means more shaders, down to the very low level of having more ps/vs programs to execute the effect.
Diffuse + Specular + Bump, for example - if they are distinct effects, combining them leads to more than one pass.
Of course we could fold that into a single shader - well... that's more work, creating all those combinations. And that's something almost every current engine tries to avoid (and succeeds in avoiding).
The example is quite plain, but the idea is this: building shaders from text and then compiling the result, depending on what we want from the shader, will give a better ps/vs decomposition than composing from already compiled shaders (see the sketch after this list).

2. How do shaders adapt to changing conditions - lights, fog, scene lighting (day <-> night can change the lighting model a bit), changing visual modes (night vision, etc.)?
I read the explanation about RPASS_PRELIGHT. But how does that pass execute the actual lighting, and who picks up the right set of shaders - the same decomposition logic? The lighting conditions can vary a lot - from a single shadow-mapped directional light + 3-4 point lights, to many point + spot lights and some diffuse CBMs at night.
Again - I see either many passes or a lot of work to implement that in a pluggable system.
A text-composing system will handle that quite nicely and easily.

3. Next, how is a single piece of geometry (single piece visually, not in memory) rendered with multiple shaders - be it due to lighting conditions or user-specified parameters like an object being on fire, damaged, transparent, etc. - i.e. gameplay changes requiring a shader change?
How is the geometry shared between shaders?
Because at heart this system relies on every single pluggable rendering feature (aka "shader") to prebuild its geometry for itself.
But the geometry needs to be rendered by many shaders - do they share it or not? Streams?

4. You said that adding new functionality, like grass, light shafts, water, is possible, right? How do your plugins interact with the physics and gameplay side of the application - what interface is there to allow that?
Objects falling into the water can produce waves around them - is that possible with this approach?
And how are objects managed (fading when far away, switching LOD, being recreated when they come near the viewer, etc.)?
I saw only caching based on visibility, and possibly a shader can precache the LOD it needs in its cache-filling procedure. But a game engine needs more, way more.
I suppose there are functions that monitor each piece of "shader" and let it adapt to the scene...? Like invalidating particles every N milliseconds, for example, and killing particles that have not been in view for M milliseconds...

5. How can artists preview the results of their work? They create some geometry, assign some properties and want to see the result - how is that done?
How fast will it be, and how accurately can they tune it?
Because if we're talking about competitiveness, here it is.

6. How precisely can we control the quality of the shading system on different hardware? For example, switching off gloss/reflectivity on the terrain, but keeping it on the tanks on an 8500?
The logic here: we tune shaders based on our knowledge of how the landscape and the tanks should look (the tanks are quite important in this example). I mean, this is a decision based on our game world - and we don't want the shader resolution system to dictate the fallbacks, we want our artists to.
How (easily) can this be done in your system - maybe some practical approach exists?

7. The effect description is given externally, and once per object - how can we tune our shader to accommodate *very* different shading scenarios - sunlight outside, and projected sunbeams inside buildings, for example? Only the SG knows where to apply each - how will this knowledge result in proper shading?
This is more of a pipeline example, because it will involve inside/outside volumes, projection textures and some logic in the renderer, maybe custom for a given game.

8. How are RTs shared between shaders? Because the shaders (again, because of the Idea :)) have a very egocentric view of resources (unless this is solved by a system above them) - I'm interested in who allocates the global shadow map for PSM shadow mapping, for example, and who computes the matrix for it.
It will probably later be used by many, many shaders at will, right?
The shadow-map RPASS_ is called, but which of all these shaders will compute the matrix? Or is this the Tools' responsibility? If so, how can we introduce a new shadow-map technique, or even more than one, with conditional use of the best of them (based on the view), for example?

9. Shadow volumes - who builds them, and how? Can they be preprocessed and stored with the data...? Same concern as above, with the exception that shadow volumes need a lot more data to be preprocessed and stored (for static lighting that uses them), and can involve some heavy processing on whole pieces of the level - how is this connected with shaders/rendering features?
How will your system adapt to a scheme where we don't render a CBM for every object, but instead use the nearest precomputed CBM, with many of them spread through the level (HL2-style)?

10. Can these plugins also be plugged into the editor and the art pipeline, to process/create streaming content? If they can, what is the basic idea of the interface, and how do they share that data (or communicate with each other to form the final data layout of the level)?

11. You said, in the original thread: "By not touching the core, you also avoid breaking other parts of your code as you add new features. You don't need knowledge of the engine internals either; everything runs over standardized interfaces."
So - can you or can't you change the rendering pipeline as radically as introducing predicated rendering? And what are these standardized interfaces that allow you to do so?

12. Shaders are dropped from use by the system if they can't be rendered on the current hardware. But some shaders have multiple passes, like the shader used in the reflective-refractive water example. A valid fallback could be to drop the refraction, for example. How do we do that with a system where the whole shader will be dropped together with its passes (because it is registered in the system as a single piece)?
It is more work to provide shaders that can fall back on just one or two aspects of the top-level shader.
I mean, a text-based shader system can solve that quite naturally.
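To illustrate point 1 above - roughly what I mean by composing shader text and compiling the result (a sketch; the fragments and helper names are invented):

#include <string>

// The features are concatenated into one pixel shader source and compiled once,
// so Diffuse + Specular + Bump stays a single pass instead of being combined
// from separately compiled shaders.
std::string composePixelShader(bool diffuse, bool specular, bool bump)
{
    std::string src = "float4 main(PixelInput input) : COLOR {\n"
                      "    float3 n = normalize(input.normal);\n";
    if (bump)
        src += "    n = perturbNormal(n, input.tangent, tex2D(normalMap, input.uv));\n";
    src += "    float3 color = 0;\n";
    if (diffuse)
        src += "    color += diffuseTerm(n, input.lightDir) * tex2D(diffuseMap, input.uv).rgb;\n";
    if (specular)
        src += "    color += specularTerm(n, input.lightDir, input.viewDir) * specularLevel;\n";
    src += "    return float4(color, 1.0);\n}\n";
    return src;    // compiled once, e.g. with D3DXCompileShader or the GLSL compiler
}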

So, I just want to clarify how that system works, because it seems to be quite a bit more complex and versatile than what was shown in the original Material/Shader thread, if it can override the rendering pipeline like you said.
Thanks if you even read this to the end :)

