

shader system implementation



#21 Yann L   Moderators   -  Reputation: 1794


Posted 17 September 2005 - 05:47 AM

Quote:
Original post by Zemedelec
I was talking about the shader-into-DLL approach,

Me too. The plugin shader concept is fully able to do what you described.

Quote:
Original post by Zemedelec
As for DLL updating as is, a rant:
Software that does that and has a plugin structure has serious requirements to do so - it is often developed by large teams that create/update parts independently.

And games aren't?

Quote:

And as for other concerns, like stability: none of that applies to the typical game scenario; games, imho, are not such big software.

Excuse me, but this made me laugh! Look, the software industry, and the game industry even more so, is a very competitive market. Budgets are becoming larger every year, and the code complexity of games increases. But the timescale to finish a game gets shorter and shorter. The market forces this.

In order to survive, you need to find innovative solutions to increase both quality and productivity. Of course, you could code a game the "old school way", with hardcoded and manually optimized render paths for each effect and each target hardware. It would probably even take less time than creating a flexible plugin-based approach, if starting the latter from scratch. So you sell your game, and everything is nice and fine - until a year or two later, when you need to release the next game. Unfortunately, hardware has changed a lot, so you need to completely rewrite your hardcoded engine: new effects, new hardware features, shifted bottlenecks, new shader structures. If, however, you invested the time into the plugin system, updating your engine to the newest standards is a breeze, without even sacrificing backwards compatibility for older hardware.

Reusability is the key word. Today's and tomorrow's development must target reusable frameworks that are easily extendable and scalable over time, maybe even by completely different development teams. Think third-party licensing, for example. With your hardcoded solutions, you won't go anywhere in the future, especially not from a financial point of view. Until you have adjusted your hardcoded shader paths to the new hardware requirements, your competitor has updated a few DLLs (or static libs) and is already selling his brand new eye-candy laden game.

Quote:

If we target system complexity, that could be a good point to start... :)

Don't underestimate the complexity of a modern game.

Quote:

My personal decision would be not to trade the design/implementation time for a pluggable system that can update everything, for the sake of uploading 1.5M more. I would go for the stable old (KISS) solution... but it's just me, yes.

It's not about file size, it's about competitiveness and scalability.

Quote:

I can't see how this is more predictable and easier to develop for a typical not-shader-intensive game.

In what time are you living? Typical not-shader-intensive games? All new 3D engines targeted at DX9/10 cards, Xbox 360, PS3, etc. almost drown in shaders! The time of not-shader-intensive games is long over.

Quote:

For me, it's like having quite a lot of render paths, and being unable to guarantee the visual quality in one of them. Maybe it is suitable for software, but for games - I can't agree.

I think I have the better argument here: I have actually had such a system running on a commercial basis for a couple of years now :) And it works very well. No, it might not be for a game - but our software has very similar requirements to a very high-end game on the graphical side. In fact, you could probably turn the application into a game quite easily, if you have the artwork, change the interface and add AI.

The current system might not be what we would like to see in the medium-term future (as I mentioned above, we're looking more into meta shaders), but we will certainly not go back to the stone age of hardcoded render-path graphics development.

My suggestion: just try it out before bashing it. You might be surprised at the extreme flexibility it can offer you.


#22 Zemedelec   Members   -  Reputation: 229


Posted 17 September 2005 - 10:11 AM

Quote:
Original post by Yann L
Me too. The plugin shader concept is fully able to do what you described.


So, DLL-based shaders are able to interact to the point of adding new features to the world, like grass, light shafts, etc.?
Or did I miss something?

Quote:
Original post by Yann L
And games aren't ?


Most games - aren't. *Some* of the licensable *engines* are. But their business model simply requires it, hands down.

Quote:
Original post by Yann L
In order to survive, you need to find innovative solutions to increase both quality and productivity. Of course, you could code a game the "old school way", with hardcoded and manually optimized render paths for each effect and each target hardware. It would probably even take less time than creating a flexible plugin-based approach, if starting the latter from scratch. So you sell your game, and everything is nice and fine - until a year or two later, when you need to release the next game. Unfortunately, hardware has changed a lot, so you need to completely rewrite your hardcoded engine: new effects, new hardware features, shifted bottlenecks, new shader structures. If, however, you invested the time into the plugin system, updating your engine to the newest standards is a breeze, without even sacrificing backwards compatibility for older hardware.


Emotions aside - where did I say that "my" approach hardcodes anything anywhere...?
I suggested (above on this page) that it is enough for a shader system to be data-driven, without becoming plugin-based and code-driven. It can adapt to different hardware and different scene requirements quite nicely. It can declare & use unique resources like its own textures quite well.
Currently my implementation cannot declare new complex vertex declarations, where things are packed crazily, but I can't see how your system will do that either - creating very special geometry, like grass/clouds/particles, where vertices have a very special format beyond (texcoordN, colorY, ...).
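
To give a rough idea of what I mean by data-driven (just an illustrative sketch with made-up names, not our actual format), the descriptions parse into something like this:

// Hypothetical parsed form of a text-based shader definition. A loader fills
// these structures from a description file; adding a new surface type means
// new data, not new engine code.
#include <map>
#include <string>
#include <vector>

struct ShaderTechnique {
    int minShaderModel;                               // e.g. 11 for ps 1.1, 20 for ps 2.0
    std::string vertexProgram;                        // VS source file
    std::string pixelProgram;                         // PS source file
    std::map<std::string, std::string> renderStates;  // "zwrite" -> "true", ...
};

struct ShaderDefinition {
    std::string name;                                 // e.g. "bumped_specular"
    std::vector<std::string> textureSlots;            // resolved per material instance
    std::vector<ShaderTechnique> techniques;          // best match is picked for the hardware
};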

Quote:
Original post by Yann L
Reusability is the key word. Today's and tomorrow's development must target reusable frameworks that are easily extendable and scalable over time, maybe even by completely different development teams. Think third-party licensing, for example. With your hardcoded solutions, you won't go anywhere in the future, especially not from a financial point of view. Until you have adjusted your hardcoded shader paths to the new hardware requirements, your competitor has updated a few DLLs (or static libs) and is already selling his brand new eye-candy laden game.


Again, I fail to see where I presented my solution as hardcoded... :)
And, by the way - have you seen some of the licensable engines that have leaked somehow? Those of them that are based on games have quite a lot of source that is not plugin-based, nor very clean.

Quote:
Original post by Yann L
Don't underestimate the complexity of a modern game.


I don't. The complexity of games is high, but it is distributed very widely across many, very different components.
As for code size, it rarely comes even close to the complexity of modeling packages, for example.


Quote:
Original post by Yann L
In what time are you living ? Typical not shader intensive games ? All new 3D engines targeted at Dx9/10 cards, XBox 360, PS3, etc, almost drown in shaders ! The time of not-shader intensive games is long over.


I'm talking about something like 30-40 shaders, covering new and old hardware (without the new consoles, I have yet to see them).
Such a quantity of shaders is quite enough for a very large number of games, with a reasonably fixed lighting scheme. And I know games that used fewer and still look amazing - take World of Warcraft, for example.

Quote:
Original post by Yann L
I think I have the better argument here: I actually have such a system running on a commercial base for a couple of years now :) And it works very well. No, it might not be for a game - but our software has very similar requirements compared to a very high end game from the graphical side. In fact, you could probably turn the application into a game quite easily, if you have the artwork, change the interface and add AI.


I never said I don't believe in the creation/existence of such a system, nor that it is flawed in some way. I personally tried to design and implement a similar system, maybe 2 years ago. Now I'm not quite sure that it's worth the effort and the iterations to fine-tune it.

Quote:
Original post by Yann L
My suggestion: just try it out before bashing it. You might be surprised about what extreme flexiblity it can offer you.


I'm thinking about enhancing our system now, but in a slightly different way.
My arguments against such a system?
- People who hypothetically licensed our engine can add new shaders/effects quite easily - plugging in something new is straightforward. Touching the source is needed only when new vertex declarations must be introduced.
- I need to make it easy to develop, and thus competitive, by building a strong and efficient art pipeline. Making artists happy, letting them touch and modify shaders at will - and making it as intuitive as possible. An artist can work wonders even with blending states. A programmer can rarely make something beautiful, even with SM3.

So the main research area was opening the shaders up to the art pipeline, and offloading the pixels to the people they belong to - the artists... :)
The UE3 shader editor is a good example of the new trend.

P.S.: One thing I forgot to comment on - you said "shifted bottlenecks, new features".
HW generations don't change that quickly, and adapting to them is best done in the core engine itself - I can't imagine how a tiny little thing like a shader (the shading scheme of a surface, in the end) can adapt itself to something like predicated rendering, for example. It would require (a) redesigning some subsystems of the rendering engine, or (b) designing the system already knowing what the future will bring. I.e. new generations always tend to cause rewriting/redesigning, as reality shows with many, many examples.

#23 Zemedelec   Members   -  Reputation: 229


Posted 17 September 2005 - 10:21 AM

Quote:
Original post by Basiror
now let's say you simply render your scene with pure vertex arrays and basic OpenGL lighting, so the shader above fulfils your needs
...
however the author wants to use dot3 bump mapping, so he has to tell the renderer what he needs


Try one more example - grass that is represented in-game as a 2D array containing density/type, where you (possibly) need to pack that into small, packed vertices that hold things like the vertex offset from the center of the grass quads, or sin/cos values packed into the .w components of positions for particles.
How will these be described?

Quote:
Original post by Basiror
the TBN needs to be calculated ....


The TBN isn't something you'll want to calculate at load time. It causes vertex splits, making the mesh (slightly) less optimal - i.e. after TBN computation one would want to re-optimize the mesh, which makes TBN computation a preprocessing step.

Even more important - developing for consoles, and building streaming engines, means load-in-place resources: loaded directly into memory, without any precious time spent on *any* preprocessing.

So if a design forces you to recompute at runtime such obvious preprocessing things as the TBN - it is not such a good design.
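
To make the load-in-place point concrete (a rough sketch with made-up names, not our actual loader): the file already contains the vertices in their final layout, tangent basis included, so loading is little more than a read.

// Sketch of load-in-place: the exporter has already built the exact vertex
// format the GPU expects, so no per-vertex work happens at load time.
#include <cstdint>
#include <cstdio>
#include <vector>

#pragma pack(push, 1)
struct PackedVertex {
    float    pos[3];
    float    uv[2];
    uint32_t packedNormal;    // 10:10:10:2 normal
    uint32_t packedTangent;   // 10:10:10:2 tangent, bitangent sign in .w
};
#pragma pack(pop)

std::vector<PackedVertex> loadMeshBlob(const char* path)
{
    std::vector<PackedVertex> verts;
    std::FILE* f = std::fopen(path, "rb");
    if (!f)
        return verts;
    uint32_t count = 0;
    std::fread(&count, sizeof(count), 1, f);
    verts.resize(count);
    std::fread(verts.data(), sizeof(PackedVertex), count, f);  // straight copy
    std::fclose(f);
    return verts;   // ready to upload to a vertex buffer as-is
}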

P.S.: Also, it is good to separate the shader from direct descriptions of its instances - so texture names, which will be unique for each client of that shader, must reside somewhere else...

P.S.2: And think about wanting to render one mesh with many shaders (at one time, and maybe changing during gameplay) - what will you load, how will you process that mesh, and how many instances will you create?


#24 Yann L   Moderators   -  Reputation: 1794


Posted 17 September 2005 - 11:55 AM

Quote:
Original post by Zemedelec
So, DLL-based shaders are able to interact to the point of adding new features to the world, like grass, light-shafts, etc....?
Or I missed something?

No offense, but I would really suggest you try to understand the system we're talking about before discussing its supposed shortcomings.

To answer your question: of course the system can add these effects - that's the whole idea of a plugin system! All effects in our current engine - light shafts, grass, procedural vegetation, parametric terrain, water, fire, clouds, atmosphere, halos, billboards, fractals, and so on - are exclusively rendered through plugin shaders. I even added several raytracing modules as shaders (as a test, because they were horribly slow ;), even though the underlying rendering model is completely different.

You seem to think that the plugin architecture merely mimics a kind of .FX file in code. Well, that would be rather stupid, wouldn't it? Instead, it contains pluggable micro render cores.

As I said before: a shader is more than just a piece of GLSL or HLSL code. It's a system that describes the visual appearance of an object or effect. A shader can generate geometry and modify it. It can read from, create and destroy light sources. It can apply animation, LOD systems, or evaluate procedural and fractal geometry.
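
To give a rough idea of the shape of such a plugin (this is a heavily simplified sketch with made-up names, not our actual interface):

// Simplified sketch of a plugin shader exported from a DLL. Each plugin is a
// small render core: it can inspect a chunk of the scene, generate or modify
// geometry, and issue its own passes.
#include <cstdint>

struct GeometryChunk;    // opaque engine-side scene data
struct RenderContext;    // device, render targets, per-frame state

class IShaderPlugin {
public:
    virtual ~IShaderPlugin() {}
    virtual const char* name() const = 0;
    // Can this plugin handle the given chunk on the current hardware?
    virtual bool accepts(const GeometryChunk& chunk) const = 0;
    // Called when a chunk is bound to the shader; may build private geometry,
    // request render targets, or spawn light sources.
    virtual void prepare(GeometryChunk& chunk, RenderContext& ctx) = 0;
    // Called for each pass the plugin has registered with the pipeline.
    virtual void render(GeometryChunk& chunk, RenderContext& ctx, uint32_t pass) = 0;
};

// Factory exported by the DLL; the engine looks it up by name at load time.
extern "C" IShaderPlugin* createShaderPlugin();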

I think we're really talking about two completely different systems here.

Quote:
Original post by Zemedelec
I suggested (above on this page) that it is enough for a shader system to be data-driven, without becoming plugin-based and code-driven. It can adapt to different hardware and different scene requirements quite nicely. It can declare & use unique resources like its own textures quite well.
Currently my implementation cannot declare new complex vertex declarations, where things are packed crazily, but I can't see how your system will do that either - creating very special geometry, like grass/clouds/particles, where vertices have a very special format beyond (texcoordN, colorY, ...).

It goes far beyond the vertex format. This is just a minor detail, and of course a plugin-based approach can generate and convert between any vertex formats you can imagine. We even use it to decompress large amounts of vertex data through zlib on the fly, within a shader! Try to do that with a data-driven approach...

Maybe I don't really understand what you're doing either, so please correct me if I'm wrong, but your system sounds a lot like a Quake 3-style engine to me. Sure, that works. But is it ready for the future? Nope.

I agree that a full plugin system is a poor choice for a beginner, as the complexity to implement the framework is overwhelming. But for an advanced amateur (and of course for the professional developer), this will definitely pay off. It becomes more and more difficult for small businesses or indie game developers to keep up with technical developments in the hardware sector. A plugin based system can make this much, much easier.

Quote:

Such a quantity of shaders is quite enough for a very large number of games, with a reasonably fixed lighting scheme. And I know games that used fewer and still look amazing - take World of Warcraft, for example.

We seem to have a different definition of "amazing" ;)

Quote:

HW generations don't change that quickly, and adapting to them is best done in the core engine itself

No, it isn't. That's pretty much the worst approach there is.

Quote:

- I can't imagine how a tiny little thing like a shader (the shading scheme of a surface, in the end) can adapt itself to something like predicated rendering, for example.

As I said, please read up on the system again before making incorrect assumptions. We are not talking about a simple surface description here!

Quote:

It would require (a) redesigning some subsystems of the rendering engine,

That's exactly what the micro render cores do. Divide and conquer - you add features as they come in. Oh look, I read about this new displacement mapping shader in a research paper a few days ago. I would just write it as a plugin, compile it to a DLL, and copy it into my engine's plugin directory. And voilà, that's it. Even if that shader completely modified the standard render pipeline - because in my approach, there is no standard pipeline!

By avoiding touching the core, you also avoid breaking other parts of your code as you add new features. You don't need knowledge about the engine internals either; everything runs over standardized interfaces. So new effects (even those that would require a substantial modification of the render pipeline in your system) can be added without hassle, by several different people, or be contributed by third parties.
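
The "copy it into the plugin directory" part is nothing exotic, by the way - just dynamic loading. A minimal Win32 sketch (using the hypothetical createShaderPlugin factory from the sketch above, not our actual loader):

// Scan a directory for shader DLLs and collect the plugins they export.
#include <windows.h>
#include <string>
#include <vector>

class IShaderPlugin;
typedef IShaderPlugin* (*CreatePluginFn)();

std::vector<IShaderPlugin*> loadShaderPlugins(const std::string& dir)
{
    std::vector<IShaderPlugin*> plugins;
    WIN32_FIND_DATAA fd;
    HANDLE find = FindFirstFileA((dir + "\\*.dll").c_str(), &fd);
    if (find == INVALID_HANDLE_VALUE)
        return plugins;
    do {
        HMODULE dll = LoadLibraryA((dir + "\\" + fd.cFileName).c_str());
        if (!dll)
            continue;
        CreatePluginFn create =
            (CreatePluginFn)GetProcAddress(dll, "createShaderPlugin");
        if (create)
            plugins.push_back(create());   // register the micro render core
        else
            FreeLibrary(dll);              // not one of ours
    } while (FindNextFileA(find, &fd));
    FindClose(find);
    return plugins;
}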

So, plugins are a perfect middle way between old-school inflexible pipelines and the complete abstraction of the rendering system into meta shaders. Once we have well-working meta shaders (we will probably need hardware-supported JIT compilers for that), we can just trash the plugin approach. And I'll be happy about it, because the system does in fact have several drawbacks. Just not the ones you were thinking of :)


#25 python_regious   Members   -  Reputation: 929


Posted 17 September 2005 - 03:37 PM

Lol, I hate this: after about a year of developing I'm close to completing the next incarnation of my shader system, and I am being/have been convinced I've taken the wrong course of action. Again. Yay! [wink].
If at first you don't succeed, redefine success.

#26 Basiror   Members   -  Reputation: 241


Posted 17 September 2005 - 09:09 PM

@Zemedelec:

OK, let's say you have a terrain mesh with 4 texture layers and a density map for grass.

In the shader description you could add the density description:

shader someterrainshader
{
    grassdensity "xyz.raw"   // just a perlin noise with several octaves and an
                             // exponential filter applied

    cvar "gfx_drawgrass 1";  // at load time a cvar test is performed to see if
                             // grass rendering is enabled at all
}
Now, as I mentioned in one of my earlier posts here, the engine has to offer an API so the shader DLLs can interact with the object hierarchy and maybe add new objects on demand.


In a preprocess, the shader would be called with the terrain batch. It retrieves the grass density .raw file, sets up a new mesh or renderable object, and places all the information about the grass quads (or however you render your grass) into this renderable object.

I think a common way to render grass is to create a VBO plus some density factor for scaling the quads, so if the density is too low it simply renders a zero-area quad, which is skipped by the OpenGL or D3D implementation.

So the renderable object would look like this:

renderable grass
{
    static VBO id;   // a single set of quad representations for all grass batches
    density map
}

And this renderable object is put into the hierarchy (scene graph/octree).


That way you keep it open for the modder to define the appearance of the grass, the quad density per grass batch, and so on.

As you see, you describe the appearance of a pure terrain batch.


And that's the advantage of this system: there is no need to store this information anywhere in a map file; the preprocessing can quickly be performed at load time.

The only thing that takes some time is the implementation of an API that allows your DLLs to interact with the engine core to set up or manipulate existing objects.
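
A rough sketch of the shape of that API (made-up names, just to show the idea): the engine hands every shader DLL a services interface, and the grass shader uses it to read the density map, build its renderable and insert it into the scene.

// Hypothetical engine services passed to each shader DLL at load time.
#include <cstddef>
#include <string>

struct Renderable;     // engine-side renderable object
struct SceneNode;

class IEngineServices {
public:
    virtual ~IEngineServices() {}
    // console variables, e.g. "gfx_drawgrass"
    virtual int getCVarInt(const std::string& name) const = 0;
    // raw resource access, e.g. for the density .raw file
    virtual const void* loadRawFile(const std::string& name, std::size_t& size) = 0;
    // scene manipulation: add/remove objects the plugin created
    virtual SceneNode* addRenderable(Renderable* object) = 0;
    virtual void removeNode(SceneNode* node) = 0;
};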


There's a streaming concept with delayed evaluation; I know this from Scheme:
- what it does is evaluate an expression at runtime to supply the caller with the desired data/information.

That's comparable with this shader plugin for grass: you only create the information if the caller (the renderer) needs it
(see the cvar included in the example above).
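
In C++ terms the delayed evaluation is simply building the expensive data the first time the renderer asks for it (a tiny sketch, assuming a grass renderable like the one above):

// Lazy creation of the grass batch: nothing is built until the renderer needs
// it, and only if the cvar allows grass at all.
struct GrassBatch {
    bool  built   = false;
    void* vboData = nullptr;                 // placeholder for the real buffer

    void ensureBuilt(bool grassEnabled) {
        if (built || !grassEnabled)
            return;
        // ... expand the density map into quads and fill the VBO here ...
        built = true;
    }
};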




#27 LarsMiddendorf   Members   -  Reputation: 122


Posted 18 September 2005 - 04:08 AM

Quote:
Original post by Yann L
So, plugins are a perfect middle way between old-school inflexible pipelines and the complete abstraction of the rendering system into meta shaders. Once we have well-working meta shaders (we will probably need hardware-supported JIT compilers for that), we can just trash the plugin approach. And I'll be happy about it, because the system does in fact have several drawbacks. Just not the ones you were thinking of :)


What will your meta shaders look like?

#28 McZ   Members   -  Reputation: 139


Posted 18 September 2005 - 09:25 PM

I have read the "Material/Shader Implementation" thread, but I still can't understand how the shaders can control how many passes are needed, what to do in each pass and how to blend the passes together, or how to render a depth map for shadows, or render to the stencil buffer for stencil-buffer shadows.

I modified the implementation a bit, but I can't figure out how to create several passes and how to implement shadow maps.

Besides that, I really like the idea; I will like it more when my implementation of it works better ;)

#29 Zemedelec   Members   -  Reputation: 229


Posted 18 September 2005 - 09:54 PM

Quote:
Original post by Basiror
shader someterrainshader
{
    grassdensity "xyz.raw"   // just a perlin noise with several octaves and an
                             // exponential filter applied

    cvar "gfx_drawgrass 1";  // at load time a cvar test is performed to see if
                             // grass rendering is enabled at all
}

First of all - why is *some* grass density placed in the shader to start with?
This is per-scene data.
I'm not trying to be anal, but you keep specifying local parameters in the shader definitions all the time... :)

Quote:
Original post by Basiror
Now, as I mentioned in one of my earlier posts here, the engine has to offer an API so the shader DLLs can interact with the object hierarchy and maybe add new objects on demand.

Yes, I understand that without that, "shaders" can't do much to change the world.
But, AFAIR, they have just an Init/Shutdown interface and a rendering interface.
Grass needs some more, you know:
- a way to spawn nodes and create them at runtime, while we move around.
- a way to destroy old nodes that are far away.
- a way to expose the grass to some interaction - adding new grass during gameplay, altering the grass via physics, etc.
So it must expose quite a fat interface for interaction (a rough sketch of what I mean follows below) - grass typically isn't something static and prebuilt (it can be, obviously, but that is just *some* grass); it has complex management that also depends on the viewer position. Which is far, far away from "shader", and maybe a bit far from "visual appearance" too, since, you know, walking over grass can emit specific sounds, for example... :)
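
A rough sketch of the kind of interface I mean (purely illustrative names, on top of plain init/render):

// Illustrative only: the callbacks a dynamic grass feature would need beyond
// a plain init/shutdown/render shader interface.
class IGrassFeature {
public:
    virtual ~IGrassFeature() {}
    virtual void init()     = 0;
    virtual void shutdown() = 0;
    virtual void render()   = 0;
    // streaming around the viewer
    virtual void onViewerMoved(const float position[3])           = 0;
    virtual void spawnPatch(int cellX, int cellY)                  = 0;
    virtual void destroyPatch(int cellX, int cellY)                = 0;
    // gameplay/physics interaction
    virtual void addGrassAt(const float position[3], int type)     = 0;
    virtual void bendGrass(const float position[3], float radius)  = 0;
};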


Quote:
Original post by Basiror
And this renderable object is put into the hierarchy (scene graph/octree).

And that's the advantage of this system: there is no need to store this information anywhere in a map file; the preprocessing can quickly be performed at load time.

The more you preprocess at load time, the further away you get from consoles.
I said it before, but I will say it again - if your system design forces you to do at load time things that could be preprocessed offline, then you must have a good reason to do so, a very good reason.
Doom 3, for example, does that to be more artist/level-designer friendly, and its load times are ridiculous at times, but bearable (for PC only).
They have their reason.
Do you?

Quote:
Original post by Basiror
There's a streaming concept with delayed evaluation; I know this from Scheme:
- what it does is evaluate an expression at runtime to supply the caller with the desired data/information.
That's comparable with this shader plugin for grass: you only create the information if the caller (the renderer) needs it
(see the cvar included in the example above).


My most significant misunderstanding is this: a grass/shader/tiny-little-locally-plugged thingy can't know better than the whole scene graph when to render, when to preprocess, when to switch geometry LOD or shader LOD, how to control its quality, etc.
So the SG must post messages to the grass plugin for that plugin to react correctly and tune itself to the needs of the renderer.
That's quite an interface, I'll say. Show me an example of it - I can't find even a bit of it in the original and subsequent threads...
Because I'm very interested.

#30 Zemedelec   Members   -  Reputation: 229


Posted 18 September 2005 - 11:22 PM



I have read the original thread again; there were things I had forgotten. I have read my questions from back then too - some of them are still unanswered...

I totally agree that everything is possible when the "shaders", implemented as plugins, can plug into many points of the art pipeline: to the point of preprocessing geometry, even many shaders sharing the same geometry pieces on disk (per component, if their requirements intersect), changing the rendering process and pipeline completely - to the point of interacting with the HSR process, adding new geometry to the scene (and to physics), adding new properties/GUI/code to the editor. Then everything is possible.
I can't evaluate the complexity of such a system - I've never seen or heard of one (even in action), nor have I ever seriously tried to design one.
It is just way above my current design level... :(

But such a plugin system is far from "shader DLL" terminology, imo. This is a clarification for all the people who dive into this approach without knowing that they are making mini engine plugins, not shaders... ;)

And by the way (I have been away from this forum for a long time lately) - has anybody successfully implemented such a system as you described, and used it with success? I just don't know of any, and I am very interested to hear that it was built by somebody else and works OK...
Where are all the fans who participated in the famous "Material/Shader..." thread? Only python_regious is here.
I'd like to hear what they have done; it would be interesting validation of this design.


Now, my concerns, in summary:

Each system is born around some idea, and that idea limits the resulting system and gives it wings at the same time.
The idea of pluggable render features is to abstractly describe pieces of space, then resolve that description into a set of registered interfaces that can create/render the thing.
On a primitive level, we can construct some shading techniques and later combine them for more complex effects (let's forget about grass/light shafts for a little while).

So, my concerns:

1. We have fixed shaders/shading procedures -> abstract effect decomposition means more shaders, down to the very low level of having more PS/VS programs for the execution of the effect.
Diffuse + specular + bump, for example - if they are distinct effects, combining them leads you to more than one pass.
Of course we could fold that into a single shader - well... that's more work, creating all those combinations. And that's something almost every current engine tries to avoid (and succeeds).
The example is quite plain, but the idea is: building shaders from text, then compiling the result depending on what we want from the shader, will give a better PS/VS decomposition, for example, than a decomposition built from already-compiled shaders (see the sketch after this list of concerns for what I mean by text composition).

2. How do shaders adapt to changing conditions - lights, fog, scene lighting (day <-> night can change the lighting model a bit), changing visual modes (night vision, etc.)?
I read the explanation about RPASS_PRELIGHT. But how does that pass execute the actual lighting, and who picks up the right set of shaders - the same decomposition logic? The lighting conditions can vary a lot - from a single shadow-mapped directional light + 3-4 point lights, to many point + spot lights and some diffuse CBMs at night.
Again - I see either many passes or a lot of work to implement that in a pluggable system.
A text-composing system will handle that quite nicely and easily.

3. Next, how is a single piece of geometry (a single piece visually, not in memory) rendered with multiple shaders - be it due to lighting conditions or user-specified parameters like an object being on fire, damaged, transparent, etc. - i.e. gameplay changes requiring a shader change?
How is the geometry shared between shaders?
Because at heart this system relies on every single pluggable rendering feature (aka "shader") prebuilding its geometry for itself.
But the geometry needs to be rendered by many shaders - do they share it, or don't they?
Streams?

4. You said that adding new functionality, like grass, light shafts, water, is possible, right? How do your plugins interact with the physics and gameplay side of the application, and what interface is there to allow that?
Objects falling into the water can produce waves around them - is that possible with this approach?
And how are objects managed (fading when far away, switching LOD, being recreated when they come near the viewer, etc.)?
I saw only caching based on visibility, and possibly a shader can precache the needed LOD in its cache-filling procedure. But a game engine needs more, way more.
I suppose there are functions that monitor each "shader" and let it adapt to the scene...? Like invalidating particles every N milliseconds, for example, and killing particles that have not been in view for M milliseconds...

5. How can artists preview the results of their work? They create geometry, assign some properties and want to see the result - how is that done?
How fast will it be, and how accurately can they tune it?
Because if we are talking about competitiveness, here it is.

6. How precisely can we control the quality of the shading system on different hardware - for example, switching off gloss/reflectivity on the terrain, but keeping it on the tanks, on an 8500?
Here is the logic: we tune shaders based on our knowledge of how the landscape and the tanks should look (the tanks are quite important in this example). I mean, this is a decision based on our game world - and we don't want the shader-resolving system to dictate the fallbacks, but our artists.
How (easily) can this be done in your system - maybe some practical approach exists?

7. The effect description is given externally, and once per object - how can we tune our shader to accommodate *very* different shading scenarios - sunlight outside and projected sunbeams inside buildings, for example? Only the SG knows where to apply each - how will this knowledge result in proper shading? This is rather a pipeline example, because it will involve inside/outside volumes, projection textures and some logic in the renderer, maybe custom for a given game.

8. How are RTs shared between shaders? Because the shaders (again, because of the Idea :)) have a very egocentric view of the resources (unless this is solved by a system above them) - I'm interested in who allocates the global shadow map for PSM shadow mapping, for example, and who computes the matrix for it.
It will probably later be used by many, many shaders at will, right?
The shadow map RPASS_ is called, but which of all these shaders will compute the matrix? Or is this the tools' responsibility? If so, how can we introduce a new shadow map technique, or even more than one, with conditional use of the best of them (based on the view), for example?

9. Shadow volumes - who builds them, and how? Can they be preprocessed and stored with the data...? The same concern as above, with the exception that shadow volumes really need more data to be preprocessed and stored (for static lighting that uses them), and they can involve some heavy processing on whole pieces of the level - how is this connected with the shaders/rendering features?
How will your system adapt to a scheme where we don't render a CBM for every object, but instead use the nearest precomputed CBM, with many of them spread through the level (HL2-style)?

10. Can these plugins also be plugged into the editor and the art pipeline, to process/create streaming content? If they can, what is the basic idea of the interface, and how do they share that data (or communicate with each other to form the final data layout of the level)?

11. You said, in the original thread: "By avoiding touching the core, you also avoid breaking other parts of your code as you add new features. You don't need knowledge about the engine internals either; everything runs over standardized interfaces."
So - can you or can't you change the rendering pipeline as radically as introducing predicated rendering? And what are these standardized interfaces that allow you to do so?

12. Shaders are dropped from use by the system if they can't be rendered on the current hardware. But some shaders have multiple passes, like the shader used in the reflective-refractive water example. A valid fallback could be to drop the refraction, for example. How do we do that with a system where the whole shader gets dropped, together with its passes (because it is registered in the system as one whole piece)?
It is more work to provide shaders that can fall back on just one or two aspects of the top-level shader.
I mean, a text-based shader system can solve that quite naturally.
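
To illustrate what I mean in points 1 and 12 by composing shaders from text (a deliberately simplified sketch, not our actual system): diffuse, specular and bump stay fragments of one source and end up in one compiled program, instead of one prebuilt shader per combination.

// Simplified text composition: effect fragments are concatenated into a single
// pixel shader source and compiled once, so the combination stays one pass.
#include <string>
#include <vector>

std::string composePixelShader(const std::vector<std::string>& fragments)
{
    std::string src = "float4 main(PSInput input) : COLOR {\n"   // PSInput is assumed
                      "    float4 result = 0;\n";
    for (const std::string& fragment : fragments)
        src += fragment;               // each fragment adds its term to 'result'
    src += "    return result;\n}\n";
    return src;                        // handed to the HLSL compiler afterwards
}

// e.g. composePixelShader({ diffuseSnippet, specularSnippet, bumpSnippet });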

So, I just want to clarify that system, because it seems to be quite a bit more complex and versatile than what is shown in the original Material/Shader thread, if it can override the rendering pipeline like you said.
Thanks if you even read this to the end :)

#31 hanstt   Members   -  Reputation: 259


Posted 18 September 2005 - 11:56 PM

Argh, trying to kill people? [smile] I'll reply to some of what you said...


I didn't post a lot in the other thread, mostly because I didn't care 100% about how Yann did things (where's the fun in programming if everything is given to you?), but I liked the idea. I stripped out a few things, designed some things in another way etc., but I got a renderer and shader system that pretty much implements all the concepts of the original approach.

It works beautifully.
What I have right now is this:

Content creation:
*) Models are assigned materials in whatever way the 3D modeler allows.
*) Exporters encode material information into a visual description for materials.

Runtime:
*) Material plugin loads materials which in turn request best fit shaders.
*) The renderer groups shaders and splits shader work into stages and passes.
*) Render queue is traversed, shaders are enabled/disabled etc.

Example of a shader that I added: transparency. This seems to be problematic, considering how often it's addressed on these forums. The general solution is to add a separate render queue that's traversed after the main geometry, which means altering the renderer. What I did was write a shader in a few minutes that correctly sorted transparent objects without ever touching the renderer. The renderer doesn't ever know there is a pre-Z pass, or maybe multiple passes for lighting using shadow volumes, or post-processing effects. Why is this better?
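
In spirit, that transparency shader is only a sort plus a draw; stripped of engine details it is no more than this (a reconstruction, not the actual code):

// A transparency "shader" that back-to-front sorts whatever the queue hands it
// and draws, without the core renderer knowing anything about transparency.
#include <algorithm>
#include <vector>

struct Renderable {
    float viewDepth;            // distance from the camera, filled per frame
    void  draw() const {}       // stub: would issue the actual draw call
};

struct TransparencyShader {
    void render(std::vector<Renderable*>& items)
    {
        std::sort(items.begin(), items.end(),
                  [](const Renderable* a, const Renderable* b) {
                      return a->viewDepth > b->viewDepth;   // farthest first
                  });
        for (const Renderable* r : items)
            r->draw();          // blend states are set up by the shader's passes
    }
};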

It's better in the sense that any effect can be plugged into the renderer. Let's say I've written shaders for shadow volumes and 3 released games use them. Suddenly I figure that shadow maps would be a lot neater, and a switch like that in 3 games would normally be lethal. I'm pretty sure, say, Doom 3 uses a rather static renderer, so a switch like that by id is never gonna happen.
Using my little renderer design, I could write a couple of shaders using shadow maps, compile them into a DLL (or SO) and distribute it. Voilà, all 3 games now have neater shadows without ever forcing any game developer to recompile and redistribute a single thing!


Zemedelec, you are right that one cannot "add" things that do not exist. For example, grass. There's no way we could write a shader that automagically adds grass everywhere appropriate if the game was never intended to have grass. The developer must supply new data specifying where the grass would be, but this is not about code. Pluggable shaders give us the ability to implement new effects anytime, but that's it. Fully procedural data can be generated by shaders (shadows, stars, lights etc.), but not things that must be specified by an artist (volumetric fog, grass, yadayada).


It's all about making it easier for the end developer to get updates to the graphics, and that's it. Nothing stops, say, artist-created shaders: write a specific shader that takes care of those, and you've got artist-created shaders [smile]


I'll leave the more implementation-dependent things to somebody else...


EDIT: I'll answer a few of those riddles you got there:

1) How this is handled depends on the shaders, not the overall shader system. The group-and-split approach I used will share resources and cache states very well, and minor misses in shader program code will be negligible in comparison.

2) Let's say one enables night vision. I dunno how night vision works, so I can't say anything about lighting; let's say my first try would be pretty bad. A month later, I've read up on night vision and devised a new algo => a couple of new shaders and we're done. If the change is radical, scripts won't do.
Switching night vision on/off is just as tricky in script-based as in DLL-based renderers, so scripts have no advantage over DLLs.
Please somebody, improve this answer...

3) I don't get this bit: objects have materials that can hold information about what the objects look like, and proper shaders will be assigned to all materials. Shaders do what they have to do with the objects they are called with, and that's it.

4) As mentioned, one cannot add new features at that level. To have interactive water, one would need to create a water object type. Luckily, since that part is pluggable in my engine too, that's not a problem [wink]

5) Depends on the engine, not the shader system.

6) Shader LODding? Due to how the shader system works, one could even let bits of the terrain use high quality shaders instead of assigning one boring shader to it all.

7) Sunbeams are volumetric entities that are not on surfaces. Sunbeams should be added as a new object type (yay for pluggable system!).

8) Shaders don't own information, they only process objects. RTs are generally assigned to the objects that cause the RTs, for example lights cause shadow maps so they own 'em.

9) One solution: let shaders do that. All renderables in my engine have a "shader bag" in which any shader can store some info (see the sketch after these answers). If you want shadow volumes, let the shader use some system to cache them (a plugin, of course) and render.
Preprocessed probes: a new shader with that feature? Takes a couple of minutes to write...

10) Shaders shouldn't stream data; they only know what to do with geometry to make it look good. Streaming should be issued at a higher level as the viewing frustum moves around, and the available geometry wanted for rendering should be queued in the renderer.

11) Read the first part of my post, this design is more flexible than any hardcoded script-based renderer.

12) The beauty in this case is that using DLLs, we can control precisely which features we want to trash. Script-based shaders will fall back to the renderer default without letting the artist mess around a little. Another point for the DLL approach!
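
As promised in 9), roughly how I think of the "shader bag": a per-renderable slot where each shader can stash its own cached data, keyed by the shader's name (names made up, not my actual code).

// Guess at the mechanics of a "shader bag" for caching per-renderable data,
// e.g. extruded shadow volume geometry.
#include <map>
#include <memory>
#include <string>

struct ShaderBag {
    std::map<std::string, std::shared_ptr<void>> slots;

    template <typename T>
    std::shared_ptr<T> get(const std::string& shaderName) {
        auto it = slots.find(shaderName);
        if (it == slots.end())
            return nullptr;
        return std::static_pointer_cast<T>(it->second);
    }

    template <typename T>
    void put(const std::string& shaderName, std::shared_ptr<T> data) {
        slots[shaderName] = data;     // shared_ptr<void> keeps the right deleter
    }
};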

[Edited by - coelurus on September 19, 2005 6:56:02 AM]

#32 Zemedelec   Members   -  Reputation: 229


Posted 19 September 2005 - 12:20 AM

Quote:
Original post by coelurus
Argh, trying to kill people? [smile] I'll reply to some of what you said...


Actually... trying to save people ;)

Quote:
Original post by coelurus
Content creation:
*) Models are assigned materials in whatever way the 3D modeler allows.
*) Exporters encode material information into a visual description for materials.

How do artists specify the shading of their art? Through the general modeling package interface, or through your own? I mean, what level of abstraction do they use to describe it - "spec_power = diffuse.alpha, bump+reflection+gloss" or "bumpy-glossy-reflective"?
Does it meet their expectations?

Quote:
Original post by coelurus
Runtime:
*) Material plugin loads materials which in turn request best fit shaders.

Is there a quick preview? How do you control quality on different video hardware?

Quote:

Example of a shader that I added: transparency.

I wholeheartedly agree that part of the rendering pipeline can be offloaded to pluggable code, but this is pretty trivial when it comes to sorting rendering operations.
What I don't understand is how you can make the switch from shadow volumes to shadow maps so easily.
Disabling all the shadow volume code is easy (override the shader), but what if they have tons of static shadow volumes stored in the level? OK, the shader just doesn't load them.
(If the switch were the other way around, I'd wonder who would generate that data for you, but anyway... :)
This basically means you have a distinct shadow pass, so you really have the freedom to fill it as you wish.
And the limitation of being unable to shadow & render an object in a single pass.
It is rather dictated by the somewhat limiting assumption that we need to build shading information in an RT to shade our world.
If this is what's wanted - I agree.

Quote:

It's better in the sense that any effect can be plugged into the renderer. Let's say I've written shaders for shadow volumes and 3 released games use them. Suddenly I figure that shadow maps would be a lot neater, and a switch like that in 3 games would normally be lethal.


Side note: you are working for no money here... :)
Unless you plan to attract customers with the upgraded graphics, which I can hardly believe... :)

Quote:

I'm pretty sure, say, Doom 3 uses a rather static renderer, so a switch like that by id is never gonna happen.

They integrated shadow maps very easily, AFAIK - without having any pluggable architecture, just a clean renderer design, I suppose... :)

Quote:

Using my little renderer design, I could write a couple of shaders using shadow maps, compile them into a DLL (or SO) and distribute it. Voilà, all 3 games now have neater shadows without ever forcing any game developer to recompile and redistribute a single thing!


We talked about competitiveness. That is when you can create games quickly, they are stable, and the production pipeline is really strong.
Introducing new shaders/shading methods into the pipeline can be done very nicely without any plugin architecture, imho.
Keeping artists happy is more important/competitive than keeping some programmers from installing the engine source on their computers... :)

Quote:

There's no way we could write a shader that automagically adds grass everywhere appropriate if the game was never intended to have grass.

Yann said it is possible to add render features, which, say, fog really is.
I'd like to hear what the interface for those plugins is.
Because it is doable, I suppose.

Thanks for the response, very appreciated! :)

#33 hanstt   Members   -  Reputation: 259


Posted 19 September 2005 - 12:40 AM

Sorry for not quoting, it takes too much time and I got studies to take care of :)

---
Atm, I have no interface for setting materials, but I can't see how the renderer design under inspection stops me from creating any WYSIWYG graphical material creation tools.

---
Quick preview: call 'pluginShaderDB->rebuild()', wait a couple of fractions of a second and the shader database is done. That's a rather quick preview I'd say.

---
Do people store shadow volumes in levels? That sounds kinda dangerous, especially considering (in my case) that I could plug in procedural geometry for various parametric surfaces in levels. What good do precalculated shadow volumes do then?

SV -> SM: shaders originally computed and cached SVs for objects, rendered to the stencil buffer now and then and filled the frame buffer. I scrap all that code and there will be no SVs anywhere. I add shaders using SMs, which request preprocessing shaders for RTs, which are assigned to the lights and later used to map onto the geometry. Done!

SM -> SV: As above but reversed [wink]

---
I do all this because it's fun. Where's the fun in writing a renderer everybody else has seen?

---
Have you read the talk by Carmack about their next game? Read it - he explains a few things about the shadowing in Doom 3, one of them being that SMs were not the main shadowing technique for the entire world.

---
I don't see how the shader system we're talking about decreases the abilities of the artists. I'd say they increase, as artists will get newer and newer features all the time, without limits - features that are practically impossible to implement in hardcoded rendering engines, script-based ones included.

---
Render features can be added anytime - including grass, human skin, fog, water, clouds etc. - but observe that this is extremely different from actually using those features everywhere appropriate in scenes! If the artist has made something that should look like water, the proper shader for water will be used, but that doesn't give the world a water object.

We're talking about an extendable renderer, not an extendable world - think a little about that.

#34 Zemedelec   Members   -  Reputation: 229


Posted 19 September 2005 - 04:08 AM

Quote:
Original post by coelurus
1) How this is handled depends on the shaders, not the overall shader system. The group and split approach I used will share resources and cache states very well and minor misses in shader program code will in comparison be nil.


Caching shared resources is very easy and doable.
But this very system is built *on* the assumption of *decomposing* an effect into a set of *prebuilt* shaders.
So I can't see how this will prevent the system from decomposing some effects into more passes than needed - if, of course, you fail to provide ALL the smaller shader combinations.

Quote:
Original post by coelurus
2) Let's say one enables nightvision.

No, wait, you missed the point.
It was: how do you change the shading method of objects?

Quote:
Original post by coelurus
3) I don't get this bit, objects have materials that can hold information about what the objects look like and proper shaders will be assigned to all materials.

Shaders are assigned a single time, but rendering conditions are not just what the artist described; they are also affected by the dynamic game world. Lights, changing fog, the day/night cycle - all of that changes the overall lighting scheme.
How does your system handle that?

Quote:
Original post by coelurus
5) Depends on the engine, not the shader system.

Come on.
The shader system shades objects.
The artist wants to see the final result, rendered by your shading system. No more, no less.

Quote:
Original post by coelurus
6) Shader LODding? Due to how the shader system works, one could even let bits of the terrain use high quality shaders instead of assigning one boring shader to it all.

To clarify myself again - you describe shading as something abstract, not tied to the game objects.
And I want fallbacks that depend on game objects.
How do we get different fallbacks for a shader that is "bumped diffuse glossy specular"? Create two shaders?

Quote:
Original post by coelurus
7) Sunbeams are volumetric entities that are not on surfaces. Sunbeams should be added as a new object type (yay for pluggable system!).

I understand your system: you have pluggable shaders.
Yann said he has pluggable render features, and that he can plug in something like that too... and I was just curious... :)
But... about my concern (7) - you missed the point.
It was: we encode some lighting scheme in our scene graph, which introduces different lighting schemes - how are they passed to the objects, and how do we manage to run different shaders for them?

Quote:
Original post by coelurus
8) Shaders don't own information, they only process objects. RTs are generally assigned to the objects that cause the RTs, for example lights cause shadow maps so they own 'em.

Your sun owns the shadow map, OK.
So you have a sun object that uses a shader to cast shadows via a shadow map...?!
How does your sun compute the shadow map matrix?
And how do the shaders that rely on it get that matrix? Some general registry?

Quote:
Original post by coelurus
9) One solution: let shaders do that.

So your system is extendable to the point that you can add objects in your editor, let them compute CBMs in a preprocess, and then query for the nearest such CBM and use it in the shader...?
Are you sure? :)

Quote:
Original post by coelurus
10) Shaders shouldn't stream data, they only know what to do with geometry to make it look good. Streaming should be issued at a higher level when the viewing frustum moves around and available geometry wanted for rendering should be queued in the renderer.

OK, let's say your shaders can't load in-place then...

Quote:
Original post by coelurus
11) Read the first part of my post, this design is more flexible than any hardcoded script-based renderer.

Which hardcoded script-based renderer are you speaking of?

Quote:
Original post by coelurus
12) The beauty in this case is that using DLLs, we can control precisely what features we want to trash. Script-based shaders will do the renderer default without letting the artist mess around a little. Another point for the DLL approach!

Let me explain again - shaders that are multipass by nature, where the passes aren't exposed to the resolving system, are integral and whole.
If one of the passes fails, the whole shader gets thrown away.
There is no fallback control over a single internal pass.
My script-based system is tuned so that for every family of shaders (a single source file) we have pins that visually control just about every aspect of the shading.
We can turn them on/off, one by one, depending on the fallback we want.
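
Concretely, the "pins" are nothing more than per-feature switches for one shader family that end up as preprocessor defines before the single source file is compiled (a simplified sketch, not our actual tool):

// Pins for one shader family, turned into defines that gate code paths
// inside the single shader source.
#include <string>

struct ShaderPins {
    bool refraction = true;
    bool gloss      = true;
    bool bump       = true;
};

std::string pinsToDefines(const ShaderPins& pins)
{
    std::string defines;
    if (pins.refraction) defines += "#define PIN_REFRACTION 1\n";
    if (pins.gloss)      defines += "#define PIN_GLOSS 1\n";
    if (pins.bump)       defines += "#define PIN_BUMP 1\n";
    return defines;   // prepended to the shader source before compiling
}

// A fallback for weaker hardware just clears the pins it can't afford,
// e.g. pins.refraction = false, and recompiles the same source.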

#35 Zemedelec   Members   -  Reputation: 229


Posted 19 September 2005 - 04:19 AM

Just for the record - how much space is there on the gamedev.net servers...? :)

Quote:
Original post by coelurus
Atm, I have no interface for setting materials, but I can't see how the renderer design under inspection stops me from creating any wysiwyg graphical material creation tools?

You can create graphical materials.
Can your artists create graphical materials...?

Quote:
Original post by coelurus
Do people store shadow volumes in levels? That sounds kinda dangerous, especially considering (in my case) that I could plug in procedural geometry for various parametric surfaces in levels. What good does precalced shadow volumes do then?

Shadow volumes are stored only for static geometry <-> static lights. See Doom 3, for example.

Quote:
Original post by coelurus
I do all this because it's fun. Where's the fun in writing a renderer everybody else has seen?

The shader DLL has nothing to do with visuals... :)
It is fun, but only for a coder; your potential users can't see it anyway...

Quote:
Original post by coelurus
Read the talk by Carmack about their next game? Read it, he explains a few things about the shadowing in Doom 3, one of them being that SMs were not the main shadowing technique for the entire world.

There aren't any SMs in Doom 3. In the next revision they just add SMs, leaving the shadow volumes there too, AFAIR.

Quote:
Original post by coelurus
I don't see how the shader system we talk about decreases the abilities for the artists? I'd say they rise as artists will get newer and newer features all the time without bounds that are practically impossible to implement into hardcoded rendering engines, that including script-based ones.

I think there must be some hardcoded rendering engine out there that everybody sees but me... I'm confused, sorry... :)
You can get new visual features without such a system too - how does it relate to the new visual features?
One more thing - your artists most probably want to modify and tune shading properties, not just use predefined ones (by modifying I mean not only base textures, but lighting formulas - for example, scrap diffuse+ambient+specular and create something more appropriate for a given effect).

#36 hanstt   Members   -  Reputation: 259


Posted 19 September 2005 - 05:10 AM

This leads nowhere, it's apparent that we use different solutions for rendering and the phenomenon of constant misinterpretation on the Internet kicks in everywhere [smile]

It would be a good idea to say a couple of final words on this from my side:

The shader system that has been under discussion is not simple from the viewpoint of the original developer, and it can be very hard to grasp all of its capabilities. End developers can see their products automagically patched without any intervention from their side, and artists will benefit from that as well.

The shader system is not almighty - it does its work only at runtime - but nobody has said that it's impossible to couple it with artist-friendly tools during content creation. WYSIWYG tools are totally independent of the renderer design at a high level.

A shader can be written to manage an entire renderer by itself, including a script-based one, but the opposite is not a very pretty story...

Anyway, that's 'nuff, I hope some of this discussion has been interesting to some people.

#37 zedzeek   Members   -  Reputation: 528


Posted 19 September 2005 - 08:49 AM

Quote:
They integrated shadow maps very easily, AFAIK - without having any pluggable architecture, just a clean renderer design, I suppose... :)

I'm guessing they do something similar to what I do (something I'm proud of: a very easy to expand/change engine).
Plugins are great for a lot of things, but I believe the rendering engine is not one of them.
There will be future rendering techniques that won't work with a plugin.

#38 python_regious   Members   -  Reputation: 929


Posted 19 September 2005 - 02:49 PM

Quote:
Original post by zedzeek
Quote:
They integrated shadow maps very easily, AFAIK - without having any pluggable architecture, just a clean renderer design, I suppose... :)

I'm guessing they do something similar to what I do (something I'm proud of: a very easy to expand/change engine).
Plugins are great for a lot of things, but I believe the rendering engine is not one of them.
There will be future rendering techniques that won't work with a plugin.


Care to expand on what you do?
If at first you don't succeed, redefine success.

#39 AdAvis   Members   -  Reputation: 518


Posted 19 September 2005 - 03:10 PM

Quote:
Original post by coelurus
This leads nowhere, it's apparent that we use different solutions for rendering and the phenomenon of constant misinterpretation on the Internet kicks in everywhere [smile]


An unfortunate truth. However, instead of debating which is a better method, shouldn't we be discussing how to integrate the best of both systems?

Without having implemented a system like Yann's (or coelurus'), I have several concerns similar to Zemedelec's with respect to the plugin-based system. Perhaps I've misread or forgotten some of the points in the original plugin shader architecture thread, but I'll post my concerns anyway.

Say I'm implementing a plugin for a general lighting equation. As Zemedelec asserted, different geometry chunks are going to want to supply different (artist-supplied) values for the equation's parameters. How would the GC pass these values to the appropriate plugin?

1) Subclassing geometry_chunk? This could work; a preprocessing step could be taken to validate each GC against its assigned shader (if the assignment is invalid, spawn a warning and assign a basic diffuse shader to the GC).

Sorry if this issue has already been answered, any thoughts would be appreciated.

[Edited by - mhamlin on September 19, 2005 10:10:16 PM]

#40 centipede   Members   -  Reputation: 304


Posted 19 September 2005 - 08:21 PM

Quote:
Original post by mhamlin
Say I'm implementing a plugin for a general lighting equation. As Zemedelec asserted, different geometry chunks are going to want to supply different (artist-supplied) values for the equation's parameters. How would the GC pass these values to the appropriate plugin?

If I understood correctly, the shader plugin has a shader_params() method which takes the current GeometryChunk as a parameter (and maybe also something else, I'm not sure about that). With this method, the shader picks up all the data it needs from the chunk, like material diffuse color, textures, etc.
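
Something like this, roughly (just a guess at the shape of it, not the actual code from the original thread):

// Guess at shader_params(): the plugin pulls only the values it understands
// out of the chunk it is handed.
struct GeometryChunk {
    float diffuseColor[4];
    void* diffuseMap;          // placeholder for a texture handle
    void* normalMap;
    float specularPower;
};

class LightingShader {
public:
    void shader_params(const GeometryChunk& chunk) {
        for (int i = 0; i < 4; ++i)
            diffuseColor[i] = chunk.diffuseColor[i];
        diffuseMap    = chunk.diffuseMap;
        specularPower = chunk.specularPower;
    }
private:
    float diffuseColor[4];
    void* diffuseMap;
    float specularPower;
};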



