Argh, trying to kill people? [smile] I'll reply to some of what you said...
I didn't post much in the other thread, mostly because I didn't care 100% about how Yann did things (where's the fun in programming if everything is given to you?), but I liked the idea. I stripped out a few things, designed some things another way etc., but I ended up with a renderer and shader system that implements pretty much all the concepts of the original approach.
It works beautifully.
What I have right now is this:
Content creation:
*) Models are assigned materials in whatever way the 3D modeler allows.
*) Exporters encode the material information into a visual description for each material.
Runtime:
*) The material plugin loads materials, which in turn request best-fit shaders.
*) The renderer groups shaders and splits the shader work into stages and passes.
*) The render queue is traversed and shaders are enabled/disabled as needed; a rough sketch of this flow follows below.
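To give an idea of the overall shape, here is a minimal sketch of that runtime flow. Keep in mind this is my own illustration with made-up names (IShader, traverse and so on), not Yann's actual code:

#include <cstddef>
#include <string>
#include <utility>
#include <vector>

struct GeometryChunk { /* one renderable batch plus its material data */ };

class IShader {
public:
    virtual ~IShader() {}
    // Score how well this shader fits a material description; the
    // material system picks the best fit among all loaded plugins.
    virtual int  matchMaterial(const std::string& desc) const = 0;
    virtual void begin() = 0;                    // set states for the group
    virtual void render(GeometryChunk& gc) = 0;  // draw one chunk
    virtual void end() = 0;                      // restore states
};

// The queue is sorted by shader, so begin()/end() run once per group
// instead of once per object; that's the "grouping" part.
void traverse(std::vector<std::pair<IShader*, GeometryChunk*> >& queue)
{
    IShader* current = 0;
    for (std::size_t i = 0; i < queue.size(); ++i) {
        if (queue[i].first != current) {
            if (current) current->end();
            current = queue[i].first;
            current->begin();
        }
        current->render(*queue[i].second);
    }
    if (current) current->end();
}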
An example of a shader I added: transparency. This seems to be problematic, considering how often it's brought up on these forums. The usual solution is to add a separate render queue that's traversed after the main geometry, which means altering the renderer. Instead, I wrote a shader in a few minutes that correctly sorts transparent objects without ever touching the renderer. The renderer never knows there is a pre-Z pass, or multiple lighting passes using shadow volumes, or post-processing effects.
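Conceptually, that transparency shader boils down to something like this (a rough sketch with hypothetical names; collect chunks during the queue walk, sort and draw at the end):

#include <algorithm>
#include <vector>

struct TransparentChunk {
    float viewDepth;   // distance from the camera, filled in per frame
    // ...handle to the actual geometry...
};

static bool backToFront(const TransparentChunk& a, const TransparentChunk& b)
{
    return a.viewDepth > b.viewDepth;   // farthest first
}

class TransparencyShader {
    std::vector<TransparentChunk> batch;
public:
    // Called per object: just collect, don't draw yet.
    void render(const TransparentChunk& tc) { batch.push_back(tc); }

    // Called once the opaque geometry is done: sort and emit draws.
    void flush()
    {
        std::sort(batch.begin(), batch.end(), backToFront);
        // for each entry: enable blending, draw, ...
        batch.clear();
    }
};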
Why is this better? Any effect can be plugged into the renderer. Say I've written shaders for shadow volumes and 3 released games use them. Then I figure shadow maps would be a lot neater, but a switch like that in 3 shipped games would be lethal. I'm pretty sure, say, Doom 3 uses a rather static renderer, so a switch like that by id is never gonna happen.
Using my little renderer design, I could write a couple of shaders using shadow maps, compile them into a DLL (or SO) and distribute it. Voila, all 3 games now have neater shadows without ever forcing any game developer to recompile or redistribute a single thing!
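The DLL boundary can be as thin as a couple of exported factory functions. Something like this is what I have in mind (the symbol names are invented for illustration):

#include <cstddef>

class IShader;   // the interface the engine already links against

#if defined(_WIN32)
#  define PLUGIN_EXPORT extern "C" __declspec(dllexport)
#else
#  define PLUGIN_EXPORT extern "C"
#endif

// The engine scans every plugin DLL/SO for these two symbols; the
// plugin defines them and hands out its shader instances.
PLUGIN_EXPORT std::size_t shaderCount();
PLUGIN_EXPORT IShader*    createShader(std::size_t index);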
Zemedelec, you are right that one cannot "add" things that do not exist. For example, grass. There's no way we could write a shader that automagically adds grass everywhere appropriate if the game was never intended to have grass. The developer must ship new data specifying where the grass goes, but that's about data, not code. Pluggable shaders give us the ability to implement new effects at any time, but that's it. Fully procedural data can be generated by shaders (shadows, stars, lights etc.), but not things that must be placed by an artist (volumetric fog, grass, yadayada).
It's all about making it easier for the end developer to get updates to the graphics, and that's it. Nothing rules out, say, artist-created shaders either: write a specific shader that takes care of those and you've got artist-created shaders [smile]
I'll leave the more implementation-dependent things to somebody else...
EDIT: I'll answer a few of those riddles you got there:
1) How this is handled depends on the shaders, not the overall shader system. The group-and-split approach I use shares resources and caches states very well, and the minor misses in shader program code will be negligible by comparison.
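To show what I mean by state caching, a minimal sketch (the actual GL/D3D calls are commented out to keep it self-contained, and all the names are made up):

struct StateCache {
    unsigned boundTexture;
    bool     blending;

    StateCache() : boundTexture(0), blending(false) {}

    void bindTexture(unsigned tex) {
        if (tex == boundTexture) return;   // cache hit: no driver call at all
        boundTexture = tex;
        // glBindTexture(GL_TEXTURE_2D, tex);
    }

    void enableBlend(bool on) {
        if (on == blending) return;
        blending = on;
        // on ? glEnable(GL_BLEND) : glDisable(GL_BLEND);
    }
};

Redundant state changes between chunks grouped under the same shader simply vanish.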
2) Let's say one enables nightvision. I don't know how nightvision works, so I can't say anything about the lighting; let's just say my first try would be pretty bad. A month later I've read up on nightvision and devised a new algorithm => a couple of new shaders and we're done. If the change is radical, scripts won't do either.
Switching nightvision on/off is just as tricky in a script-based renderer as in a DLL-based one, so scripts have no advantage over DLLs there.
Please somebody, improve this answer...
3) I don't get this bit. Objects have materials that hold information about what the objects look like, and proper shaders are assigned to all materials. Shaders do what they have to do with the objects they are called with, and that's it.
4) As mentioned, one cannot add new features at that level. To have interactive water, one would need to create a water object type. Luckily, since that part of my engine is pluggable too, that's not a problem [wink]
5) Depends on the engine, not the shader system.
6) Shader LODding? Because of how the shader system works, one could even let individual bits of the terrain use high-quality shaders instead of assigning one boring shader to the whole thing.
7) Sunbeams are volumetric entities, not something that lives on surfaces. Sunbeams should be added as a new object type (yay for pluggable systems!).
8) Shaders don't own information, they only process objects. RTs are generally assigned to the objects that cause them; for example, lights cause shadow maps, so lights own them.
9) One solution: let shaders do that. Every renderable in my engine has a "shader bag" in which any shader can store some info. If you want shadow volumes, let the shader cache them through some system (a plugin, of course) and render.
Preprocessed probes? A new shader with that feature takes a couple of minutes to write...
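For the curious, the shader bag itself can be dead simple; a sketch with hypothetical names:

#include <map>

class IShader;

// Every renderable carries one of these; a shader uses its own 'this'
// pointer as the key, so different shaders never collide.
struct ShaderBag {
    std::map<const IShader*, void*> slots;

    void put(const IShader* owner, void* data) { slots[owner] = data; }

    void* get(const IShader* owner) const {
        std::map<const IShader*, void*>::const_iterator it = slots.find(owner);
        return it == slots.end() ? 0 : it->second;
    }
};

A shadow volume shader would call get() for its cached volume, extrude and put() it on a miss, and just render on a hit.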
10) Shaders shouldn't stream data; they only know what to do with geometry to make it look good. Streaming should be issued at a higher level, when the viewing frustum moves around, and the geometry wanted for rendering should be queued in the renderer.
11) Read the first part of my post; this design is more flexible than any hardcoded script-based renderer.
12) The beauty here is that using DLLs, we can control precisely which features we want to drop. Script-based shaders will fall back to the renderer default without letting the artist mess around at all. Another point for the DLL approach!
[Edited by - coelurus on September 19, 2005 6:56:02 AM]
Quote:Original post by coelurus
Argh, trying to kill people? [smile] I'll reply to some of what you said...
Actually... trying to save people ;)
Quote:Original post by coelurus
Content creation:
*) Models are assigned materials in whatever way the 3D modeler allows.
*) Exporters encode the material information into a visual description for each material.
How do artists specify the shading of their art? Through the general modeling package's interface, or through your own? I mean, what level of abstraction do they use to describe it: "spec_power = diffuse.alpha, bump+reflection+gloss" or "bumpy-glossy-reflective"?
Does it meet their expectations?
Quote:Original post by coelurus
Runtime:
*) The material plugin loads materials, which in turn request best-fit shaders.
Is there a quick preview? How do you control quality on different video hardware?
Quote:
An example of a shader I added: transparency.
I wholeheartedly agree that this part of the rendering pipeline can be offloaded to pluggable code, but it's pretty trivial when it comes to sorting rendering operations.
What I don't understand is how you can make the switch from shadow volumes to shadow maps so easily.
Disabling all the shadow volume code is easy (override the shader), but what if there are tons of static shadow volumes stored in the level? OK, the shader just doesn't load them.
(If the switch went the other way, I'd wonder who would generate that data for you, but anyway... :)
This basically means you have a distinct shadow pass, so you really do have the freedom to fill it as you wish.
And the limitation of being unable to shadow and render an object in a single pass.
That is rather dictated by the somewhat limiting assumption that we need to build the shading information in an RT to shade our world.
If this is what you want - fine, I agree.
Quote:
Any effect can be plugged into the renderer. Say I've written shaders for shadow volumes and 3 released games use them. Then I figure shadow maps would be a lot neater, but a switch like that in 3 shipped games would be lethal.
Side note: you'd be working for no money here... :)
Unless you plan to attract customers with the upgraded graphics, which I can hardly believe you don't... :)
Quote:
I'm pretty sure, say, Doom 3 uses a rather static renderer, so a switch like that by id is never gonna happen.
They integrated shadow maps very easily, AFAIK. Without having any pluggable architecture, just a clean renderer design, I suppose... :)
Quote:
Using my little renderer design, I could write a couple of shaders using shadow maps, compile them into a DLL (or SO) and distribute it. Voila, all 3 games now have neater shadows without ever forcing any game developer to recompile or redistribute a single thing!
We talked about competitiveness. That means being able to create games quickly, have them be stable, and have a really strong production pipeline.
Introducing new shaders/shading methods into the pipeline can be done very nicely without any plugin architecture, IMHO.
Keeping artists happy is more important/competitive than keeping some programmers from installing the engine source on their computers... :)
Quote:
There's no way we could write a shader that automagically adds grass everywhere appropriate if the game was never intended to have grass.
Yann said it is possible to add render features, which fog, say, really is.
I'd like to hear what the interface for those plugins looks like.
Because it is doable, I suppose.
Thanks for the response, much appreciated! :)
Sorry for not quoting, it takes too much time and I have studies to take care of :)
---
Atm, I have no interface for setting materials, but I can't see how the renderer design under inspection stops me from creating wysiwyg graphical material creation tools?
---
Quick preview: call 'pluginShaderDB->rebuild()', wait a fraction of a second and the shader database is done. That's a rather quick preview, I'd say.
---
Do people store shadow volumes in levels? That sounds kind of dangerous, especially considering that (in my case) I could plug procedural geometry for various parametric surfaces into levels. What good do precalculated shadow volumes do then?
SV -> SM: The shaders originally computed and cached SVs for objects, rendered to the stencil buffer now and then, and filled the frame buffer. I scrap all that code and there are no SVs anywhere. I add shaders using SMs, which request preprocess shaders for RTs; the RTs are assigned to the lights and later used to project onto the geometry. Done!
SM -> SV: As above, but reversed [wink]
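To make the RT ownership concrete, here is roughly how a light could own its shadow map in this scheme (a sketch only; RenderTarget stands in for a real FBO/pbuffer wrapper, and all the names are made up):

#include <cstddef>

// Stand-in for a depth render target.
struct RenderTarget {
    std::size_t width, height;
    RenderTarget(std::size_t w, std::size_t h) : width(w), height(h) {}
};

struct Light {
    RenderTarget* shadowMap;
    Light() : shadowMap(0) {}
    ~Light() { delete shadowMap; }

    // The shadow-map shader asks the light for its RT; the first call
    // creates it and queues the preprocess pass that renders depth
    // from the light's point of view.
    RenderTarget* acquireShadowMap() {
        if (!shadowMap) {
            shadowMap = new RenderTarget(1024, 1024);
            // ...queue the preprocess pass here...
        }
        return shadowMap;
    }
};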
---
I do all this because it's fun. Where's the fun in writing a renderer everybody else has seen?
---
Have you read the talk by Carmack about their next game? Read it; he explains a few things about the shadowing in Doom 3, one of them being that SMs were not the main shadowing technique for the entire world.
---
I don't see how the shader system we're talking about decreases the abilities of the artists? I'd say they grow: artists will keep getting newer and newer features, features that are practically impossible to retrofit into hardcoded rendering engines, script-based ones included.
---
Render features can be added at any time, including grass, human skin, fog, water, clouds etc., but note that this is very different from actually using those features everywhere appropriate in the scenes! If the artist has made something that should look like water, the proper shader for water will be used, but that doesn't give the world a water object.
We're talking about an extendable renderer, not an extendable world; think a little about that.
Quote:Original post by coelurus
1) How this is handled depends on the shaders, not the overall shader system. The group-and-split approach I use shares resources and caches states very well, and the minor misses in shader program code will be negligible by comparison.
Caching shared resources is very easy and doable.
But this very system is built *on* the assumption of *decomposing* an effect into a set of *prebuilt* shaders.
So I can't see what prevents the system from decomposing some effects into more passes than needed, if you fail to provide ALL the smaller shader combinations.
Quote:Original post by coelurus
2) Let's say one enables nightvision.
No, wait, you missed the point.
It was: how do you change the shading method of objects?
Quote:Original post by coelurus
3) I don't get this bit. Objects have materials that hold information about what the objects look like, and proper shaders are assigned to all materials.
Shaders are assigned a single time, but the rendering conditions are not just what the artist described; they are also affected by the dynamic game world. Lights, changing fog, the day/night cycle - all of these change the overall lighting scheme.
How does your system handle that?
Quote:Original post by coelurus
5) Depends on the engine, not the shader system.
Come on.
The shader system shades objects.
The artist wants to see the final result, rendered by your shading system. No more, no less.
Quote:Original post by coelurus
6) Shader LODding? Because of how the shader system works, one could even let individual bits of the terrain use high-quality shaders instead of assigning one boring shader to the whole thing.
To clarify again - you describe shading as something abstract, not tied to the game objects.
And I want fallbacks that depend on the game objects.
How do we get different fallbacks for a shader that is "bumped diffuse glossy specular"? Create two shaders?
Quote:Original post by coelurus
7) Sunbeams are volumetric entities, not something that lives on surfaces. Sunbeams should be added as a new object type (yay for pluggable systems!).
I understand your system; you have pluggable shaders.
Yann said he has pluggable render features, and those could plug in something like that too... and I was just curious... :)
But... I see from my concern (7) that you missed the point.
It was: we encode some lighting scheme in our scene graph, which introduces different lighting schemes - how are they passed to the objects, and how do we manage to run different shaders for them?
Quote:Original post by coelurus
8) Shaders don't own information, they only process objects. RTs are generally assigned to the objects that cause them; for example, lights cause shadow maps, so lights own them.
Your sun owns the shadowmap, OK.
So you have a sun object that uses a shader to cast shadows via a shadowmap...?!
How does your sun compute the shadowmap matrix?
And how do the shaders that rely on it get that matrix? Some general registry?
Quote:Original post by coelurus
9) One solution: let shaders do that.
So your system is extendable to the point where you can add objects in your editor, let them compute cbm-s (cube maps) in a preprocess step, and then query for the nearest such cbm and use it in the shader...?
Are you sure? :)
Quote:Original post by coelurus
10) Shaders shouldn't stream data; they only know what to do with geometry to make it look good. Streaming should be issued at a higher level, when the viewing frustum moves around, and the geometry wanted for rendering should be queued in the renderer.
OK, let's say your shaders can't load in place, then...
Quote:Original post by coelurus
11) Read the first part of my post; this design is more flexible than any hardcoded script-based renderer.
Which hardcoded script-based renderer are you speaking of?
Quote:Original post by coelurus
12) The beauty here is that using DLLs, we can control precisely which features we want to drop. Script-based shaders will fall back to the renderer default without letting the artist mess around at all. Another point for the DLL approach!
Let me explain again - shaders that have a multipass nature, where the passes aren't exposed to the resolving system, are integral and whole.
If one of the passes fails, the whole shader gets thrown away.
There is no fallback control over a single internal pass.
My script-based system is tuned so that every family of shaders (a single source file) has pins that control just about every visual aspect of the shading.
We can turn them on/off one by one, depending on the fallback we want.
Just for the record - how much space is there on the gamedev.net servers...? :)
Quote:Original post by coelurus
Atm, I have no interface for setting materials, but I can't see how the renderer design under inspection stops me from creating wysiwyg graphical material creation tools?
You can create graphical materials.
But can your artists create graphical materials...?
Quote:Original post by coelurus
Do people store shadow volumes in levels? That sounds kind of dangerous, especially considering that (in my case) I could plug procedural geometry for various parametric surfaces into levels. What good do precalculated shadow volumes do then?
Shadow volumes are stored only for static geometry <-> static lights. See Doom 3, for example.
Quote:Original post by coelurus
I do all this because it's fun. Where's the fun in writing a renderer everybody else has seen?
The shader DLL has nothing to do with the visuals... :)
It is fun, but only for a coder; your potential users can't see it anyway...
Quote:Original post by coelurus
Have you read the talk by Carmack about their next game? Read it; he explains a few things about the shadowing in Doom 3, one of them being that SMs were not the main shadowing technique for the entire world.
There aren't any SMs in Doom 3. In the next revision they just added SMs, leaving the shadow volumes in there too, AFAIR.
Quote:Original post by coelurus
I don't see how the shader system we're talking about decreases the abilities of the artists? I'd say they grow: artists will keep getting newer and newer features, features that are practically impossible to retrofit into hardcoded rendering engines, script-based ones included.
I think there must be some hardcoded rendering engine out there that everybody sees but me... I'm confused, sorry... :)
You can get new visual features without such a system too, so how does it relate to new visual features?
One more thing - your artists most probably want to modify and tune the shading properties, not just use predefined ones (and by modifying I mean not only the base textures but the lighting formulas - for example, scrapping diffuse+ambient+specular and creating something more appropriate for a given effect).
This leads nowhere; it's apparent that we use different approaches to rendering, and the phenomenon of constant misinterpretation on the Internet kicks in everywhere [smile]
It would be a good idea to say a couple of final words on this from my side:
The shader system that has been under discussion is not simple from the viewpoint of the original developer, and it can be very hard to grasp all of its capabilities. End developers can see their products automagically patched without any intervention on their side, and artists will benefit from that as well.
The shader system is not almighty; it does its work only at runtime, but nobody has said that it's impossible to couple it with artist-friendly tools during content creation. Wysiwyg tools are totally independent of the renderer design at a high level.
A shader can be written to manage an entire renderer by itself, including a script-based one, but the opposite is not a very pretty story...
Anyway, that's 'nuff, I hope some of this discussion has been interesting to some people.
Quote:They integrated shadow maps very easily, AFAIK. Without having any pluggable architecture, just a clean renderer design, I suppose... :)
I'm guessing they do something similar to what I do (something I'm proud of: a very easy to expand/change engine).
Plugins are great for a lot of things, but I believe the rendering engine is not one of them.
There will be future rendering techniques that won't work with a plugin.
Quote:Original post by zedzeek
Quote:They integrated shadow maps very easily, AFAIK. Without having any pluggable architecture, just a clean renderer design, I suppose... :)
I'm guessing they do something similar to what I do (something I'm proud of: a very easy to expand/change engine).
Plugins are great for a lot of things, but I believe the rendering engine is not one of them.
There will be future rendering techniques that won't work with a plugin.
Care to expand on what you do?
Quote:Original post by coelurus
This leads nowhere; it's apparent that we use different approaches to rendering, and the phenomenon of constant misinterpretation on the Internet kicks in everywhere [smile]
An unfortunate truth. However, instead of debating which method is better, shouldn't we be discussing how to integrate the best of both systems?
Without having implemented a system like Yann's (or coelurus'), I have several concerns similar to Zemedelec's with respect to the plugin-based system. Perhaps I've misread or forgotten some of the points in the original plugin shader architecture thread, but I'll post my concerns anyway.
Say I'm implementing a plugin for a general lighting equation. As Zemedelec asserted, different geometry chunks are going to want to supply different (artist-supplied) values for the equation's parameters. How would the GC pass these values to the appropriate plugin?
1) Subclassing geometry_chunk? This could work; a preprocessing step could validate each GC against its assigned shader (if the assignment is invalid, spawn a warning and assign a basic diffuse shader to the GC). A sketch of such a pass follows below.
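To be concrete, the preprocessing step I'm imagining would look something like this (all names here are hypothetical):

#include <cstddef>
#include <cstdio>
#include <vector>

struct GeometryChunk;

class IShader {
public:
    virtual ~IShader() {}
    // Can this shader consume the chunk's artist-supplied parameters?
    virtual bool accepts(const GeometryChunk& gc) const = 0;
};

struct GeometryChunk {
    IShader* shader;   // assigned at load time
    // ...artist-supplied parameters for the lighting equation...
};

// Run once after loading: any GC whose shader assignment is invalid
// falls back to a known-good basic diffuse shader, with a warning.
void validateChunks(std::vector<GeometryChunk>& chunks, IShader* basicDiffuse)
{
    for (std::size_t i = 0; i < chunks.size(); ++i) {
        GeometryChunk& gc = chunks[i];
        if (!gc.shader || !gc.shader->accepts(gc)) {
            std::fprintf(stderr, "warning: invalid shader assignment, "
                                 "falling back to basic diffuse\n");
            gc.shader = basicDiffuse;
        }
    }
}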
Sorry if this issue has already been answered; any thoughts would be appreciated.
[Edited by - mhamlin on September 19, 2005 10:10:16 PM]
Quote:Original post by mhamlin
Say I'm implementing a plugin for a general lighting equation. As Zemedelec asserted, different geometry chunks are going to want to supply different (artist-supplied) values for the equation's parameters. How would the GC pass these values to the appropriate plugin?
If I understood correctly, the shader plugin has a shader_params() method which takes the current GeometryChunk as a parameter (and maybe something else too, I'm not sure about that). With this method, the shader picks up all the data it needs from the chunk, like the material's diffuse color, textures, etc.
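A sketch of how I imagine the plugin side; only shader_params() and GeometryChunk come from the thread, the member layout is my guess:

struct Texture;   // opaque handle

// What the exporter wrote for one chunk's material (guessed layout).
struct GeometryChunk {
    float    diffuse[4];
    Texture* baseMap;
    Texture* bumpMap;
};

class LightingShader {
    float    diffuse[4];
    Texture* baseMap;
public:
    // Called before rendering each chunk: pull exactly the parameters
    // this shader understands; anything else in the chunk is ignored.
    void shader_params(const GeometryChunk& gc)
    {
        for (int i = 0; i < 4; ++i) diffuse[i] = gc.diffuse[i];
        baseMap = gc.baseMap;
    }
};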