Basiror

shader system implementation

45 posts in this topic

@Zemedelec:

OK, let's say you have a terrain mesh with four texture layers and a density map for grass.

In the shader description you could add the density entry:

shader someterrainshader
{
    grassdensity "xyz.raw"   // just Perlin noise with several octaves and an
                             // exponential filter applied

    cvar "gfx_drawgrass 1";  // at load time a cvar test is performed to see
                             // whether grass rendering is enabled at all
}
Now, as I mentioned in one of my earlier posts here, the engine has to offer an API so that the shader DLLs can interact with the object hierarchy and add new objects on demand.


In a preprocess step the shader would be called with the terrain batch. It retrieves the grass-density .raw file, sets up a new mesh or renderable object, and places all the information about the grass quads (or however you render your grass) into this renderable object.

I think a common way to render grass is to create a VBO plus a density factor for scaling the quads: if the density is too low, the quad is simply scaled to zero area and skipped by the OpenGL or D3D implementation.

So the renderable object would look like this:

renderable grass
{
    static VBO id;   // a single set of quad representations for all grass batches
    density map;     // per-batch density values for scaling the quads
}
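In C++ terms the renderable and its load-time setup might look roughly like this (just a sketch, all names made up, and the loader and VBO calls are stubs):

#include <string>
#include <vector>

// Stub: real code would read the .raw density file from disk.
static std::vector<float> loadRawDensityMap(const std::string& file)
{
    (void)file;
    return std::vector<float>(64 * 64, 1.0f);
}

// Stub: real code would create a GL/D3D vertex buffer of unit quads.
static unsigned createSharedQuadVbo()
{
    return 1;
}

struct GrassRenderable
{
    static unsigned sharedQuadVbo; // one quad VBO shared by all grass batches
    std::vector<float> densityMap; // per-batch density; low-density quads are
                                   // scaled to zero area and skipped
};

unsigned GrassRenderable::sharedQuadVbo = 0;

// Load-time preprocess: the shader plugin is called with a terrain batch and
// builds the grass renderable from the density map named in the shader script.
GrassRenderable* buildGrassRenderable(const std::string& densityFile)
{
    if (GrassRenderable::sharedQuadVbo == 0)
        GrassRenderable::sharedQuadVbo = createSharedQuadVbo();

    GrassRenderable* grass = new GrassRenderable;
    grass->densityMap = loadRawDensityMap(densityFile);
    return grass; // the caller inserts this into the scene graph / octree
}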

This renderable object is then put into the scene-graph/octree hierarchy.


That way you keep it open for the modder to define the appearance of the grass, the quad density per grass batch, and so on.

As you can see, you describe the appearance of a pure terrain batch.

And that's the advantage of this system: there is no need to store this information anywhere in a map file, and the preprocess can quickly be performed at load time.

The only thing that takes some time is implementing an API that allows your DLLs to interact with the engine core to set up or manipulate existing objects.


There is a streaming concept with delayed evaluation; I know it from Scheme. What it does is evaluate an expression at runtime to supply the caller with the desired data/information.

That is comparable to this shader plugin for grass: you only create the information if the caller (the renderer) needs it (see the cvar included in the example above).
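A minimal C++ sketch of that lazy idea (names invented; the flag would come from the cvar system):

#include <memory>

struct GrassData { /* quads, density map, ... */ };

class LazyGrass
{
public:
    explicit LazyGrass(bool cvarDrawGrass) : enabled(cvarDrawGrass) {}

    // Called by the renderer; the expensive build runs at most once,
    // and never if gfx_drawgrass is off.
    const GrassData* get()
    {
        if (!enabled)
            return nullptr;              // grass disabled: never built
        if (!data)
            data.reset(new GrassData()); // built on first request only
        return data.get();
    }

private:
    bool enabled;
    std::unique_ptr<GrassData> data;
};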


Quote:
So, plugins are a perfect middle way between old-school inflexible pipelines and the complete abstraction of the rendering system into meta shaders. Once we have well-working meta shaders (we will probably need hardware-supported JIT compilers for that), we can just trash the plugin approach. And I'll be happy about it, because the system does in fact have several drawbacks. Just not the ones you were thinking of :)

What will your meta shaders look like?
I have read the "Material/Shader Implementation" thread, but I still can't understand how the shaders can control how many passes are needed, what to do in each pass, how to blend the passes together, and how to render a depth map for shadows or render to the stencil buffer for stencil-buffer shadows.

I modified the implementation a bit, but I can't figure out how to create several passes or how to implement shadow maps.

Besides that, I really like the idea; I will like it more when my implementation of it works better ;)
Quote:
Original post by Basiror
shader someterrainshader
{
    grassdensity "xyz.raw"   // just Perlin noise with several octaves and an
                             // exponential filter applied

    cvar "gfx_drawgrass 1";  // at load time a cvar test is performed to see
                             // whether grass rendering is enabled at all
}

First of all: why is *some* grass density placed in the shader to begin with?
This is per-scene data.
I'm not trying to be anal, but you keep specifying local parameters in the shader definitions all the time... :)

Quote:
Original post by Basiror
Now, as I mentioned in one of my earlier posts here, the engine has to offer an API so that the shader DLLs can interact with the object hierarchy and add new objects on demand.

Yes, I understand that without that, "shaders" can't do much to change the world.
But, as far as I remember, they have just an Init/Shutdown interface and a rendering interface.
Grass needs some more, you know:
- a way to spawn nodes and create them at runtime, as we move around
- a way to destroy old nodes that are far away
- a way to expose the grass for interaction: adding new grass during gameplay, altering the grass with physics, etc.
So it must expose quite a fat interface. Grass typically isn't something static and prebuilt (it can be, obviously, but that is just *some* grass); it has complex management that also depends on the viewer position. That is far, far away from a "shader", and maybe a bit far from "visual appearance" too, since, you know, walking over grass can emit specific sounds, for example... :)


Quote:
Original post by Basiror
This renderable object is then put into the scene-graph/octree hierarchy.

And that's the advantage of this system: there is no need to store this information anywhere in a map file, and the preprocess can quickly be performed at load time.

The more you preprocess at load time, the further you move away from consoles.
I said it before, but I'll say it again: if your system design forces you to do at load time things that could be preprocessed offline, then you must have a good reason to do so, a very good reason.
Doom 3, for example, does it to be more artist/level-designer friendly, and its load times are ridiculous at times, but bearable (on PC only).
They have their reason.
Do you?

Quote:
Original post by Basiror
There is a streaming concept with delayed evaluation; I know it from Scheme. What it does is evaluate an expression at runtime to supply the caller with the desired data/information. That is comparable to this shader plugin for grass: you only create the information if the caller (the renderer) needs it (see the cvar included in the example above).


My most significant objection is this: a grass/shader/tiny-little-locally-plugged thingy can't know better than the whole scene graph when to render, when to preprocess, when to switch geometry LOD or shader LOD, how to control its quality, etc.
So the scene graph must post messages to the grass plugin for it to react correctly and tune itself to the needs of the renderer.
That's quite an interface, I'd say. Show me an example of it; I can't find even a bit of one in the original and subsequent threads...
Because I'm very interested.
Quote:
Original post by Yann L


I have read the original thread again; there were things I had forgotten. I have read my questions from back then too; some of them are still unanswered...

I totally agree that everything is possible when the "shaders", implemented as plugins, can plug into many points of the art pipeline: preprocessing geometry, even having many shaders share the same geometry pieces on disk (per component, where their requirements intersect), changing the rendering process and pipeline completely, interacting with the HSR process, adding new geometry to the scene (and to physics), adding new properties/GUI/code to the editor. Then everything is possible. I can't evaluate the complexity of such a system: I have never seen or heard of one (even in action), nor even barely thought of designing one. It is just way above my current design level... :(

But such a plugin system is far from shader-DLL terminology, IMO. This is a clarification for all those people who dive into this approach without knowing that they are making mini engine plugins, not shaders... ;)

And, by the way (I was off this forum for a long time lately): has anybody successfully implemented such a system as you describe, and used it with success? I just don't know of any, and I am very interested to hear that it was made by somebody else and works OK... Where are all the fans who participated in the famous "Material/Shader..." thread? Only python_regious is here. I'd like to hear what they have done; it would be interesting validation of this design.


Now, my concerns, in summary:

Every system is born around some idea, and that idea both limits the resulting system and gives it wings.
The idea of pluggable render features is to describe pieces of space abstractly, then resolve that description into a set of registered interfaces that can create/render the thing.
On a primitive level, we can construct some shading techniques and later combine them for more complex effects (let's forget about grass/light shafts for a moment).

So, my concerns:

1. We have fixed shaders/shading procedures -> abstract effect decomposition means more shaders, down to the very low level of having more pixel/vertex shaders to execute the effect.
Diffuse + specular + bump, for example: if these are distinct effects, combining them leads to more than one pass.
Of course we could put all that into a single shader; well, that's more work, creating all those combinations. And that is something almost every current engine tries to avoid, and succeeds.
The example is quite plain, but the idea is that building shaders from text, then compiling the result depending on what we want from the shader, will give a better ps/vs decomposition than composing already-compiled shaders.

2. How do shaders adapt to changing conditions: lights, fog, scene lighting (day <-> night can change the lighting model a bit), changing visual modes (night vision, etc.)?
I read the explanation about RPASS_PRELIGHT. But how does that pass execute the actual lighting, and who picks up the right set of shaders?
The same decomposition logic? But the lighting conditions can vary a lot: from a single shadow-mapped directional light plus 3-4 point lights, to many point and spot lights and some diffuse cube maps at night.
Again, I see many passes, or much work, to implement that in a pluggable system.
A text-composing system would handle it quite nicely and easily.

3. Next, how is a single piece of geometry (single visually, not in memory) rendered with multiple shaders, whether because of lighting conditions or user-specified parameters like an object being on fire, damaged, transparent, etc., i.e. gameplay changes that require a shader change?
How is the geometry shared between shaders?
Because at heart this system relies on every pluggable rendering feature (aka "shader") prebuilding its geometry for itself.
But geometry needs to be rendered by many shaders: how do they share it, or don't they?
Streams?

4. You said that adding new functionality like grass, light shafts, and water is possible, right? How do your plugins interact with the physics and gameplay side of the application, and what interface is there to allow that?
Objects falling into the water can produce waves around them: is that possible with this approach?
And how are objects managed (faded when far away, switched between LODs, recreated when they come near the viewer, etc.)?
I only saw caching based on visibility, and possibly a shader can precache the LOD it needs in its cache-filling procedure. But a game engine needs more, way more.
I suppose there are functions that monitor each piece of "shader" and let it adapt to the scene...? Like invalidating particles every N milliseconds, for example, and killing particles that have been out of view for M milliseconds...

5. How can artists preview the results of their work? They create geometry, assign some properties, and want to see the result: how is that done?
How fast will it be, and how accurately can they tune it?
Because if we talk about competitiveness, here it is.

6. How precisely can we control the quality of the shading system on different hardware? For example, switching off gloss/reflectivity on the terrain, but keeping it on the tanks, on a Radeon 8500?
The logic here: we tune shaders based on our vision of the landscape and the tanks (the tanks are quite important in this example). I mean, this is a decision based on our game world, and we don't want the shader-resolution system to dictate the fallbacks; our artists should.
How (easily) can this be done in your system? Does some practical approach exist?

7. The effect description is given externally, and once per object: how can we tune our shaders to accommodate *very* different shading scenarios, for example sunlight outside and projected sunbeams inside buildings? Only the scene graph knows where to apply each; how will that knowledge result in proper shading?
This is more of a pipeline example, since it involves inside/outside volumes, projective textures, and some logic in the renderer, maybe custom for a given game.

8. How are RTs shared between shaders? Because the shaders (again, because of the Idea :)) have a very egocentric view of resources (unless this is solved by a system above them). I'm interested in who allocates the global shadow map for PSM shadow mapping, for example, and who computes the matrix for it.
It will probably be used later by many, many shaders at will, right?
A shadow-map RPASS_ is called, but which of all these shaders computes the matrix? Or is that the Tools' responsibility? If so, how can we introduce a new shadow-mapping technique, or even more than one, with conditional use of the best of them (based on the view), for example?

9. Shadow volumes: who builds them, and how? Can they be preprocessed and stored with the data...? The same concern as above, except that shadow volumes need much more data to be preprocessed and stored (for static lighting that uses them), and they can involve some heavy processing on whole pieces of the level. How is this connected to shaders/rendering features?
How would your system adapt to a scheme where we don't render a cube map for every object, but instead use the nearest precomputed one, with many of them spread through the level (HL2-style)?

10. Can these plugins also be plugged into the editor and the art pipeline, to process/create streaming content? If they can, what is the basic idea of the interface, and how do they share that data (or communicate with each other to form the final data layout of the level)?

11. You said in the original thread: "By avoiding to touch the core, you also avoid breaking other parts of your code as you add new features. You don't need knowledge about the engine internals either, everything runs over standardized interfaces."
So, can you or can't you change the rendering pipeline as radically as introducing predicated rendering? And what are these standardized interfaces that allow you to do so?

12. Shaders are dropped by the system if they can't be rendered on the current hardware. But some shaders have multiple passes, like the shader used in the reflective-refractive water example. A valid fallback could be to drop the refraction, for example. How do we do that in a system where the whole shader is dropped together with its passes (because it is registered in the system as a single piece)?
It is more work to provide shaders that can fall back on one or two aspects of the top-level shader.
I mean, a text-based shader system can solve that quite naturally.

So, I just want to clarify this system, because it seems quite a bit more complex and versatile than what was shown in the original Material/Shader thread, if it can override the rendering pipeline as you said.
Thanks if you even read this to the end :)
Argh, trying to kill people? [smile] I'll reply to some of what you said...


I didn't post a lot in the other thread, mostly because I didn't care 100% about how Yann did things (where's the fun in programming if everything is given to you?), but I liked the idea. I stripped out a few things, designed some things another way, etc., but I got a renderer and shader system that pretty much implements all the concepts of the original approach.

It works beautifully.
What I have right now is this:

Content creation:
*) Models are assigned materials in whatever way the 3D modeler allows.
*) Exporters encode material information into a visual description for materials.

Runtime:
*) Material plugin loads materials which in turn request best fit shaders.
*) The renderer groups shaders and splits shader work into stages and passes.
*) Render queue is traversed, shaders are enabled/disabled etc.

Example of a shader that I added: transparency. This seems to be problematic, considering how often it's brought up on these forums. The usual solution is to add a separate render queue that is traversed after the main geometry, which means altering the renderer. What I did instead was write a shader, in a few minutes, that correctly sorted transparent objects without ever touching the renderer (sketched below). The renderer never knows there is a pre-Z pass, or maybe multiple passes for lighting using shadow volumes, or post-processing effects. Why is this better?

It's better in that any effect can be plugged into the renderer. Let's say I've written shaders for shadow volumes and 3 released games use them. Suddenly I figure that shadow maps would be a lot neater, and a switch like that in 3 games would be lethal. I'm pretty sure, say, Doom 3 uses a rather static renderer, so a switch like that by iD is never gonna happen.
Using my little renderer design, I could write a couple of shaders using shadow maps, compile them into a DLL (or SO), and distribute it. Voila, all 3 games now have neater shadows without ever forcing any game developer to recompile and redistribute a single thing!
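To make the transparency example concrete, the render hook of such a shader might look roughly like this (interfaces invented for illustration, not the actual engine API):

#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

struct Renderable
{
    Vec3 center; // world-space position used for depth sorting
    // geometry, material, ...
};

static float distSq(const Vec3& a, const Vec3& b)
{
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Called by the render queue with every object bound to this shader.
void renderTransparentPass(std::vector<Renderable*>& batch, const Vec3& eye)
{
    // Back to front: farthest objects first, so alpha blending composites
    // correctly without the core renderer knowing anything about it.
    std::sort(batch.begin(), batch.end(),
              [&](const Renderable* a, const Renderable* b)
              { return distSq(a->center, eye) > distSq(b->center, eye); });

    for (Renderable* r : batch)
    {
        (void)r; // enable blending, bind material, issue the draw call ...
    }
}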


Zemedelec, you are right that one cannot "add" things that do not exist. For example, grass. There's no way we could write a shader that automagically adds grass everywhere appropriate if the game was never intended to have grass. The developer must supply new data specifying where the grass should be, but that is a matter of data, not code. Pluggable shaders give us the ability to implement new effects anytime, but that's it. Fully procedural data can be generated by shaders (shadows, stars, lights, etc.), but not things that must be placed by an artist (volumetric fog, grass, yada yada).


It's all about making it easier for the end developer to get updates to the graphics, and that's it. Nothing rules out, say, artist-created shaders: write a specific shader that takes care of those and you've got artist-created shaders [smile]


I'll leave the more implementation-dependent things to somebody else...


EDIT: I'll answer a few of those riddles you got there:

1) How this is handled depends on the shaders, not the overall shader system. The group-and-split approach I use shares resources and caches states very well, and minor misses in shader program code will be negligible in comparison.

2) Let's say someone enables night vision. I don't know how night vision works, so I can't say anything about the lighting; let's just say my first try would be pretty bad. A month later, I've read up on night vision and devised a new algorithm => a couple of new shaders and we're done. If the change is radical, scripts won't do.
Switching night vision on and off is just as tricky in script-based as in DLL-based renderers, so scripts have no advantage over DLLs there.
Please, somebody, improve this answer...

3) I don't get this bit: objects have materials that hold information about what the objects look like, and proper shaders are assigned to all materials. Shaders do what they have to do with the objects they are called with, and that's it.

4) As mentioned, one cannot add new features at that level. To have interactive water, one would need to create a water object type. Luckily, since that part of my engine is pluggable too, that's not a problem [wink]

5) Depends on the engine, not the shader system.

6) Shader LODding? Due to how the shader system works, one could even let individual bits of the terrain use high-quality shaders instead of assigning one boring shader to it all.

7) Sunbeams are volumetric entities, not something on a surface. They should be added as a new object type (yay for the pluggable system!).

8) Shaders don't own information; they only process objects. RTs are generally assigned to the objects that cause them; for example, lights cause shadow maps, so lights own 'em.

9) One solution: let the shaders do it. All renderables in my engine have a "shader bag" in which any shader can store some info. If you want shadow volumes, let the shader use some system to cache them (a plugin, of course) and render.
Preprocessed probes? A new shader with that feature takes a couple of minutes to write (there's a sketch of the bag after answer 12)...

10) Shaders shouldn't stream data; they only know what to do with geometry to make it look good. Streaming should be issued at a higher level as the viewing frustum moves around, and the geometry wanted for rendering should be queued in the renderer.

11) Read the first part of my post; this design is more flexible than any hardcoded script-based renderer.

12) The beauty in this case is that using DLLs we can control precisely which features we want to trash. Script-based shaders will fall back to the renderer default without letting the artist mess around a little. Another point for the DLL approach!
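And here is the shader-bag sketch mentioned in answer 9; it's really just an open key/value store per renderable (names illustrative, the real thing may differ):

#include <map>
#include <memory>
#include <string>

struct ShaderBag
{
    // Opaque slots keyed by a shader-chosen name, e.g. "sv_cache".
    std::map<std::string, std::shared_ptr<void>> slots;

    template <typename T>
    void put(const std::string& key, const std::shared_ptr<T>& value)
    {
        slots[key] = value;
    }

    template <typename T>
    std::shared_ptr<T> get(const std::string& key)
    {
        auto it = slots.find(key);
        return it == slots.end() ? std::shared_ptr<T>()
                                 : std::static_pointer_cast<T>(it->second);
    }
};

// A shadow-volume shader might cache its extruded geometry like this
// (ShadowVolumeMesh is hypothetical):
//   bag.put("sv_cache", std::make_shared<ShadowVolumeMesh>());
// and fetch it on later frames with bag.get<ShadowVolumeMesh>("sv_cache").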

[Edited by - coelurus on September 19, 2005 6:56:02 AM]
Quote:
Original post by coelurus
Argh, trying to kill people? [smile] I'll reply to some of what you said...


Actually... trying to save people ;)

Quote:
Original post by coelurus
Content creation:
*) Models are assigned materials in whatever way the 3D modeler allows.
*) Exporters encode material information into a visual description for materials.

How do artists specify the shading of their art? Through the general modeling-package interface, or through your own? I mean, what level of abstraction do they use to describe it: "spec_power = diffuse.alpha, bump+reflection+gloss", or just "bumpy-glossy-reflective"?
Does it meet their expectations?

Quote:
Original post by coelurus
Runtime:
*) Material plugin loads materials which in turn request best fit shaders.

Is there a quick preview? How do you control quality on different video hardware?

Quote:

Example of a shader that I added: transparency.

I wholeheartedly agree that part of the rendering pipeline can be offloaded to pluggable code, but that part is pretty trivial when it comes down to sorting rendering operations.
What I don't understand is how you can make the switch from shadow volumes to shadow maps so easily.
Disabling all the shadow-volume code is easy (override the shader), but what if the games have tons of static shadow volumes stored in their levels? OK, the shader just doesn't load them.
(If the switch went the other way, I'd wonder who would generate that data for you, but anyway... :)
This basically means you have a distinct shadow pass, so you really do have the freedom to fill it as you wish.
And the limitation of being unable to shadow and render an object in a single pass.
That is dictated by the somewhat limiting assumption that we need to build shading information in a render target in order to shade our world.
If that is what you want, I agree.

Quote:

It's better in that any effect can be plugged into the renderer. Let's say I've written shaders for shadow volumes and 3 released games use them. Suddenly I figure that shadow maps would be a lot neater, and a switch like that in 3 games would be lethal.


Side note: you are working for no money here... :)
That is, unless you plan to attract customers with the upgraded graphics, which I can hardly believe... :)

Quote:

I'm pretty sure, say, Doom 3 uses a rather static renderer, so a switch like that by iD is never gonna happen.

They integrated shadow maps very easily, AFAIK, without any pluggable architecture, just a clean renderer design, I suppose... :)

Quote:

Using my little renderer design, I could write a couple of shaders using shadow maps, compile them into a DLL (or SO), and distribute it. Voila, all 3 games now have neater shadows without ever forcing any game developer to recompile and redistribute a single thing!


We talked about competitiveness. That means being able to create games quickly, keeping them stable, and having a really strong production pipeline.
Introducing new shaders/shading methods into the pipeline can be done very nicely without any plugin architecture, IMHO.
Keeping artists happy is more important/competitive than saving some programmers from installing the engine source on their computers... :)

Quote:

There's no way we could write a shader that automagically adds grass everywhere appropriate if the game was never intended to have grass.

Yann said it is possible to add render features, which, say, fog really is.
I'd like to hear what the interface for those plugins is.
Because it is doable, I suppose.

Thanks for the response, very appreciated! :)
Sorry for not quoting; it takes too much time, and I've got studies to take care of :)

---
At the moment I have no interface for setting materials, but I can't see how the renderer design under inspection stops me from creating WYSIWYG graphical material-creation tools.

---
Quick preview: call 'pluginShaderDB->rebuild()', wait a fraction of a second, and the shader database is rebuilt. That's a rather quick preview, I'd say.

---
Do people store shadow volumes in levels? That sounds kind of dangerous, especially considering (in my case) that I could plug in procedural geometry for various parametric surfaces in levels. What good do precalculated shadow volumes do then?

SV -> SM: The shaders originally computed and cached SVs for objects, rendered them to the stencil buffer now and then, and filled the frame buffer. I scrap all that code and there will be no SVs anywhere. I add shaders using SMs, which request preprocess shaders for the RTs; those RTs are assigned to the lights and later used to map shadows onto geometry. Done!

SM -> SV: As above but reversed [wink]

---
I do all this because it's fun. Where's the fun in writing a renderer everybody else has already seen?

---
Have you read the talk by Carmack about their next game? Read it; he explains a few things about the shadowing in Doom 3, one of them being that SMs were not the main shadowing technique for the entire world.

---
I don't see how the shader system we are talking about reduces what artists can do. I'd say their abilities grow, as artists get newer and newer features all the time, without bounds that are practically impossible to overcome in hardcoded rendering engines, including script-based ones.

---
Render features can be added anytime, including grass, human skin, fog, water, clouds, etc., but note that this is very different from actually using those features everywhere appropriate in scenes! If the artist has made something that should look like water, the proper shader for water will be used, but that doesn't give the world a water object.

We're talking about an extendable renderer, not an extendable world; think about that for a little.
Quote:
Original post by coelurus
1) How this is handled depends on the shaders, not the overall shader system. The group-and-split approach I use shares resources and caches states very well, and minor misses in shader program code will be negligible in comparison.


Caching shared resources is very easy and doable.
But this very system is built *on* the assumption of *decomposing* an effect into a set of *prebuilt* shaders.
So I can't see how this will prevent the system from decomposing some effects into more passes than needed, unless of course you provide ALL the smaller shader combinations.

Quote:
Original post by coelurus
2) Let's say someone enables night vision.

No, wait, you missed the point.
It was: how do you change the shading method of objects?

Quote:
Original post by coelurus
3) I don't get this bit: objects have materials that hold information about what the objects look like, and proper shaders are assigned to all materials.

Shaders are assigned once, but rendering conditions are not just what the artist described; they are also affected by the dynamic game world. Lights, changing fog, the day/night cycle: all of these change the overall lighting scheme.
How does your system handle that?

Quote:
Original post by coelurus
5) Depends on the engine, not the shader system.

Come on.
The shader system shades objects.
Artists want to see the final result, rendered by your shading system. No more, no less.

Quote:
Original post by coelurus
6) Shader LODding? Due to how the shader system works, one could even let individual bits of the terrain use high-quality shaders instead of assigning one boring shader to it all.

To clarify myself again: you describe shading as something abstract, not tied to the game objects.
And I want fallbacks that depend on game objects.
How do we get different fallbacks for a shader that is "bumped diffuse glossy specular"? Create two shaders?

Quote:
Original post by coelurus
7) Sunbeams are volumetric entities, not something on a surface. They should be added as a new object type (yay for the pluggable system!).

I understand your system; you have pluggable shaders.
Yann said he has pluggable render features, and those can plug in something like that too... and I was just curious... :)
But I see that with my concern (7) you missed the point.
It was: we encode some lighting scheme in our scene graph that introduces different lighting schemes; how are they passed to the objects, and how do we manage to run different shaders for them?

Quote:
Original post by coelurus
8) Shaders don't own information; they only process objects. RTs are generally assigned to the objects that cause them; for example, lights cause shadow maps, so lights own 'em.

Your sun owns the shadow map, OK.
So you have a sun object that uses a shader to cast shadows via a shadow map...?!
How does your sun compute the shadow-map matrix?
And how do the shaders that rely on it get that matrix? Some general registry?

Quote:
Original post by coelurus
9) One solution: let the shaders do it.

So your system is extendable to the point where you can add objects in your editor, let them compute cube maps in a preprocess, and then query for the nearest such cube map and use it in the shader...?
Are you sure? :)

Quote:
Original post by coelurus
10) Shaders shouldn't stream data; they only know what to do with geometry to make it look good. Streaming should be issued at a higher level as the viewing frustum moves around, and the geometry wanted for rendering should be queued in the renderer.

OK, let's say your shaders can't load in place, then...

Quote:
Original post by coelurus
11) Read the first part of my post; this design is more flexible than any hardcoded script-based renderer.

Which hardcoded script-based renderer are you speaking of?

Quote:
Original post by coelurus
12) The beauty in this case is that using DLLs we can control precisely which features we want to trash. Script-based shaders will fall back to the renderer default without letting the artist mess around a little. Another point for the DLL approach!

Let me explain again: shaders that are multipass by nature, where the passes aren't exported to the resolution system, are integral and indivisible.
If one of the passes fails, the whole shader gets thrown away.
There is no fallback control over a single internal pass.
My script-based system is tuned so that for every family of shaders (a single source file) we have pins that control just about every visual aspect of the shading.
We can turn them on and off, one by one, depending on the fallback we want.
Just for the record: how much space is there on the gamedev.net servers...? :)

Quote:
Original post by coelurus
At the moment I have no interface for setting materials, but I can't see how the renderer design under inspection stops me from creating WYSIWYG graphical material-creation tools.

You can create graphical materials.
Can your artists create graphical materials...?

Quote:
Original post by coelurus
Do people store shadow volumes in levels? That sounds kind of dangerous, especially considering (in my case) that I could plug in procedural geometry for various parametric surfaces in levels. What good do precalculated shadow volumes do then?

Shadow volumes are stored only for static geometry <-> static lights. See Doom 3, for example.

Quote:
Original post by coelurus
I do all this because it's fun. Where's the fun in writing a renderer everybody else has already seen?

The shader DLL has nothing to do with visuals... :)
It is fun, but only for the coder; your potential users can't see it anyway...

Quote:
Original post by coelurus
Have you read the talk by Carmack about their next game? Read it; he explains a few things about the shadowing in Doom 3, one of them being that SMs were not the main shadowing technique for the entire world.

There aren't any SMs in Doom 3. In the next revision they just add SMs, keeping the shadow volumes too, AFAIR.

Quote:
Original post by coelurus
I don't see how the shader system we are talking about reduces what artists can do. I'd say their abilities grow, as artists get newer and newer features all the time, without bounds that are practically impossible to overcome in hardcoded rendering engines, including script-based ones.

I think there must be some hardcoded rendering engine out there that everybody sees but me... I'm confused, sorry... :)
You can get new visual features without such a system too, so how does it relate to the new visual features?
One more thing: your artists most probably want to modify and tune shading properties, not just use predefined ones (by modifying I mean not only base textures but also lighting formulas; for example, scrap diffuse+ambient+specular and create something more appropriate for a given effect).
This leads nowhere; it's apparent that we use different solutions for rendering, and the phenomenon of constant misinterpretation on the Internet kicks in everywhere [smile]

It would be a good idea to say a couple of final words on this from my part:

The shader system that has been under discussion is not simple from the viewpoint of the original developer, and it can be very hard to grasp all of its capabilities. End developers can see their products automagically patched without any intervention on their side, and artists will benefit from that as well.

The shader system is not almighty; it does its work only at runtime. But nobody has said that it's impossible to couple it with artist-friendly tools during content creation. WYSIWYG tools are totally independent of the renderer design at a high level.

A shader can be written to manage an entire renderer by itself, including a script-based one, but the opposite is not a very pretty story...

Anyway, that's 'nuff, I hope some of this discussion has been interesting to some people.
Quote:
They integrated shadow maps very easily, AFAIK, without any pluggable architecture, just a clean renderer design, I suppose... :)

I'm guessing they do something similar to what I do (something I'm proud of: a very easy-to-expand/change engine).
Plugins are great for a lot of things, but I believe a rendering engine is not one of them.
There will be future rendering techniques that won't work with a plugin.
Quote:
Original post by zedzeek
Quote:
They integrated shadow maps very easily, AFAIK, without any pluggable architecture, just a clean renderer design, I suppose... :)

I'm guessing they do something similar to what I do (something I'm proud of: a very easy-to-expand/change engine).
Plugins are great for a lot of things, but I believe a rendering engine is not one of them.
There will be future rendering techniques that won't work with a plugin.


Care to expand on what you do?
Quote:
Original post by coelurus
This leads nowhere; it's apparent that we use different solutions for rendering, and the phenomenon of constant misinterpretation on the Internet kicks in everywhere [smile]


An unfortunate truth. However, instead of debating which method is better, shouldn't we be discussing how to integrate the best of both systems?

Without having implemented a system like Yann's (or coelurus's), I have several concerns similar to Zemedelec's with respect to the plugin-based system. Perhaps I've misread or forgotten some of the points in the original plugin shader architecture thread, but I'll post my concerns anyway.

Say I'm implementing a plugin for a general lighting equation. As Zemedelec asserted, different geometry chunks are going to want to supply different (artist-supplied) values for the equation's parameters. How would the GC pass these values to the appropriate plugin?

1) Subclassing geometry_chunk? This could work; a preprocessing step could validate each GC against its assigned shader (if the assignment is invalid, emit a warning and assign a basic diffuse shader to the GC).

Sorry if this issue has already been answered, any thoughts would be appreciated.

[Edited by - mhamlin on September 19, 2005 10:10:16 PM]
Quote:
Original post by mhamlin
Say I'm implementing a plugin for a general lighting equation. As Zemedelec asserted, different geometry chunks are going to want to supply different (artist-supplied) values for the equation's parameters. How would the GC pass these values to the appropriate plugin?

If I understood correctly, the shader plugin has a shader_params() method which takes the current GeometryChunk as a parameter (and maybe something else too; I'm not sure about that). With this method the shader picks up all the data it needs from the chunk, like the material's diffuse color, textures, etc.
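In code, I imagine the interface looks something like this; the types and members are my guesses for illustration, not the actual system:

struct Texture;

struct GeometryChunk
{
    float diffuseColor[4];
    Texture* diffuseMap;
    Texture* normalMap;
    // vertex streams, artist-supplied parameters, ...
};

class Shader
{
public:
    virtual ~Shader() {}

    // Called with the current chunk before rendering; the shader picks
    // out the parameters it needs and binds them for its passes.
    virtual void shader_params(const GeometryChunk& chunk) = 0;
};

class DiffuseBumpShader : public Shader
{
public:
    virtual void shader_params(const GeometryChunk& chunk)
    {
        (void)chunk; // bind chunk.diffuseMap and chunk.normalMap,
                     // upload chunk.diffuseColor, ...
    }
};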
I would store lighting parameters in my materials, and that info is passed every time an object is to be rendered, pass by pass. That's it, right?

I had a look at the Project Offset videos to see some realtime action in a WYSIWYG shader editor. In order to have total control over the rendering pipeline (this includes custom shadowing techniques and fully configurable rendering solutions such as doing a pre-Z pass, deferred shading, etc.), DLL'ed shaders will work beautifully, and very little code will have to be generated. IIRC, Yann mentioned something about generating binary shader code, and that approach would be ideal. Something I'll definitely look further into when I've finished some of the more important stuff...
Quote:
Original post by coelurus
I would store lighting parameters in my materials, and that info is passed every time an object is to be rendered, pass by pass. That's it, right?

I had a look at the Project Offset videos to see some realtime action in a WYSIWYG shader editor. In order to have total control over the rendering pipeline (this includes custom shadowing techniques and fully configurable rendering solutions such as doing a pre-Z pass, deferred shading, etc.), DLL'ed shaders will work beautifully, and very little code will have to be generated. IIRC, Yann mentioned something about generating binary shader code, and that approach would be ideal. Something I'll definitely look further into when I've finished some of the more important stuff...


That wasn't a very good example, I admit. You know, I was about to bring up another example, but I just realized a (rather simple) solution to having it fit well with a plugin system. To be honest, I'm kind of embarrassed.
@Zemedelec:

1. Please stop generalizing; the way one processes grass inside one's engine is a matter of personal preference.

I could create a density map for the whole scene, or do it per terrain batch, or inside a shader definition; it doesn't matter at all.


2. I don't need to create and destroy nodes. The grass is represented as a VBO full of quads, and you just apply a different texture to it, so all you need to do is store a node for your grass batch that tells the engine it exists and can be rendered when visible.

3. I don't aim for consoles, and there is no huge amount of preprocessing to do when creating a grass batch (see the sketch after this list):
- create a global VBO if it doesn't already exist
- add a grass batch node to the scene graph
- start rendering; done *this is simplified but shows how it works*

4. Implementing such an API for content creation isn't much of a problem, and it's certainly not very complex; it's probably about as complex as the immediate mode of modern graphics APIs such as OpenGL or D3D.

5. It allows you to add detail on the fly without having to develop tools inside a scene-development environment that would hardly be used, which saves time.
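A simplified C++ sketch of steps 1-3 above; the scene-graph and VBO calls are placeholders, not a real API:

struct TerrainBatch { /* terrain geometry, density map, ... */ };

struct GrassBatchNode
{
    const TerrainBatch* terrain;
    unsigned quadVbo;

    GrassBatchNode(const TerrainBatch* t, unsigned vbo)
        : terrain(t), quadVbo(vbo) {}
};

struct SceneGraph
{
    void insert(GrassBatchNode* node) { (void)node; /* attach to octree */ }
};

static unsigned createQuadVbo() { return 1; } // stub for the real VBO setup

static unsigned g_grassQuadVbo = 0; // the single global quad VBO

void addGrassBatch(SceneGraph& scene, const TerrainBatch& terrain)
{
    // 1. create the global VBO once
    if (g_grassQuadVbo == 0)
        g_grassQuadVbo = createQuadVbo();

    // 2. add a grass batch node to the scene graph
    scene.insert(new GrassBatchNode(&terrain, g_grassQuadVbo));

    // 3. done: the node is rendered whenever the engine finds it visible
}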

Quote:
Original post by coelurus
I would store lighting parameters in my materials, and that info is passed every time an object is to be rendered, pass by pass. That's it, right?

But lighting is dynamic and depends on the position in the scene graph.
And a material is something like a property of a surface... how do you connect the two?

Quote:
Original post by coelurus
I had a look at the Project Offset videos to see some realtime action in a WYSIWYG shader editor.

Yes, I mentioned something like that at the beginning of the thread.
But such a system is really suited to text-generated shaders, not DLL-based ones; unless, of course, you implement shader nodes in these DLLs rather than complete shaders, so you can link the nodes any way you want.

And such a system is very interesting; I'd like to build one for our pipeline, in the simplest way possible: a text replacer, with lighting as multipass (eventually, if it doesn't fit into the current profile).
I can explain the first draft if somebody is interested.

As for the text-based approach, here is one more point of view, very interesting because it is decoupled from the actual lighting:

http://www.talula.demon.co.uk/hlsl_fragments/hlsl_fragments.html
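As a toy illustration of the fragment idea (my own sketch, not the scheme from the linked article): compose the shader source from text fragments, then feed the string to the runtime compiler (e.g. D3DXCompileShader):

#include <string>
#include <vector>

// lightingTerms holds HLSL expressions such as "diffuse(i)" or "specular(i)".
std::string composePixelShader(const std::vector<std::string>& lightingTerms)
{
    std::string src =
        "float4 main(PSIn i) : COLOR\n"
        "{\n"
        "    float4 c = float4(0, 0, 0, 1);\n";

    for (const std::string& term : lightingTerms)
        src += "    c.rgb += " + term + ";\n";

    src += "    return c;\n"
           "}\n";
    return src; // hand this string to the runtime shader compiler
}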
Quote:
Original post by Zemedelec
Quote:
Original post by coelurus
I would store lighting parameters in my materials, and that info is passed every time an object is to be rendered, pass by pass. That's it, right?

But lighting is dynamic and depends on the position in the scene graph.
And a material is something like a property of a surface... how do you connect the two?


I just ask the light manager which lights influence a particular object in space the most. Works pretty well.
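Roughly like this (names invented; a real manager would also weigh light type, range, shadowing, etc.):

#include <algorithm>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

struct Light
{
    Vec3 pos;
    float intensity;
};

// Crude influence estimate: intensity over squared distance.
static float influence(const Light& l, const Vec3& p)
{
    const float dx = l.pos.x - p.x, dy = l.pos.y - p.y, dz = l.pos.z - p.z;
    return l.intensity / (dx * dx + dy * dy + dz * dz + 1e-6f);
}

// Return the n lights that influence the object at objPos the most.
std::vector<const Light*> strongestLights(const std::vector<Light>& lights,
                                          const Vec3& objPos, std::size_t n)
{
    std::vector<const Light*> sorted;
    sorted.reserve(lights.size());
    for (const Light& l : lights)
        sorted.push_back(&l);

    std::sort(sorted.begin(), sorted.end(),
              [&](const Light* a, const Light* b)
              { return influence(*a, objPos) > influence(*b, objPos); });

    if (sorted.size() > n)
        sorted.resize(n);
    return sorted;
}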


I would like to eventually write a graphical tool for shaders, and I realize that a script-based renderer would be very easy to use for that. Still, only binary code can change a rendering pipeline totally if, say, we want to implement tech that was not available at the time the renderer was written.
That's the fundamental difference: DLLs supply new code, scripts only control existing code in the renderer. Period. Note that nothing anywhere defines exactly how the DLL'ed shaders must work; I will most probably investigate ways for shaders to compile automatically at engine startup to fit the current host, which means they will slide over to some sort of script. There is still an advantage to DLL'ed shaders, though, as mentioned in the previous paragraph.

Another site to bookmark, thanks [smile]
Quote:
Care to expand on what you do?

I'm a great believer in keeping it as simple as possible.
Everything is specified in text files: materials, shaders, shader info.
Materials specify what an object looks like under various conditions.
Quote:
http://www.talula.demon.co.uk/hlsl_fragments/hlsl_fragments.html

Yeah, this is similar to what I have for generating standard shaders (though some shaders are too specific to be handled easily by a generic ubershader).

Quote:
I just ask the light manager which lights influence a particular object in space the most
You can't do this, i.e. ignore lights, without it having a visual impact on the scene.
