What an incredibly stupid question! (DirectX and Shaders)

I know, I ought to be ashamed. But I have two books, and I've scoured the internet, and all I can find are HLSL AUTHORING guides and tutorials on building .fx files, but scant information on the actual implementation. I understand what vertex and pixel shaders do; every tutorial is happy to inform me where they fit in the rendering pipeline and other such things. Great. Useful. But why do they always clam up right at the money shot? Ok, so they even go as far as to explain what .fx files are, and how to load them and apply them to a single model (usually a triangle, sometimes a box, once it was a cube with its normals flipped! Fancy). But wait, that doesn't help! How do I use the durn things?

From the way they make it seem, you gotta apply each effect on a per-model basis? Like each model gets its own effect? And then you loop around depending on how many passes it takes? What now? But what if I want to build a point light, how do I get everything in the scene to shade with this light? What if I have lots of lights, or bump mapping, or some other such effect? What if I'm applying it to a large level, where there will be lots of shaders flying around? I can't just apply every shader; I need to know which shaders are where. But everything should be affected by the shaders in its area.

I mean, I know it's a dumb question, but I really have looked, I promise =] Can someone give me a rundown of the implementation, or point me in the direction of someone who can? Multiple shaders of different types being applied to multiple models? It doesn't have to have source code or anything, just some knowledgeable Samaritan willing to help a n00bie out.

Should I just build my renderable model class with a list of shaders that will need to be applied to it, update said list based on said model's position, and then in the object's render code go through this list and apply each shader? Is that the way to do it? (Hey, don't laugh!)

Thankee
Ally
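For concreteness, the per-model, per-pass application described above looks roughly like this with D3D9's effect framework. A minimal sketch, assuming an already-initialized device and a loaded effect; the technique and parameter names are placeholders:

```cpp
// Minimal D3D9 effect render loop - a sketch, not a full program.
// "Textured" and "gWorldViewProj" are placeholder names that the
// .fx file would have to define.
#include <d3d9.h>
#include <d3dx9.h>

void RenderModelWithEffect(ID3DXEffect* effect,
                           ID3DXMesh* mesh,
                           const D3DXMATRIX& worldViewProj)
{
    effect->SetTechnique("Textured");                    // pick a technique from the .fx
    effect->SetMatrix("gWorldViewProj", &worldViewProj); // feed it its parameters

    UINT numPasses = 0;
    effect->Begin(&numPasses, 0);        // returns how many passes the technique needs
    for (UINT pass = 0; pass < numPasses; ++pass)
    {
        effect->BeginPass(pass);         // binds the shaders/states for this pass
        mesh->DrawSubset(0);             // draw the geometry
        effect->EndPass();
    }
    effect->End();
}
```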
I have a pretty limited understanding of shaders as well, so please, bear with me.

Try to think of it more this way - if you want something rendered a particular way, it needs to be rendered using a particular shader. Allow me to clarify:

Suppose I have a model car with 3 different materials on it - tires, the body, and the windows. The tires are black, and reflect light differently than the rest of the car. The body is shiny, and has a metallic paint job. The windows are shiny and allow light to pass through. Assume at this point that there is just one general light in the scene (the sun). Now by looking at it this way, we see that it requires 3 different "effects" to draw the car the way that we want it to.
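(As a sketch of how that maps to code: a mesh exported with three materials typically ends up with three attribute subsets, and you render each subset with its own effect. The subset ordering and effect names below are invented for illustration.)

```cpp
// One car mesh, three materials: render each attribute subset with
// its own effect. A sketch; the subset assignments are assumptions.
#include <d3d9.h>
#include <d3dx9.h>

void RenderCar(ID3DXMesh* carMesh,
               ID3DXEffect* tireFx, ID3DXEffect* bodyFx, ID3DXEffect* glassFx)
{
    ID3DXEffect* subsetFx[3] = { tireFx, bodyFx, glassFx }; // 0 = tires, 1 = body, 2 = windows

    for (DWORD subset = 0; subset < 3; ++subset)
    {
        ID3DXEffect* fx = subsetFx[subset];
        UINT passes = 0;
        fx->Begin(&passes, 0);
        for (UINT p = 0; p < passes; ++p)
        {
            fx->BeginPass(p);
            carMesh->DrawSubset(subset); // draw only the faces using this material
            fx->EndPass();
        }
        fx->End();
    }
}
```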

A shader that uses one point light is going to be different from a shader that uses two point lights. Suppose we have a scene with two objects - one that is lit using two point lights and one that is lit using one point light. Depending on the algorithm, you would want to use one shader on one object and another shader on the other object. They both do something similar - lighting the object, but take different parameters and give different results. At this point, don't worry about how many rendering passes each effect takes. Just think of "oh, this part of the model needs to be rendered like this, so I need to use this effect".

Things become more complicated with more lights. There are a few different ways to handle them. You could write one shader that contains all the possible combinations of lights you plan on using (more on this later). You could also write many shaders, each one handling a given situation. The other option is something called deferred shading: render the scene's geometry once, storing positions, normals and material colors in off-screen buffers, then run a cheap screen-space pass for each light and add the results together, giving a scene lit by multiple lights. There are advantages and disadvantages to each method.
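One common shape for the first option is a single .fx file with one technique per light count, picked at draw time. A sketch, assuming techniques named "OneLight"/"TwoLights" and parameters "gLightPos"/"gNumLights", all invented names:

```cpp
// Select a technique based on how many lights affect the object.
// Sketch only: the technique and parameter names are assumptions
// about what the .fx file defines.
#include <d3dx9.h>

void SetLightingTechnique(ID3DXEffect* effect,
                          const D3DXVECTOR4* lightPositions, int lightCount)
{
    effect->SetTechnique(lightCount >= 2 ? "TwoLights" : "OneLight");
    effect->SetVectorArray("gLightPos", lightPositions, lightCount);
    effect->SetInt("gNumLights", lightCount);
}
```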

I really don't know a lot about the first two options, as I really haven't dug into shaders myself. (Yes, I know, I really ought to). I realize that there is something called shader fragment linking that allows you to take bits and pieces of shaders, link them together and compile them on the fly (although the compiling could be a potential slowdown). Again, I really don't know much about the subject.

Back to your original post - suppose you have a point light (or a shader that renders things using a point light), and you want to render the whole scene with a point light. Well, do exactly that - render your entire scene using that shader. You want one part of the scene lit with a point light and another part rendered with a directional light? Do exactly that - render the parts of the scene to be lit by the point light using the point light shader, and the other part using the directional light shader. If you want something rendered a particular way, use that particular shader.
Fantastic, ok so I'm starting to get the gist of it.

I'll do a little more experimenting but I suppose I wasn't that off base.

Now, each effect can be applied multiple times in multiple ways, right? Say I have a point light shader that takes in the position of the light relative to the object I'm rendering. I can use this light in RoomA, and later I can use it again (same level) as a sun, or a desk lamp, or the player's flashlight?

What you are saying is that I can't apply shaders in succession? Like, "bump map this, then pass it to my point light, then pass it to my directional light, then pass it to this light" etc. I actually need to make a new shader that does all that in one file?

Eww... =P
Again, with my limited understanding, I think you have got it.

(Although you probably want to use a directional light for your sun, and a spot/cone light for your desk lamp).
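To make the reuse concrete, here is a sketch: one compiled point-light effect, applied several times in the same frame with different parameter values. All the names are placeholders.

```cpp
// One point-light effect reused for several lights: just set new
// parameter values before each draw. Sketch; names are placeholders.
#include <d3dx9.h>

void DrawWithPointLight(ID3DXEffect* fx, ID3DXMesh* mesh,
                        const D3DXVECTOR4& lightPos,
                        const D3DXVECTOR4& lightColor)
{
    fx->SetVector("gLightPos",   &lightPos);
    fx->SetVector("gLightColor", &lightColor);

    UINT passes = 0;
    fx->Begin(&passes, 0);
    for (UINT p = 0; p < passes; ++p)
    {
        fx->BeginPass(p);
        mesh->DrawSubset(0);
        fx->EndPass();
    }
    fx->End();
}

// Same effect, different lights, same frame:
//   DrawWithPointLight(fx, deskMesh,    lampPos,       warmWhite);
//   DrawWithPointLight(fx, hallwayMesh, flashlightPos, coolWhite);
```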

I am not sure how DirectX 10 geometry shaders change things, but to my knowledge (which, again, is limited) you are right about one shader doing more or less one effect (e.g. a spot light with bump mapping).

There was an article floating around (on the old ATi Developer's website, if I recall correctly) that described how the Half-Life 2 shader system worked. It was an interesting read, even though I didn't understand half of it. If you can find it, give it a read.
Quote:What you are saying is that I can't apply shaders in succession? Like, "bump map this, then pass it to my point light, then pass it to my directional light, then pass it to this light" etc. I actually need to make a new shader that does all that in one file?

Eww... =P


Well, you could actually do this (it's called Deferred Shading or Deferred Lighting), but it requires relatively new hardware and the technique has some limitations. It will, however, allow you to apply your bump maps and get everything ready for lighting, and then do a 'pass' for each light affecting the scene. The beauty of this technique is that the lighting passes are quite cheap, so you can support a large number of lights for added realism without dragging down your performance.
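Very roughly, a deferred frame is structured like the sketch below. Render-target creation, the full-screen quad and the shaders themselves are assumed to exist elsewhere, and every name here is a placeholder:

```cpp
// Skeleton of a deferred-lighting frame in D3D9, heavily simplified.
#include <d3d9.h>
#include <d3dx9.h>
#include <vector>

struct Light { D3DXVECTOR4 pos, color; };

void RenderDeferred(IDirect3DDevice9* dev,
                    IDirect3DSurface9* gBufferTarget, // position/normal/albedo target
                    IDirect3DSurface9* backBuffer,
                    ID3DXEffect* geometryFx,          // writes the G-buffer
                    ID3DXEffect* lightFx,             // shades one light from the G-buffer
                    const std::vector<Light>& lights)
{
    // Pass 1: render all geometry once, storing position/normal/material data.
    dev->SetRenderTarget(0, gBufferTarget);
    // ... draw every model with geometryFx (Begin/BeginPass/Draw/End) ...

    // Pass 2..N: one cheap screen-space pass per light, summed additively.
    dev->SetRenderTarget(0, backBuffer);
    dev->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    dev->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_ONE);
    dev->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE); // additive accumulation
    for (size_t i = 0; i < lights.size(); ++i)
    {
        lightFx->SetVector("gLightPos",   &lights[i].pos);
        lightFx->SetVector("gLightColor", &lights[i].color);
        // ... draw a full-screen quad with lightFx ...
    }
    dev->SetRenderState(D3DRS_ALPHABLENDENABLE, FALSE);
}
```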

For the rest, Moe's pretty much spot on. Effects basically are materials with lighting baked in. Normally, though, you'll have one or two generic shaders that will work for most 'materials', simply by applying the same shader with different color, specular and/or normal maps. This may sound limiting, but as you'll typically want everything in your scene to use the same style and lighting, one shader often is enough to render the majority of your content.
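In code, that usually boils down to a small material record feeding the one shared effect. A sketch, with invented parameter names:

```cpp
// One generic effect, many "materials": each material is just a set
// of parameter values. Sketch; the parameter names are assumptions.
#include <d3d9.h>
#include <d3dx9.h>

struct Material
{
    IDirect3DTexture9* diffuseMap;
    IDirect3DTexture9* normalMap;
    D3DXVECTOR4        specularColor;
};

void ApplyMaterial(ID3DXEffect* sharedFx, const Material& m)
{
    sharedFx->SetTexture("gDiffuseMap", m.diffuseMap);
    sharedFx->SetTexture("gNormalMap",  m.normalMap);
    sharedFx->SetVector ("gSpecular",   &m.specularColor);
    // then Begin/BeginPass/draw as usual
}
```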
Rim van Wersch [ MDXInfo ] [ XNAInfo ] [ YouTube ] - Do yourself a favor and bookmark this excellent free online D3D/shader book!
Draw it on paper.

- Effects
    - Techniques (many techniques make up one effect)
        - Passes (many passes make up one technique)
- Model parts (many parts make up one model)
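In host code that hierarchy becomes nested iteration, roughly as below. A sketch; ModelPart and its fields are invented for illustration:

```cpp
// The effect -> technique -> pass -> part hierarchy as nested loops.
#include <d3dx9.h>
#include <vector>

struct ModelPart
{
    ID3DXMesh* mesh;
    DWORD      subset;
    D3DXHANDLE technique; // which technique of the effect shades this part
};

void RenderModel(ID3DXEffect* effect, const std::vector<ModelPart>& parts)
{
    for (size_t i = 0; i < parts.size(); ++i)     // many parts per model
    {
        effect->SetTechnique(parts[i].technique); // one technique per part
        UINT passes = 0;
        effect->Begin(&passes, 0);
        for (UINT p = 0; p < passes; ++p)         // many passes per technique
        {
            effect->BeginPass(p);
            parts[i].mesh->DrawSubset(parts[i].subset);
            effect->EndPass();
        }
        effect->End();
    }
}
```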


From the sounds of things you are trying to get your head around a very complex system. You might even be trying to bite off more than you can chew.

There are lots of potential many:one, one:many and many:many relationships involved in rendering a 3D scene. Yes, you can just brute-force it and it will work, but for an optimal rendering path you really need to understand how the various entities relate to each other.

The D3D pipeline is actually very simple. There are a number of steps involved from feeding data in to getting an image out. Each of these can be configured and when a draw call is initiated it will process the input data according to whatever pipeline configuration exists at the time of that draw call.

Doing as little work as possible to configure the pipeline is an important design concern with D3D9. I've had a lot of success implementing this using traditional graph theory - something made much easier by my opening statement of drawing it out on paper. Research efficient algorithms to traverse the graph in as simple a way as possible.
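One common concrete form of that traversal is a render queue sorted so items sharing pipeline state are drawn back to back. A sketch, with RenderItem standing in for whatever your scene actually uses:

```cpp
// Sort draw calls so everything sharing an effect (and then a
// technique) is drawn together, minimizing pipeline reconfiguration.
#include <d3dx9.h>
#include <algorithm>
#include <vector>

struct RenderItem
{
    ID3DXEffect* effect;
    D3DXHANDLE   technique;
    ID3DXMesh*   mesh;
    DWORD        subset;
};

bool ByPipelineState(const RenderItem& a, const RenderItem& b)
{
    if (a.effect != b.effect) return a.effect < b.effect; // group by effect first
    return a.technique < b.technique;                     // then by technique
}

void DrawQueue(std::vector<RenderItem>& queue)
{
    std::sort(queue.begin(), queue.end(), ByPipelineState);
    // ... then walk the sorted queue, calling Begin/End only when
    // the effect or technique actually changes ...
}
```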

Quote:Effects basically are materials with lighting baked in.
I don't like to be pedantic, but the division is not really true. A material is perceived by the human eye through the way [in]direct light interacts with it. They aren't really separate things, as we can't see anything unless light reflects from it to the observer...

Quote:This may sound limiting, but as you'll typically want everything in your scene to use the same style and lighting, one shader often is enough to render the majority of your content.
I concur with this in practice, but again, you may wish to select different lighting models to express different materials. But this is the key point behind the OP's question: the design of your FX files should be independent of the code used to render them! A good architecture should allow you to prototype with Blinn-Phong and later drop in Cook-Torrance or Strauss without any changes to the underlying code!
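One simple way to get that decoupling with the D3DX effect framework is to look parameters up by semantic instead of hard-coding names, so any .fx file declaring the same semantics drops in unchanged. A sketch; the semantic strings are a convention you define yourself:

```cpp
// Look up effect parameters by semantic, not by name, so a Blinn-Phong
// .fx can be swapped for a Cook-Torrance one without touching C++ code.
#include <d3dx9.h>

struct EffectBindings
{
    D3DXHANDLE worldViewProj;
    D3DXHANDLE lightPos;
};

bool BindEffect(ID3DXEffect* fx, EffectBindings& out)
{
    // Works with any lighting model, as long as the .fx declares
    // parameters with these semantics, e.g.
    //   float4x4 gWVP : WORLDVIEWPROJECTION;
    out.worldViewProj = fx->GetParameterBySemantic(NULL, "WORLDVIEWPROJECTION");
    out.lightPos      = fx->GetParameterBySemantic(NULL, "LIGHTPOSITION");
    return out.worldViewProj != NULL && out.lightPos != NULL;
}
```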


hth
Jack

Jack Hoxley [ Forum FAQ | Revised FAQ | MVP Profile | Developer Journal ]

Quote:
Quote:Effects basically are materials with lighting baked in.
I don't like to be pedantic, but the division is not really true. A material is perceived by the human eye through the way [in]direct light interacts with it. They aren't really separate things, as we can't see anything unless light reflects from it to the observer...


I was mainly drawing this out in terms of modelling packages. Typical shaders represent model materials (texture, bump etc.) combined with the lighting calculations, which are often defined separately in modelling apps. You should know by now I wouldn't endeavour to make any comparisons with real life [wink]


Quote:the design of your FX files should be independent of the code used to render them! A good architecture should allow you to prototype with Blinn-Phong and later drop in Cook-Torrance or Strauss without any changes to the underlying code!


While this is true, I think the main discovery for the OP was that he doesn't need an effect for each light or for each texture configuration (do correct me if I'm wrong, I don't mean to sound pedantic [smile]). In practice you might swap out a few lighting models in the prototyping phase, but in general I've found you'll stick with one main shader for most of the final rendering, with a few specialized effects for things like water, etc. I think that is the key point for the OP.

But hey, I'm just a grumpy coder working through the night, so as always no offence :)

Rim van Wersch [ MDXInfo ] [ XNAInfo ] [ YouTube ] - Do yourself a favor and bookmark this excellent free online D3D/shader book!
You guys are fantastic, thank you.

I have some more studying to do, but this really helps put things in perspective for me, I appreciate it. This thread has been very helpful.

