Jiia

Vertex Shader Passes


Recommended Posts

Can anyone help me understand what each pass of a shader technique actually represents? I'm guessing that a pass is defined, as in this example..
technique t_Skin_HW
{
    pass p0
    {
        VertexShader = ( vsArrayHW11[ CurNumBones ] );
    }
}
.. at the p0 part. Good guess, huh? [rolleyes] What effects are achieved by adding more than one pass to a vertex shader technique? Or what is the point? I use the ID3DXEffect interface to run my shader, and I have to use a code block such as the following:
UINT Pass, nPasses;

// Begin the active technique; nPasses receives its pass count
Effect->Begin(&nPasses, D3DXFX_DONOTSAVESTATE | D3DXFX_DONOTSAVESHADERSTATE);

for(Pass = 0; Pass < nPasses; Pass++)
{
    Effect->BeginPass(Pass);   // apply this pass's states
    Mesh->DrawSubset(subset);  // draw with those states active
    Effect->EndPass();
}
Effect->End();

Since I sort subsets before rendering, I do a loop of passes for each subset. What exactly is allowed in-between Effect->Begin() and Effect->End()? One mesh render? One subset render? One technique render? Can I enclose every render that uses a certain technique into it? Thanks for any help [smile] [help]

Quote:
What effects are achieved by adding more than one pass to a vertex shader technique? Or what is the point?


The point is that some effects are not supported in a single pass on all hardware, so you need more passes to achieve them. Using a rather outdated example: take some hardware that could only apply one texture at a time (so no simultaneous textures). How would you go about light-mapping on this hardware? You couldn't apply a texture and a lightmap to the same surface in one pass, because the hardware allows just one texture. You could, however, do it in two passes. In the first pass you'd render the object with its normal texture; then, right after rendering that object, you'd change the texture, turn alpha blending on, and render the lightmap with appropriate source and destination blend factors, and you'd get a very similar, if not identical, effect to rendering the object with two textures modulated together in a single pass.
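As a minimal sketch of that two-pass lightmapping idea in effect-file form (texBase and texLightmap are hypothetical effect parameters; this assumes fixed-function texturing):

```hlsl
texture texBase;      // hypothetical: the surface's normal texture
texture texLightmap;  // hypothetical: the precomputed lightmap

technique t_LightmapTwoPass
{
    pass p0
    {
        // First pass: draw the surface with its base texture only.
        Texture[0]       = ( texBase );
        AlphaBlendEnable = FALSE;
    }
    pass p1
    {
        // Second pass: multiply the framebuffer by the lightmap
        // (src * destColor), emulating two modulated textures.
        Texture[0]       = ( texLightmap );
        AlphaBlendEnable = TRUE;
        SrcBlend         = DESTCOLOR;
        DestBlend        = ZERO;
    }
}
```

The SrcBlend/DestBlend pair makes the second pass a modulate: final color = lightmap * framebuffer, which is the single-pass two-texture result approximated over two passes.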

Quote:
One mesh render? One subset render? One technique render? Can I enclose every render that uses a certain technique into it?


Whatever needs to be rendered can be. For example, if you have a box that uses a certain shader plus specific render states to make it look pretty, then you can render as many of those boxes as you like inside the Begin/End block, and all the boxes will be rendered with the same "state" applied to them.

fx files basically represent the state of a graphics card. Not just render state, but texture states, sampler states, which vertex/pixel shader is set and quite a few other things. Whenever you have multiple objects that use the same state (ie: they all look similar), you can throw them all to the hardware inside a single begin/end block.
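As a pseudocode sketch of batching same-state objects into one Begin/End block (note: in D3D9, changing effect parameters after BeginPass also requires ID3DXEffect::CommitChanges before drawing):

```
Effect->SetTechnique("t_PrettyBox")
Effect->Begin(&nPasses, flags)
for each pass p:
    Effect->BeginPass(p)
    for each box:
        set per-box parameters (e.g. world matrix), then CommitChanges
        draw the box
    Effect->EndPass()
Effect->End()
```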

EDIT: What IFooBar said [smile]. It's funny how we used the same example.

Quote:
Can anyone help me understand what each pass of a shader technique actually represents?

Passes are a separate concept from shaders - you can have multiple passes with the fixed-function pipeline (FFP), too.

Quote:
What effects are achieved by adding more than one pass to a vertex shader technique? Or what is the point?

Sometimes you need to do things in multiple passes, e.g. using frame-buffer blending to achieve some effect. For a (somewhat old) example: pass one renders a lightmapped world, pass two adds a detail map, ...etc.

Quote:
What exactly is allowed in-between Effect->Begin() and Effect->End()? One mesh render? One subset render? One technique render? Can I enclose every render that uses a certain technique into it?

Anything you render between Begin/End will be using the settings (states) set by the current pass.

So Begin and End only need to be called per set of rendering states. If I'm setting rendering states myself, I'm wondering what is actually happening in Effect->Begin(). Or even BeginPass and EndPass. I guess I put myself in crutches using the ID3DXEffect object. The documentation says Begin() "Starts an active technique". If it only starts a technique, why does it not just accept the technique handle/string? Why would there be a separate Effect->SetTechnique() call?

I'm thinking about dumping off ID3DXEffect and trying to do it myself. Would that be a bad idea? As in - will I need to code a massive amount of uncomplicated routines to achieve the same results? My only reason would be understanding. If there's not much to grasp in it, then I would pretty much be wasting time. Any reference to specific functions / interfaces is appreciated.

Thanks for the help!

Quote:
Original post by Jiia
So begin and end only need to be called per rendering states


Begin() sets the various states required for the selected technique.

Quote:
Original post by Jiia
I'm wondering what is actually happening in Effect->Begin(). Or even BeginPass and EndPass.


It probably just saves off the current device state (Begin(), I mean) so that End() can restore the device state to what it was before Begin(). And BeginPass/EndPass probably just set the states required to render the chosen technique using the chosen pass.

Quote:
Original post by Jiia
I guess I put myself in crutches using the ID3DXEffect object.


Not at all. It's a very nifty system, IMO. You can have all render methods in effect files, and you can have different techniques corresponding to different levels of hardware. Then, by using ValidateTechnique, you'll get the technique that is most suitable for the current hardware. For example, you could implement some effect using ps_2_0 and the same effect using ps_1_1; when run on hardware that supports ps_2_0, the better technique will be used, while ps_1_1 hardware will choose the latter technique.

Then, when common hardware supports ps_3_0, you can just add a technique to the effect file that uses ps_3_0 and let the user download only the effect file to see prettier renderings. It also comes in handy when you find a better way to implement the same technique.

Quote:
Original post by Jiia
why then does it not just accept the technique handle/string? Why would there be a seperate Effect->SetTechnique() call?


That just seems like a design decision, but maybe there's more to it; I don't really know.

Quote:
Original post by Jiia
I'm thinking about dumping off ID3DXEffect and trying to do it myself. Would that be a bad idea? As in - will I need to code a massive amount of uncomplicated routines to achieve the same results? My only reason would be understanding. If there's not much to grasp in it, then I would pretty much be wasting time.


From the looks of it, you'll be in for a lot of parsing headaches. If you want a good parsing exercise, then by all means, go for it. The routines won't be uncomplicated, though; they can get pretty complex.

Quote:
Original post by Jiia
EDIT: What IFooBar said . It's funny how we used the same example.


[smile]

Guest Anonymous Poster
Currently, all blur and some glow-like effects need to be multipass on current-gen hardware, because you can't access the framebuffer from a shader; so you render to a texture in the first pass and use that texture in the second pass.
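A pseudocode sketch of that render-to-texture flow:

```
pass 1: set render target = offscreen texture
        draw the scene normally
pass 2: set render target = back buffer
        bind the offscreen texture as a shader input
        draw a full-screen quad whose pixel shader samples
        neighbouring texels and averages them (the blur)
```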

Guest Anonymous Poster
Quote:
Original post by IFooBar
Not at all. It's a very nifty system, IMO. You can have all render methods in effect files, and you can have different techniques corresponding to different levels of hardware. Then, by using ValidateTechnique, you'll get the technique that is most suitable for the current hardware. For example, you could implement some effect using ps_2_0 and the same effect using ps_1_1; when run on hardware that supports ps_2_0, the better technique will be used, while ps_1_1 hardware will choose the latter technique.

I didn't realize this was exclusive to the interface. I had thought that all shaders could be loaded in from a file without the use of the interface. I also use HLSL. Isn't HLSL an MS creation?

Quote:
Original post by IFooBar
Then when you have common hw that supports ps3, you can just add a technique to the effect file that uses ps3 and let the user download only the effect file to see prettier renderings. Also comes in handy when you find a better way to do the same technique.

I'm guessing the graphics card is uploaded with a set of instructions for each technique? So you just mean that loading this data from a file would not be as convenient without ID3DXEffect, correct? It must still be possible, ..errr I would think.

Quote:
Original post by IFooBar

Quote:
Original post by Jiia
EDIT: What IFooBar said . It's funny how we used the same example.

[smile]

Hehe, that wasn't my post [smile]

Thanks a lot for your help, IFooBar + Coder.

Quote:
I didn't realize this was exclusive to the interface. I had thought that all shaders could be loaded in from a file, without the use of the interface. I also use HLSL. Isn't HLSL a MS creation?


You can definitely load all shaders in from a file without .fx files. You can achieve the same behaviour by simply checking the shader version the hardware supports and then loading in the corresponding shader file, i.e.:


if ps >= 2.0
    load bumpmap2
else
    load bumpmap1


But that hardcodes it. What would happen if you wanted to add support for a bumpmap3 using ps_3_0? You'd need to go and change the code and add another if-statement. That would require a recompile - not fun.
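Concretely, the hard-coded selection described above might look like the following (the file names bumpmap1/bumpmap2 are just the hypothetical names from the pseudocode, and the version encoding is an assumption for the sketch):

```cpp
#include <cassert>
#include <string>

// Hard-coded shader selection, as described above. psVersion encodes the
// pixel-shader model as major * 100 + minor (e.g. ps_2_0 -> 200).
// Every new shader model means editing this function and recompiling.
std::string PickShaderFile(int psVersion)
{
    if (psVersion >= 200)
        return "bumpmap2.psh"; // hypothetical ps_2_0 shader file
    return "bumpmap1.psh";     // hypothetical ps_1_x fallback
}
```

The pain point is exactly what the post describes: adding a bumpmap3 for ps_3_0 means another branch and a recompile, whereas the effect-file approach keeps the selection data-driven.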

Though with an effect file, you just put the most demanding techniques at the top of the file, and the effect interface will analyze each technique in turn and return a handle to the first one that validates on the hardware it is run on. That way it just becomes a matter of:


handle h = effect->FindValidTechnique();
// h now points to the "best" technique on *this* hw


Now, to add ps_3_0 support, you just add the ps_3_0 implementation to the top of the effect file, and the code stays the same. No need for a recompile; just redistribute the effect file. Now 'h' will be a handle to the ps_3_0 technique if it is valid on the hardware.
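Sketched as an effect file, with the most demanding technique first so that FindNextValidTechnique picks the best one that validates (the shader entry points vs_main/ps_main_hi/ps_main_lo are hypothetical):

```hlsl
// Best technique first: FindNextValidTechnique returns the first
// technique that validates on the current hardware.
technique t_Bumpmap_PS20
{
    pass p0
    {
        VertexShader = compile vs_1_1 vs_main();
        PixelShader  = compile ps_2_0 ps_main_hi();  // hypothetical entry point
    }
}

technique t_Bumpmap_PS11
{
    pass p0
    {
        VertexShader = compile vs_1_1 vs_main();
        PixelShader  = compile ps_1_1 ps_main_lo();  // hypothetical fallback
    }
}
```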

And yes, HLSL is an MS creation. Actually, I think it was a collaboration between MS and NVIDIA; not sure, though...

Quote:

I'm guessing the graphics card is uploaded with a set of instructions for each technique? So you just mean that loading this data from a file would not be as convinient without ID3DXEffect, correct? It must still be possible, ..errr I would think.


Correct; at its most basic level, the effect framework is simply a convenience for developers. There's nothing stopping you from making your own set of routines that does the same (more or less, depending on your project's needs, that is).

If you're interested, you should read this, in which a similar system using DLLs is described.

Quote:
Hehe, that wasn't my post

Thanks a lot for your help, IFooBar + Coder.


Oops, forgot to change the name in the quote tag :)

And you're welcome.
