Shader passes

I am working on optimizing my rendering engine and implementing shaders. I want to know if it is valid to do the following with a render pass to minimize the switching of shader programs. Let's say I have a vertex buffer that draws the same object in different locations with different shader parameters. It seems to me that it would be faster to do the following (if things are not going to get messed up):

    Set the source stream of the vertex buffer.
    Set the shader for the current pass.
    For each of the different instances
    {
        Set the instance's shader parameters.
        Render the pass.
    }
    Do the next pass.

Compared to doing it this way:

    Set the source stream of the vertex buffer.
    For each of the different instances
    {
        Set the shader for the current pass.
        Set the instance's shader parameters.
        Render the pass.
        Do the next pass.
    }

I'm not sure if doing it the first way is going to screw stuff up at all. It seems faster, but I have a feeling I should keep it the second way. Any ideas?
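In Direct3D 9 terms, the first ordering would look roughly like this (just a sketch - the device, buffers, shader handles and constant layout here are all made up for illustration):

    // Bind the geometry once, up front.
    device->SetStreamSource(0, instanceVB, 0, sizeof(Vertex));
    device->SetIndices(instanceIB);
    device->SetVertexDeclaration(vertexDecl);

    for (UINT pass = 0; pass < passCount; ++pass)
    {
        // Shaders bound once per pass, not once per instance.
        device->SetVertexShader(passVS[pass]);
        device->SetPixelShader(passPS[pass]);

        for (size_t i = 0; i < instances.size(); ++i)
        {
            // Only the per-instance constants change inside the loop.
            device->SetVertexShaderConstantF(0, (const float*)&instances[i].world, 4);
            device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0,
                                         vertexCount, 0, triangleCount);
        }
    }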
My understanding is that it is more costly to change shaders than it is to bind streams. How about making a setup that lets you profile both cases?
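Something along these lines would do for a rough comparison (only a sketch; RenderShaderMajor and RenderInstanceMajor are hypothetical functions wrapping the two loop structures, and each should end with a Present so the timing covers the work actually reaching the GPU):

    #include <windows.h>
    #include <cstdio>

    // Time a number of frames rendered through the given function.
    double TimeFrames(void (*renderFrame)(), int frames)
    {
        LARGE_INTEGER freq, start, end;
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&start);
        for (int i = 0; i < frames; ++i)
            renderFrame();
        QueryPerformanceCounter(&end);
        return double(end.QuadPart - start.QuadPart) / double(freq.QuadPart);
    }

    // Usage:
    // printf("shader-major:   %f s\n", TimeFrames(RenderShaderMajor, 500));
    // printf("instance-major: %f s\n", TimeFrames(RenderInstanceMajor, 500));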
As a general rule of thumb, you want to implement a sort of "bucket sort" based on minimizing the number of state changes of any kind.

The 4 main "bad ones" as such are shaders, textures, vertex buffers and index buffers. Although, any state-changing is gonna hurt you a bit.

So, try and arrange your render path so it changes as infrequently as possible. Which, based on the information you've presented, seems to be what you're doing.

If all geometry comes from the same VB, set that at the start and don't change it. If a whole load of instances use the same shader, batch them together... and so on.
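For example, one common way to do that bucketing is to build a sort key from small per-resource IDs, with the most expensive state in the highest bits (field names here are made up for illustration, and each ID is assumed to fit in 16 bits):

    #include <algorithm>
    #include <vector>

    struct DrawCall
    {
        unsigned shaderId;    // most expensive to change - highest bits
        unsigned textureId;
        unsigned vbId;
        unsigned ibId;
        // ... whatever else is needed to issue the draw ...

        unsigned __int64 SortKey() const
        {
            return ((unsigned __int64)shaderId  << 48) |
                   ((unsigned __int64)textureId << 32) |
                   ((unsigned __int64)vbId      << 16) |
                    (unsigned __int64)ibId;
        }
    };

    bool ByStateCost(const DrawCall& a, const DrawCall& b)
    {
        return a.SortKey() < b.SortKey();
    }

    // Sorting the frame's draw calls groups everything sharing a shader, then a
    // texture, and so on - state only changes at the group boundaries:
    // std::sort(drawCalls.begin(), drawCalls.end(), ByStateCost);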

Maybe that'll help?
Jack

Jack Hoxley [ Forum FAQ | Revised FAQ | MVP Profile | Developer Journal ]

Yes, you are correct; I am sorting with buckets. Prior to this major overhaul I used an approach similar to the one presented in the 3D Game Engine Programming book: I sorted the vertex buffers into buckets matching the textures represented by a "skin". Now I am using something more advanced, which I am still designing for the most part. I have materials which have techniques, so when you draw a vertex buffer you give it a material and draw params. The draw params hold the shader constants and the technique to use when drawing. All of this is cached until it is forced to render.

What I am thinking of sorting by is, first, the material used; then batch the same techniques together so that I can render the passes one at a time, where all the vertex buffers of that technique can be rendered before I move on to the next pass. So my question still remains: Can I draw multiple objects on a single pass then move to the next pass with the same objects? I am not 100% sure what is going on when you layer the passes together so I am afraid something is going to get screwed up.
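Roughly what I mean, with simplified/made-up names just to show the idea:

    #include <d3dx9.h>
    #include <vector>

    struct DrawParams
    {
        ID3DXEffect* technique;      // technique/effect to draw with
        D3DXMATRIX   world;          // per-instance shader constants
        D3DXVECTOR4  materialColour;
    };

    struct QueuedDraw
    {
        IDirect3DVertexBuffer9* vb;
        DrawParams              params;
    };

    std::vector<QueuedDraw> g_queue;   // filled as draw calls come in

    void QueueDraw(IDirect3DVertexBuffer9* vb, const DrawParams& p)
    {
        QueuedDraw d = { vb, p };
        g_queue.push_back(d);          // nothing hits the device until the flush
    }

    // At flush time the queue is sorted by material/technique (then by vertex
    // buffer) and rendered in that order.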

I have been doing a lot of looking around for more info on working with shaders at runtime; it gets quite complex dealing with all the different shader formats. Thanks for the help. I would love to continue to discuss this topic, so please do reply.
Quote:Can I draw multiple objects on a single pass then move to the next pass with the same objects.
Yes, should be fine. I can only comment based on Direct3D's "way" as I've never done much OpenGL work.

You might even, in some cases, get a minor speed-up by doing it this way. If you have lots of overlapping objects with the same 3-pass material (for example), then the overlapping/z-rejection might reduce the number of pixels shaded in the 2nd/3rd pass on some objects. Although I wouldn't rely on that - just a bonus result really.
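In D3DX effect-framework terms the pass-major ordering looks something like this (a sketch only - the handles, parameters and geometry calls are placeholders; note the CommitChanges() needed when you alter parameters inside a BeginPass/EndPass block):

    UINT numPasses = 0;
    effect->SetTechnique("SomeTechnique");
    effect->Begin(&numPasses, 0);

    for (UINT pass = 0; pass < numPasses; ++pass)
    {
        effect->BeginPass(pass);

        for (size_t i = 0; i < instances.size(); ++i)
        {
            // Per-instance constants change; the pass's shaders/states don't.
            effect->SetMatrix(hWorld, &instances[i].world);
            effect->SetVector(hTint, &instances[i].tint);
            effect->CommitChanges();   // push the new values to the device

            device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0,
                                         vertexCount, 0, triangleCount);
        }

        effect->EndPass();
    }
    effect->End();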

Quote:I am not 100% sure what is going on when you layer the passes together
For the most part, what will happen is this:

You have a clean or partially filled render-target. You set your device up to use a given shader. You despatch 10 objects to be rendered; these eventually get rasterized via the pixel shader, the various calculations are performed, and a value is written to the render-target. You then reconfigure your device for the second pass and despatch all 10 objects again. This time the calculation should factor in what is already in the render-target as its starting point, compute a new set of values and overwrite what was there. What is now in the render-target is an accumulated value of the 2 passes. Repeat for all subsequent passes... the final value that gets displayed to the screen should, effectively, be the result of applying the calculations from all n passes.
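How the passes actually combine depends on the render states each pass sets. A common arrangement (an assumption on my part - not necessarily what your materials do) is to write the first pass straight to the target and additively blend the later ones on top. With raw device states that would be:

    // Pass 0: base/ambient result written straight to the render-target.
    device->SetRenderState(D3DRS_ALPHABLENDENABLE, FALSE);
    // ... draw all objects for pass 0 ...

    // Pass 1..n: each extra pass adds its contribution to what's already there.
    device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    device->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_ONE);
    device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE);
    // ... draw all objects again for the next pass ...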

Quote:I am afraid something is going to get screwed up
It is possible that this will happen (overwriting values instead of combining them, for example), but if you use the effect framework (at least under D3DX) the API should make sure it works out as expected.

Quote:more info on working with shaders at runtime
I don't have a link to hand, but you might wanna search for the thread in "Graphics Programming And Theory" about how to write a shader-based architecture; it had a lot of discussion with YannL when he was still active. Shouldn't be too hard to find (it might be in the forum FAQ) as it pretty much attained legendary/reference status.

If you have the DirectX9 SDK to hand, you might want to have a play with the "StateManager" effect-framework sample. Without a state manager there are ~7800 state changes per frame; with a state manager it drops down to little over 500, with a 15-20fps speed-up.
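The idea is simple enough to sketch by hand: cache the last value set and skip the API call when nothing actually changed. (The SDK sample does this through the ID3DXEffectStateManager interface; the little wrapper below is just an illustration with made-up names and only a couple of states covered.)

    #include <d3d9.h>

    class StateCache
    {
        IDirect3DDevice9*       m_device;
        IDirect3DVertexShader9* m_lastVS;
        IDirect3DBaseTexture9*  m_lastTex0;

    public:
        explicit StateCache(IDirect3DDevice9* dev)
            : m_device(dev), m_lastVS(NULL), m_lastTex0(NULL) {}

        void SetVertexShader(IDirect3DVertexShader9* vs)
        {
            if (vs == m_lastVS) return;          // redundant - skip the call
            m_device->SetVertexShader(vs);
            m_lastVS = vs;
        }

        void SetTexture0(IDirect3DBaseTexture9* tex)
        {
            if (tex == m_lastTex0) return;       // redundant - skip the call
            m_device->SetTexture(0, tex);
            m_lastTex0 = tex;
        }
    };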

hth
Jack

Jack Hoxley [ Forum FAQ | Revised FAQ | MVP Profile | Developer Journal ]

This topic is closed to new replies.
