I think I may have a bottleneck when passing parameters to my HLSL effects.
I'm using Deferred Rendering & each model knows how to draw itself (has a draw method).
I tell each model from outside which technique to apply, or whether to use a technique specified within the model itself (previously set at load time).
They always use the effect set by my custom model processor.
I arranged everything so the shadowing technique & the normal drawing technique live in the same effect file, so I always use the same effect, the same draw method & the same parameters. Depending on the technique, I send something like Matrix.Identity to the unused params (just to send something & let it work).
But I think that may not be very good...
I'm thinking of splitting the effect files & specifying them at runtime (just like I do with the techniques). But the problem is the parameters in the draw method of the model. Some time ago I had a design in which the models didn't know how to draw themselves & were drawn by a "Deferred Renderer", a "Shadow Renderer", etc. I keep some of this design, but the final "draw" call is inside each model (even though I think the old design was better, I changed it for compatibility with octrees, where you don't know what you put inside, but the contents just "draw" themselves).
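To be clearer, something like this is what I have in mind for the split (a rough sketch only — parameter names like "World" & "ViewProjection" are placeholders, not my actual code):

```csharp
// Sketch: one effect per pass instead of one giant effect with dummy
// parameters. The EffectParameter handles are cached once at load time,
// so the draw path only sets what the current technique actually reads.
public class ModelMaterial
{
    private readonly Effect effect;
    private readonly EffectParameter worldParam;
    private readonly EffectParameter viewProjParam;

    public ModelMaterial(Effect effect)
    {
        this.effect = effect;
        // effect.Parameters["..."] does a string lookup, so do it once
        // here rather than once per parameter per draw call.
        worldParam = effect.Parameters["World"];
        viewProjParam = effect.Parameters["ViewProjection"];
    }

    public void Apply(Matrix world, Matrix viewProjection)
    {
        worldParam.SetValue(world);
        viewProjParam.SetValue(viewProjection);
        effect.CurrentTechnique.Passes[0].Apply();
    }
}
```

Each model would then hold one of these per pass (shadow, g-buffer), & the draw method picks the right one instead of padding unused params with Matrix.Identity.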
So... how would you advise me to approach this problem?
Could the bottleneck actually be there, or am I completely off the mark? (Is it possible to profile this kind of thing in PIX? Can you recommend any other, maybe better, software?)
Thanks a lot
Posted 23 June 2012 - 04:58 PM
Setting shader constants *can* be a performance burden if you set them many, many times during the frame. However, it is mainly a CPU performance issue, so the first thing you'd want to do is try to decide if you're primarily CPU-bound or GPU-bound. There are a few ways of doing this... PIX can give you a rough idea if you use the first option (capture statistics for each frame), by displaying the frame timeline for both the CPU and GPU. If you're GPU-bound you will generally see the GPU completely active (no greyed-out idle sections) with the CPU working 2 or so frames ahead of the GPU. If you're CPU-bound you will generally see large gaps of idle or inactive time in the GPU timeline, and the CPU won't be 2 frames ahead.

You can also look into Nvidia's Parallel Nsight or AMD's GPU PerfStudio, both of which offer comprehensive profiling tools. You can also try some of the more low-tech solutions suggested here.

Either way you should make sure that you turn off VSYNC and fixed time step, to make sure there's no additional waiting being introduced into your frame time.
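Since you're on XNA, turning both of those off is two lines in your Game subclass (assuming the standard GraphicsDeviceManager setup — adjust the field name to whatever yours is called):

```csharp
public class MyGame : Microsoft.Xna.Framework.Game
{
    private readonly GraphicsDeviceManager graphics;

    public MyGame()
    {
        graphics = new GraphicsDeviceManager(this);

        // Disable VSYNC so Present() doesn't block waiting for the
        // vertical retrace.
        graphics.SynchronizeWithVerticalRetrace = false;

        // Disable the fixed time step so Update/Draw run as fast as
        // possible instead of being throttled to the target frame rate.
        IsFixedTimeStep = false;
    }
}
```

With both off, your measured frame time reflects actual CPU/GPU work rather than time spent waiting, which makes the CPU-bound vs. GPU-bound comparison meaningful.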