
how do you measure shader speed?


I've been playing around with RenderMonkey 1.0 and I've modified the Shadow Mapping shader slightly so the disassembly shows one less instruction (it's an ALU op). Does this mean the shader will be faster, or is it more complicated than that? Do all ALU instructions take the same time to execute? If not, how can I test the speed of the shader? This is what I changed at the end of the Objects pixel shader:

WAS:
   // If the depth is larger than the stored depth, this fragment
   // is not the closest to the light, that is we are in shadow.
   // Otherwise, we're lit. Add a bias to avoid precision issues.
   float shadow = (depth < shadowMap + shadowBias);
   // Cut back-projection, that is, make sure we don't light
   // anything behind the light. Theoretically, you should just
   // cut at w = 0, but in practice you'll have to cut at a
   // fairly high positive number to avoid precision issues when
   // coordinates approach zero.
   shadow *= (shadowCrd.w > backProjectionCut);

   // Modulate with spotlight image
   shadow *= spotLight;

   // Shadow any light contribution except ambient
   return Ka * modelColor + (Kd * diffuse * modelColor + Ks * specular) * shadow;
 
I changed it to:
   if (depth > shadowMap + shadowBias || shadowCrd.w < backProjectionCut)
   {
      specular = 0.0;
      diffuse = 0.0;
   }

   return Ka * modelColor + (Kd * diffuse * modelColor + Ks * specular)*spotLight;
 

Anybody? I just want to know if all ALU pixel shader instructions run at the same speed, or if the execution time varies per instruction.

They are variable. In the DX9 documentation for the shaders, it says how many "instruction slots" each instruction takes, but it's ultimately up to the hardware/driver how fast each one is executed.

Anyway, the link is here:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/directx9_c/directx/graphics/reference/shaders/shaders.asp

I don't think there's a simple way to measure the execution time other than to max out the fillrate and measure the total render time. If there is, I would like to know as well.

quote:
Original post by Pseudo
how can I test the speed of the shader?

Just do it empirically, i.e. shade a lot of polygons and time how long it takes to do it your way versus the other way. Then you also find out quantitatively how much you've sped it up. In fact, whenever you want to see if you've actually optimised something, I would recommend doing this. No matter what should be faster in theory, weird things happen.

[edited by - furby100 on October 8, 2003 6:06:32 PM]
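The advice above boils down to an A/B timing harness. A minimal sketch in C++ using `std::chrono` (on the GPU you would time whole frames the same way; the workload here is just a stand-in):

```cpp
#include <chrono>

// Simple A/B timing harness: run a workload many times and return the
// average seconds per run. Measure both shader variants under identical
// conditions and compare the two averages, rather than trusting theory.
template <typename F>
double averageSeconds(F&& work, int iterations)
{
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i)
        work();
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(end - start).count() / iterations;
}
```

Use enough iterations that the measured time dwarfs timer resolution, and keep everything else (resolution, scene, driver settings) fixed between the two runs.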

AFAIK, in general, 1 instruction = 1 GPU cycle. So the execution time of the shader increases linearly as more instructions are added.

As far as I know, ATI's offerings run at mostly 1 instruction per cycle, whereas at least the GeForce FX runs things a lot differently. Profiling is the only way to find out if it's actually faster.

- JQ
Yes I do have holidays at the moment. And yes, that does explain the increased posting.
