GPU Stress


Hello

Does anyone know of a benchmark that can stress a particular pipeline stage in isolation? For example, stressing only the vertex shaders, or increasing the load using tessellation alone. I am trying to analyze how gameplay is affected under different kinds of stress.

 

Also, if I want to create a benchmark myself, say to stress the vertex shader stage, does simply adding some dummy calculations in the vertex shader do the trick?

 

It would be great if someone could give guidelines on creating a simple GPU-stress benchmark.


Thanks.


Thanks for your response; I will look into GPUbench. If I want to create my own benchmark, how do I approach it?

 

For instance, just to stress the GPU, am I fine just putting several dummy calculations in a loop in the shaders?



For instance, just to stress the GPU, am I fine just putting several dummy calculations in a loop in the shaders?
Most likely NOT, unless those calculations feed into your results at the point where they propagate to later stages. If by 'dummy' you mean calculating something and throwing away the result, that code will fall to dead-code elimination right away.
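
To make that concrete, here is a minimal sketch, assuming OpenGL with GLSL vertex shaders embedded as C++ raw strings; the shader code and names are illustrative, not taken from any particular benchmark. The first shader's loop result is never used, so the compiler strips the loop entirely; the second keeps the loop alive by reading a uniform (so the result can't be constant-folded at compile time) and folding it into gl_Position at negligible magnitude.

    // Sketch only: two GLSL vertex shaders embedded as C++ raw strings.

    // Loop result is discarded, so the shader compiler's dead-code
    // elimination strips the loop entirely and adds no GPU load.
    const char* deadVS = R"(
        #version 330 core
        layout(location = 0) in vec3 position;
        void main() {
            float acc = 0.0;
            for (int i = 0; i < 4096; ++i)
                acc += sin(float(i));        // result never used: eliminated
            gl_Position = vec4(position, 1.0);
        }
    )";

    // Loop result feeds the stage output and depends on a uniform, so it
    // cannot be folded at compile time: the work must actually execute.
    const char* liveVS = R"(
        #version 330 core
        layout(location = 0) in vec3 position;
        uniform float seed;                  // tiny nonzero value set per frame
        void main() {
            float acc = 0.0;
            for (int i = 0; i < 4096; ++i)
                acc += sin(float(i) * seed);
            // Scaled so the visible contribution is negligible, but the
            // dependency keeps the loop alive through optimization.
            gl_Position = vec4(position, 1.0) + vec4(0.0, 0.0, 0.0, acc * 1e-6);
        }
    )";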


For instance, just to stress the GPU, am I fine just putting several dummy calculations in a loop in the shaders?

Can we skip the question for a moment and assume you have whatever data you'd get from this process? What knowledge do you hope to extract from that data, and why would you be confident that the data actually supports that knowledge?


Stress-testing the vertex stage is hard nowadays, insofar as the vertex stage runs on the exact same execution units as, e.g., the fragment shader. It also has several non-obvious bottlenecks, such as texture fetch, which you can "stress out" without really giving the execution units running the vertex shader much work.

 

But if you are interested in stress-testing the vertex stage, you would traditionally turn off vertical sync, disable the geometry shader (if any), and set a zero-size viewport (so that no fragments are processed). Not that this is going to tell you an awful lot on a present-day GPU.
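
As a rough sketch of that traditional setup, assuming OpenGL with SDL2 for the swap-interval call; vertexHeavyProgram, numFrames, and vertexCount are placeholders:

    // Sketch: isolate the vertex stage (OpenGL; names are placeholders).
    // With a zero-size viewport no fragments are ever generated, so only
    // vertex fetch and vertex shading remain on the GPU.
    SDL_GL_SetSwapInterval(0);           // turn off vertical sync (SDL2)
    glUseProgram(vertexHeavyProgram);    // linked *without* a geometry shader
    glViewport(0, 0, 0, 0);              // zero-size viewport: no fragments

    for (int frame = 0; frame < numFrames; ++frame) {
        glDrawArrays(GL_TRIANGLES, 0, vertexCount);
    }
    glFinish();                          // wait for the GPU before stopping the timer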


As already stated, GPUs have had unified shaders for a while now (the same hardware runs the vertex, hull, domain, geometry, and fragment shaders). You can write benchmarks that try to isolate vertex processing, for example, but if your goal is to improve the performance of your game/engine, you'd be better off benchmarking your own code and finding out what your bottlenecks are, and under what conditions they happen.

 

For example, if you were to write a benchmark to stress vertex processing, you could write a routine that renders to a small section of your framebuffer (so it fits in the FB and z caches) and draws front to back, so most of the pixels are culled by early/hi-z. You'd stay within a draw-call budget (you'd have to figure out what that is), minimize state changes, and draw a minimum of 1000 triangles per call (I don't know if that number is up to date). You'd also disable the geometry shader and, potentially, the tessellation stages, and use a simple constant-color fragment shader. But what would this really get you?
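
A hedged sketch of that layout, again assuming OpenGL; the Batch record, the 64x64 region, and the counts are illustrative placeholders rather than tuned numbers:

    #include <algorithm>
    #include <vector>

    // Placeholder batch record; GLsizei comes from the GL headers.
    struct Batch {
        float   depth;            // representative depth, for sorting
        GLsizei indexCount;       // ~3000+ indices, i.e. >= 1000 triangles
        size_t  indexByteOffset;  // offset into the bound index buffer
    };

    // Trivial constant-color fragment shader, as described above.
    const char* constantFS = R"(
        #version 330 core
        out vec4 color;
        void main() { color = vec4(0.2, 0.4, 0.6, 1.0); }  // constant color
    )";

    void drawVertexStress(std::vector<Batch>& batches)
    {
        glEnable(GL_SCISSOR_TEST);
        glScissor(0, 0, 64, 64);  // small region: fits the FB and z caches

        // Sort nearest-first so later triangles fail the depth test early
        // (early/hi-z) and fragment work stays minimal.
        std::sort(batches.begin(), batches.end(),
                  [](const Batch& a, const Batch& b) { return a.depth < b.depth; });

        for (const Batch& b : batches) {
            glDrawElements(GL_TRIANGLES, b.indexCount, GL_UNSIGNED_INT,
                           reinterpret_cast<const void*>(b.indexByteOffset));
        }
    }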

 

Also, you might want to read up on how modern GPUs work, so I'll put these links here:

https://developer.nvidia.com/content/life-triangle-nvidias-logical-pipeline

https://fgiesen.wordpress.com/2011/07/09/a-trip-through-the-graphics-pipeline-2011-index/


 

What knowledge do you hope to extract from that data, and why would you be confident that the data actually supports that knowledge?

 

I am planning to test how my application (a game) behaves when the GPU is stressed, and I wanted to find out whether the application would actually crash.



 

Thanks for your suggestions; I will look into the links. FYI, I am just trying to test my application and note its behavior under various kinds of stress.



 

Thanks; I will look into the procedure you suggested.
