GPU Stress

7 comments, last by mooni 8 years, 6 months ago

Hello

Does anyone know of a benchmark that can stress a particular pipeline stage in isolation? For example, stressing only the vertex shaders, or increasing load using tessellation alone. I am trying to analyze how gameplay is affected under different kinds of stress.

Also, if I want to create a benchmark myself, say to stress the vertex shader stage, does simply adding some dummy calculations in the vertex shader do the trick?

It would be great if someone could give guidelines on creating a simple GPU stress benchmark.

Thanks.


You're not going to find such a thing, because modern GPUs aren't architecturally segregated by "pipeline stage"; as such, there is no direct answer to your question. Architecturally, GPUs are just banks of processors, and new "pipeline stages" are added by extending new architectural capabilities to all of those processors.

With that said, there are well-understood GPU benchmarking suites, such as GPUBench, that are widely used in research and are designed to stress different architectural elements.

Thanks for your response; I will look into GPUBench. So if I want to create my own benchmark, how do I approach it?

For instance, just to stress the GPU, is it enough to create several dummy calculations in a loop in my shaders?


For instance, just to stress the GPU, is it enough to create several dummy calculations in a loop in my shaders?
Most likely NOT, unless those calculations contribute to your results, i.e. unless they propagate to other stages. If by "dummy" you mean "calculate something and throw away the result", that code will fall to dead-code elimination right away.
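
To illustrate the point, here is a minimal sketch of a vertex shader whose extra work survives dead-code elimination, because the loop result feeds gl_Position. The GLSL is wrapped in a C++ string literal; the uniform name uIterations is made up, and using a uniform (rather than a constant) also stops the compiler from folding the loop away at compile time:

```cpp
// Hypothetical vertex shader source for scalable ALU load.
const char* kStressVS = R"(
    #version 330 core
    layout(location = 0) in vec3 aPos;
    uniform int uIterations;   // raise this to scale the dummy work
    void main() {
        vec3 p = aPos;
        for (int i = 0; i < uIterations; ++i)
            p = sin(p * 1.0001) + cos(p);   // cheap transcendental churn
        // Fold the result into the output with a tiny weight so the work
        // cannot be eliminated, but the vertex barely moves.
        gl_Position = vec4(aPos + p * 1e-6, 1.0);
    }
)";
```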

Previously "Krohm"

Thanks for your response; I will look into GPUBench. So if I want to create my own benchmark, how do I approach it?

For instance, just to stress the GPU, is it enough to create several dummy calculations in a loop in my shaders?

Can we skip the question for a moment and assume you already have whatever data you'd get from this process: what knowledge do you hope to extract from that data, and why would you be confident that the data would actually support it?

Stress-testing the vertex stage is hard nowadays, insofar as the vertex stage runs on the exact same execution units as, e.g., the fragment shader, and it has several non-obvious bottlenecks, such as texture fetch, which you can "stress out" without really putting much load on the execution unit that runs the vertex shader.

But if you are interested in stress-testing the vertex stage, you would traditionally turn off vertical sync, disable the geometry shader (if any), and set a zero-size viewport (so that no fragments are processed). Not that this is going to tell you an awful lot on a present-day GPU.
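
In OpenGL terms, that recipe might look roughly like the fragment below. This is a sketch, not a complete program: it assumes an OpenGL 3.3+ context created with GLFW, and drawHeavyVertexBatch() is a hypothetical stand-in for your vertex-heavy draw calls. A timer query gives you the GPU time for the pass:

```cpp
// Sketch of the traditional vertex-stage stress setup.
glfwSwapInterval(0);               // turn off vertical sync
glViewport(0, 0, 0, 0);            // zero-size viewport: no fragments generated

GLuint query;
glGenQueries(1, &query);
glBeginQuery(GL_TIME_ELAPSED, query);
drawHeavyVertexBatch();            // hypothetical helper: vertex work only
glEndQuery(GL_TIME_ELAPSED);

GLuint64 elapsedNs = 0;            // GL_QUERY_RESULT waits for the GPU
glGetQueryObjectui64v(query, GL_QUERY_RESULT, &elapsedNs);
printf("vertex pass: %.3f ms\n", elapsedNs / 1e6);
```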

As already stated, GPUs have had unified shaders for a while now (the same hardware runs the vertex, hull, domain, geometry, and fragment shaders). You can write benchmarks that try to isolate vertex processing, for example, but if your goal is to improve the performance of your game/engine, you'd be better off benchmarking your own code to see what your bottlenecks are, and under what conditions they happen.

For example, if you were to write a benchmark to stress vertex processing, you could write a routine that renders to a small section of your frame buffer (so it fits in the framebuffer and z caches) and draws front to back so that most of the pixels are culled by early/hi-z. You'd stay within a draw-call budget (you'd have to figure out what that is), minimize state changes, and draw a minimum of about 1000 triangles per call (I don't know if that number is up to date). You'd also disable the geometry shader and potentially the tessellation stages, and use a simple constant-color fragment shader. But what would this really get you?
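
A hedged sketch of that state setup in OpenGL follows; the Batch struct and batchesSortedFrontToBack container are hypothetical names, and the point is only the depth state, the small render area, and the trivial fragment shader:

```cpp
// State for the vertex-stress recipe above. Front-to-back submission
// with GL_LESS lets early-z reject most covered fragments.
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
glViewport(0, 0, 64, 64);          // small target, stays in the FB/z caches

// Trivial constant-color fragment shader so per-pixel cost is negligible.
const char* kFlatColorFS = R"(
    #version 330 core
    out vec4 color;
    void main() { color = vec4(1.0, 0.0, 1.0, 1.0); }
)";

// Hypothetical submission loop: batches pre-sorted nearest-first, no state
// changes between draws, a few thousand triangles per call.
for (const Batch& b : batchesSortedFrontToBack)
    glDrawArrays(GL_TRIANGLES, b.firstVertex, b.vertexCount);
```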

Also, you might want to read up on how modern GPUs work, so I'll put these links here:

https://developer.nvidia.com/content/life-triangle-nvidias-logical-pipeline

https://fgiesen.wordpress.com/2011/07/09/a-trip-through-the-graphics-pipeline-2011-index/

-potential energy is easily made kinetic-

Thanks for your response; I will look into GPUBench. So if I want to create my own benchmark, how do I approach it?

For instance, just to stress the GPU, is it enough to create several dummy calculations in a loop in my shaders?

Can we skip the question for a moment and assume you already have whatever data you'd get from this process: what knowledge do you hope to extract from that data, and why would you be confident that the data would actually support it?

I am planning to test how my application (a game) behaves when the GPU is stressed, and I want to find out whether the application would actually crash.

As already stated, GPUs have had unified shaders for a while now (the same hardware runs the vertex, hull, domain, geometry, and fragment shaders). You can write benchmarks that try to isolate vertex processing, for example, but if your goal is to improve the performance of your game/engine, you'd be better off benchmarking your own code to see what your bottlenecks are, and under what conditions they happen.

For example, if you were to write a benchmark to stress vertex processing, you could write a routine that renders to a small section of your frame buffer (so it fits in the framebuffer and z caches) and draws front to back so that most of the pixels are culled by early/hi-z. You'd stay within a draw-call budget (you'd have to figure out what that is), minimize state changes, and draw a minimum of about 1000 triangles per call (I don't know if that number is up to date). You'd also disable the geometry shader and potentially the tessellation stages, and use a simple constant-color fragment shader. But what would this really get you?

Also, you might want to read up on how modern GPUs work, so I'll put these links here:

https://developer.nvidia.com/content/life-triangle-nvidias-logical-pipeline

https://fgiesen.wordpress.com/2011/07/09/a-trip-through-the-graphics-pipeline-2011-index/

Thanks for your suggestions; I will look into the links. FYI, I am just trying to test my application and note its behavior under various kinds of stress.

Stress-testing the vertex stage is hard nowadays, insofar as the vertex stage runs on the exact same execution units as, e.g., the fragment shader, and it has several non-obvious bottlenecks, such as texture fetch, which you can "stress out" without really putting much load on the execution unit that runs the vertex shader.

But if you are interested in stress-testing the vertex stage, you would traditionally turn off vertical sync, disable the geometry shader (if any), and set a zero-size viewport (so that no fragments are processed). Not that this is going to tell you an awful lot on a present-day GPU.

Thanks, I will look into the procedure you suggested.

This topic is closed to new replies.
