Are HLSL results repeatable across different cards?

I'm playing around with some random map generation and trying a few different methods such as Perlin noise and hills (using MDX 1.1 and C#), and I will probably settle on a mixture of both. However, I would love to speed the process up, as the levels I'm creating are very large and the processing time is a little too long. I have done some work with shaders using HLSL 2.0 to speed the process up a lot. My question is: if I run it on a GPU from NVIDIA and then on a GPU from ATI or Intel etc., will the final results be the same? Are there HLSL functions that will give different results? Basically, I want to run a formula over large textures and use the results for an RTS game map, and obviously each player would need the same results. Could this also be a problem when using the GPU for physics?
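To illustrate the kind of thing I mean, here is a rough ps_2_0 sketch that evaluates a simple hills-plus-noise formula per texel of a render target. It is purely illustrative: the constant names, hill parameters and noise texture are made up for the example, not my actual shader.

```hlsl
// Illustrative ps_2_0 sketch: evaluate a "hills plus noise" height formula
// for every texel of a render target. All names and constants here are
// made up for the example.
sampler2D NoiseTex : register(s0);   // pre-filled random-value texture

float4 HillParams[4];                // xy = centre (UV), z = radius, w = height
float2 NoiseScale;                   // tiling of the noise lookup

float4 HeightPS(float2 uv : TEXCOORD0) : COLOR0
{
    float height = 0.0f;

    // Add a smooth falloff around each hill centre.
    for (int i = 0; i < 4; ++i)
    {
        float d = distance(uv, HillParams[i].xy) / HillParams[i].z;
        height += HillParams[i].w * saturate(1.0f - d * d);
    }

    // Small noise contribution sampled from the precomputed texture.
    height += tex2D(NoiseTex, uv * NoiseScale).r * 0.25f;

    return float4(height, height, height, 1.0f);
}

technique GenerateHeight
{
    pass P0
    {
        PixelShader = compile ps_2_0 HeightPS();
    }
}
```

The C#/MDX side would just set the constants and render a full-screen quad into the texture; the shader above is the per-texel part I'd like to be identical on every player's card.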
If the card is DirectX 9.0 compliant, then it should work regardless of manufacturer.

As for physics on the GPU, the arithmetic operations used to perform these simulations have been unified between manufacturers almost from the get-go. No worries.


Actually, especially on SM2-level hardware, there are differences between the internal numerical precision of shader pipelines. Depending on your algorithms and numbers of interdependent passes, the shader results can be quite different when comparing, for example, Ati and nVidia cards' outputs. Most of the time, the results do look the same when presented on screen as color values (but this apparently isn't your scenario).

The shader model 3 architecture and up define more strict standards for numerical precision - if I remember correctly, the full 32-bit IEEE floating-point precision is required internally on these cards.

Niko Suni

Quote:Original post by Nik02
The shader model 3 architecture and up define more strict standards for numerical precision - if I remember correctly, the full 32-bit IEEE floating-point precision is required internally on these cards.


Yes, every SM3 card needs to support 32-bit floating point, but if the HLSL shader contains half data types, the cards can still make use of the 16-bit format. Additionally, the spec does not enforce full IEEE compatibility when it comes to the calculation rules. If you operate at the limits of the floating-point format (near zero, infinity, …), the results can still be different.
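To illustrate (a made-up sketch, not taken from any particular spec or driver): the same expression written with half and float can legitimately produce different bit patterns on different hardware.

```hlsl
// Illustrative sketch only. On SM2-class hardware a 'half' computation may
// run at 16-bit (or 24-bit) internal precision, so its result can differ
// between vendors even for identical inputs. The 'float' path is 32-bit on
// SM3+ hardware, but strict IEEE rounding/special-case behaviour is still
// not required, so edge cases (denormals, infinity) may still vary.
float2 PrecisionDemo(float a, float b, float c)
{
    half  hSum = (half)a * (half)b + (half)c;  // partial-precision path
    float fSum = a * b + c;                    // full-precision path
    return float2(hSum, fSum);
}
```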
Quote:Original post by Demirug
Quote:Original post by Nik02
The shader model 3 architecture and up define more strict standards for numerical precision - if I remember correctly, the full 32-bit IEEE floating-point precision is required internally on these cards.


Yes, every SM3 card needs to support 32-bit floating point, but if the HLSL shader contains half data types, the cards can still make use of the 16-bit format. Additionally, the spec does not enforce full IEEE compatibility when it comes to the calculation rules. If you operate at the limits of the floating-point format (near zero, infinity, …), the results can still be different.


True that. The point is, it's not wise to assume that the output of two different cards is the same, given the same input (except when talking about high-end workstation cards where a particular precision is usually guaranteed).

Niko Suni

Thanks guys,
I did kind of suspect this, as Combustion's network rendering states that all machines have to have the same graphics card to produce the same output when using hardware rendering. I guess I'm just going to have to make the CPU-based stuff faster. The concept is to produce maps for multiplayer games from identical seeds in a pseudo-random generator, so that every RTS match gets a new map; if the calculations come out differently on different cards, the games will never match :(.

Though I will have to do some experiments; perhaps I can speed some things up by careful choice of functions. I'll post any results I get.

This topic is closed to new replies.
