deffer

float accuracy on different CPUs (multiplayer physics)


Hi. I was wondering whether I could rely on the same code executed on two different CPUs giving me the same results (identical bit for bit) - I need this for an accurate physics simulation in a multiplayer game. Let's assume the code is compiled only once, and the two machines run the exact same executable. What I'm afraid of is that newer CPUs have a habit of tweaking the code a bit while executing it. I don't know how far a CPU can go with that. The second thing is the possibility of executing instructions with different precision settings. I have a feeling that my app could force them at the beginning of execution, but I'm not sure... Anyway, I'm quite green on the topic, so would anybody mind telling me what I should avoid? Or that I'm just going nuts ;)
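
On the "force the precision settings at startup" idea: a minimal sketch, assuming MSVC targeting 32-bit x87 code (where _controlfp and the precision-control flags are available). It narrows the room for differences, but by itself it is not a guarantee of bit-exact results across vendors.

#include <float.h>   // _controlfp, _PC_53, _MCW_PC - MSVC-specific

// Force the x87 FPU to round every operation to 53-bit (double) precision
// instead of the default 64-bit significand, and use round-to-nearest,
// so both machines round the same way. Call once at startup.
void ForceFpuPrecision()
{
    _controlfp(_PC_53,   _MCW_PC);   // precision control -> double
    _controlfp(_RC_NEAR, _MCW_RC);   // rounding mode -> nearest (the usual default)
}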

I believe Intel and AMD use different precisions for floating point calculations internally (80-bit vs 64-bit or something), so no, I wouldn't count on getting exactly the same result for a calculation.

Guest Anonymous Poster
Yeah, I don't think it would remain in sync for long (if I recall correctly, even Age of Empires 3 had this problem in multiplayer, and they were forced to cut down on the role of physics in multiplayer). I suppose what you could do then is have the host in charge of the physics and just update the positions of objects as you would any other unit/player...

Guest Anonymous Poster

x87 should handle everything as 80-bit extended precision by default. (Intel specifies this is still true for the P4 in their instruction set reference manuals.)
Now, the internal format is often wider still; AMD, I think, uses 90-bit registers for FP values - 3 bits of metadata and 87 bits of value.
I don't know Intel's internal format.
For complex instructions that have intermediate results (like square root, reciprocal, and I think division), each intermediate result is rounded to fit the internal format, and only at the end of the instruction is it rounded back to the 80-bit EP format. That might produce single-bit rounding differences in the final result - though it would certainly be an unlikely outcome.
A difference that small should never affect the value of a single or double, but rounding back to those formats is only done when writing back to memory. As long as a value is on the FP stack it remains in 80-bit EP format.

I don't know how internal rounding is handled with SSE, but I would think any intermediate results would be rounded to the SSE-specified format (double or single), which would eliminate any possible differences in rounding.
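
To make the "stays in 80-bit on the FP stack until written to memory" point concrete, here is a tiny sketch. Whether it actually prints "not equal" depends entirely on the compiler, the optimisation level, and whether it emits x87 or SSE code, so treat it as an illustration, not a guarantee.

#include <cstdio>

double Third(double x) { return x / 3.0; }   // may be evaluated at 80-bit precision on x87

int main()
{
    volatile double x = 1.0;
    volatile double stored = Third(x);       // forced through a 64-bit store to memory
    // The second call's result may still sit in an 80-bit register here, so on
    // an x87 build this comparison can fail even though both values "should" match.
    std::printf(stored == Third(x) ? "equal\n" : "not equal\n");
    return 0;
}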

Well, this really 5uck5, then.
Can I get a second opinion? [grin]

Now seriously, that means clients would have to be in exact sync with the host. Frame to frame. Receiving positions/velocities/and-what-not all the time.

Isn't that overkill?

Quote:
Original post by Anonymous Poster

x87 should handle everything as 80-bit extended precision by default. (Intel specifies this is still true for the P4 in their instruction set reference manuals.)
Now, the internal format is often wider still; AMD, I think, uses 90-bit registers for FP values - 3 bits of metadata and 87 bits of value.

[...]
which would eliminate any possible differences in rounding.


So you think I could ignore the differences, as they will likely be rounded away and thus disappear?

"Likely" doesn't seem good enough, though.
But I could be sending updates from the host every now and then. At least not every turn...

Hm... I've never heard of any differences between Intel's and AMD's representations, but even so, wouldn't it still work to set a reduced precision in the floating point control word?
But I suppose I wouldn't be too surprised if a few small errors crept in anyway.
Quote:
Original post by deffer
Now seriously, that means clients would have to be in exact sync with the host. Frame to frame. Receiving positions/velocities/and-what-not all the time.

Isn't that overkill?
It depends; it is what most games do.

A deterministic simulation has the obvious advantage of allowing a network model based on input reflection, where you only have to transmit a small and constant amount of data per player. Another benefit is having small replay files.

But it's also *much* harder (although not impossible) to compensate for lag with such a method. Considering that you'll also be forced into using fixed time steps, have to deal with nasty synchronization bugs, and probably be forced to use fixed point (sketched below), I'd say it's generally not worth it.
Another annoying issue is that all players are forced to stall their simulation if a single packet from one of the players is dropped. This can be covered up somewhat, but it becomes a major issue with too many players.

Still, it does depend a lot on the game type; for an RTS you probably don't have a choice due to the large number of units in the world.
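
On the fixed-point remark above: integer arithmetic is bit-exact on every CPU, so a deterministic simulation usually swaps its floats for something along these lines. A bare-bones 16.16 sketch, not a complete type - the format and names are just illustrative.

struct Fixed
{
    int raw;                                    // stored as value * 65536 (16.16)

    static Fixed FromInt(int i)     { Fixed f; f.raw = i << 16; return f; }
    float ToFloat() const           { return raw / 65536.0f; }   // for rendering only

    Fixed operator+(Fixed o) const  { Fixed f; f.raw = raw + o.raw; return f; }
    Fixed operator-(Fixed o) const  { Fixed f; f.raw = raw - o.raw; return f; }
    Fixed operator*(Fixed o) const
    {
        Fixed f;
        f.raw = (int)(((long long)raw * (long long)o.raw) >> 16); // widen to avoid overflow
        return f;
    }
};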

I can tell you with confidence that it will differ between PCs with different processors. I know of programs where this was a problem. Heck, I've even tried running one of those programs on a different computer and seen the problem for myself!

Sending an update to sync things every now and then might be a good idea. It shouldn't have to be too often, though.

The ancient fdiv bug of the early Pentiums would apparently go away if you clocked the CPU down a bit. Not that I'm saying such a thing is still a problem.

Quote:
Original post by deffer
Well, this really 5uck5, then.
Can I get a second opinion? [grin]

Now seriously, that means clients would have to be in exact sync with the host. Frame to frame. Receiving positions/velocities/and-what-not all the time.

Isn't that overkill?
It doesn't mean you need perfect sync at all. What you do is have the clients and the host both run the physics, so the client sees results right away, but the host has authority over everything, and every once in a while (say every 100 ms) the host sends the clients the 'real' physics info. This way, clients see results instantly (at least from their own actions), but they never have time to get noticeably off.
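
A sketch of that client-side correction (all names here are made up for illustration): keep simulating locally, and when the host's periodic snapshot arrives, blend toward it, snapping only if the client has drifted too far.

struct BodyState
{
    float pos[3];
    float vel[3];
};

// Called on the client whenever an authoritative snapshot arrives from the host
// (e.g. every 100 ms). Small drift is smoothed away; a large error means the two
// simulations diverged badly, so just snap to the host's state.
void ApplyHostSnapshot(BodyState& local, const BodyState& host, float snapThreshold)
{
    float dx = host.pos[0] - local.pos[0];
    float dy = host.pos[1] - local.pos[1];
    float dz = host.pos[2] - local.pos[2];
    float errorSq = dx * dx + dy * dy + dz * dz;

    if (errorSq > snapThreshold * snapThreshold)
    {
        local = host;                                         // too far off: hard snap
        return;
    }
    for (int i = 0; i < 3; ++i)
    {
        local.pos[i] += (host.pos[i] - local.pos[i]) * 0.1f;  // ease toward the host
        local.vel[i]  = host.vel[i];                          // just adopt velocities
    }
}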

Ok, so I can send a 'real' update every n milliseconds.

But what if the error causes different behaviour on the macro scale? I mean, on the client a character is hit by another one, while in the host's sim it isn't hit at all (quantum theory lurking out, heh). Without immediate validation, even an update every 10 ms would be too late in this case, as a whole chain of events would already have been triggered on the client's side.

Are there counter-measures for this?
