How do you keep a game in sync? (multiplayer)

36 comments, last by Memir 20 years, 8 months ago
I remember that we had a similar problem with floating-point calculations leading to cumulative errors. We found that there is a compiler switch which improves floating-point consistency, and it did in fact help us get things in sync.

Within Visual Studio you can find it in the project settings under C/C++, Optimization, Floating Point Consistency. I think it's the /Op switch. Maybe you could try this one.
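For reference, the same switch can be passed on the command line. (The /Op flag belongs to the VC6-era compiler; later versions of Visual C++ replaced it with the /fp family, so the right spelling depends on your compiler version.)

```shell
# VC6-era: "Improve Float Consistency" - disables some FP optimizations
# so intermediate results are stored at declared precision
cl /Op main.cpp

# Visual Studio 2005 and later replaced /Op with the /fp switches:
cl /fp:strict main.cpp
```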

Regards,
Chris.



Let's kick some serious ass... Buy BomberFUN online now!
visit http://www.bomberfun.com
You could add the SoftFloat library so that every computer generates the same values for the computations (since it does not rely on a floating-point co-processor (FPU)).

check it out here:
http://www.jhauser.us/arithmetic/SoftFloat.html

Just an idea.

My Gamedev Journal: 2D Game Making, the Easy Way

---(Old Blog, still has good info): 2dGameMaking
-----
"No one ever posts on that message board; it's too crowded." - Yoga Berra (sorta)

I just read through the entire thread (which some people here obviously did not do), and I am entirely convinced that Calvin has it correct. Updating the position of the balls at the END of the shot won't fix anything.

You need to make sure that BOTH MACHINES are synched to the same frame rate. That should fix it, and should also be the simplest of all of the proposed solutions.

::You need to make sure that BOTH MACHINES are synched to the
::same frame rate.

What for? He could run the simulation in "cycles". A slower computer would just take longer to finish the animation. He should still end up with all balls in the same position, if the FPUs behave the same.

Regards

Thomas Tomiczek
THONA Consulting Ltd.
(Microsoft MVP C#/.NET)
dear memir,

i'm a vb programmer and i experience the same floating point problem as you do. i thought it was a fault of vb, but it's obvious now that it's a matter of the FPU.

in vb there's a function called:
Round(expression [, numdecimalplaces])
the first param is your floating point expression and the second indicates how many places to the right of the decimal are included in the rounding.

so my point is, if you want all "5 * 5" expressions calculated as "25.0000" on all computers, you need a call like this:
calc = 5 * 5
calc = Round(calc, 4)

this is how i solved this problem in vb.

i hope it helps...

===========================


What we do in life, echoes in eternity!
> To simulate motion you MUST have time as a reference.

I too think the timebase is the problem, since all other quantities in the equations are the same across all machines.

> IEEE compliant FPUs can produce marginally different results for some calculations.

Nope. IEEE is a standard CPU makers must adhere to. IEEE-compliant computations always yield the same results, all the time (*). I once worked on a rendering system that ran in parallel across a network of heterogeneous architectures (intel, dec, ibm, sun, cray, sgi) and they all ran the _EXACT_SAME_ calculations all the time {btw, the system which is still marketed today began its life in the late 80s}.

-cb


(*) Not all CPUs work under this standard in their normal running mode, unless you coerce them to. I have the late DEC Alpha in mind here; it did work like a charm under IEEE rules, but at a much lower speed than DEC's much-advertised benchmarks... {grin}
quote:Original post by cbenoi1
> To simulate motion you MUST have time as a reference.

I too think the timebase is the problem, since all other quantities in the equations are the same across all machines.

> IEEE compliant FPUs can produce marginally different results for some calculations.

Nope. IEEE is a standard CPU makers must adhere to. IEEE-compliant computations always yield the same results, all the time (*). I once worked on a rendering system that ran in parallel across a network of heterogeneous architectures (intel, dec, ibm, sun, cray, sgi) and they all ran the _EXACT_SAME_ calculations all the time {btw, the system which is still marketed today began its life in the late 80s}.

-cb

(*) Not all CPUs work under this standard in their normal running mode, unless you coerce them to. I have the late DEC Alpha in mind here; it did work like a charm under IEEE rules, but at a much lower speed than DEC's much-advertised benchmarks... {grin}


Are you sure about the IEEE standard? The statement you're responding to comes from Patrick Dickinson's article; I have never read the IEEE spec or performed any tests myself. But I'm not entirely sure how to interpret your post: do you mean that the problem with minor differences in FPU results on x86 processors doesn't exist? Or do you mean that these processors are not IEEE-compliant, or at least not IEEE-compliant by default, and programmers don't know how to enable this mode? Your observations of the rendering system don't really prove anything; saying that results never differ, based on empirical observation alone, isn't very convincing. I'm curious whether anyone has actually read the IEEE spec (which is supposed to be some kind of monster spec?) and knows the facts.

> Are you sure about the IEEE standard

Yes. IEEE 754 defines how the bits are organized, how they are interpreted, and how computations are performed; it is precisely the aim of the standard to make sure that all FPUs generate the same numbers through the same computation sequence, otherwise that would defeat the purpose of a standard. CPUs that don't do exactly this can't be IEEE-compliant. Whether a CPU implements little-endian instead of big-endian storage is left out, and it's not the issue here.

This is what IEEE 754-1985 covers (as extracted from the formal text):

This standard specifies basic and extended floating-point number formats; add, subtract, multiply, divide, square root, remainder, and compare operations; conversions between integer and floating-point formats; conversions between different floating-point formats; conversions between basic-format floating-point numbers and decimal strings; and floating-point exceptions and their handling, including nonnumbers.

> not IEEE-compliant by default and programmers don''t know how to enable this mode

I'm not a compiler/linker expert, but the DEC Alpha version of Visual C++ 4.2 had a special linker switch that enabled IEEE-compliant computations (/FP_IEEE?). I know that AIX (IBM) and Irix (SGI) have similar options. Not everyone knows that, because not everyone is faced with identical computations on multiple heterogeneous architectures.

> The statement you''re responding to comes from Patrick Dickinson''s article

The statement "... different software builds can exhibit different behavior on the same hardware due to changes in the storage of floating point values." is particularly dubious when taken at face value, but can be interpreted differently {see my comments below}. The statement "It may seem surprising, but different FPUs can produce marginally different results for some calculations, even though all units are IEEE compliant." looks dubious also; maybe they are compliant, but the FPUs are not running in IEEE mode?

I think the author wanted to emphasize that compilers do not generate exactly the same assembly code across architectures and compiler options, and thus the builds' propensity to accumulate error in a C/C++ code sequence can differ. This is certainly true between 'debug' and 'retail' builds, since compilers generally do not generate the same code, and it is a valid basis for discussing integer-based calculations.

Hope this helps.

-cb

This topic is closed to new replies.
