Floats Across Networks

8 comments, last by DanH 21 years, 9 months ago
I've noticed people state that sending floats between machines causes problems. Can anyone explain what specifically the problem is? I've been looking at the Quake 2 source and JC seems to be sending floats just fine. The only thing he does is address endianness:

void MSG_WriteFloat (sizebuf_t *sb, float f)
{
    union
    {
        float f;
        int   l;
    } dat;

    dat.f = f;
    dat.l = LittleLong (dat.l);

    SZ_Write (sb, &dat.l, 4);
}

All 'LittleLong()' is, is a call through a function pointer that either swaps the byte order or does nothing, depending on what the CPU is. So what's the problem?
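(For readers who haven't seen the Q2 helpers: on a big-endian host the LittleLong-style pointer ends up calling a plain 32-bit byte swap. A minimal sketch of that kind of swap, with an assumed function name rather than the actual id code, might look like this:)

#include <stdint.h>

/* Sketch of a generic 32-bit byte swap of the sort a LittleLong-style helper
   would call on a big-endian host. Name and details are illustrative only,
   not the actual Quake 2 source. */
static uint32_t Swap32 (uint32_t v)
{
    return ((v & 0x000000FFu) << 24) |
           ((v & 0x0000FF00u) <<  8) |
           ((v & 0x00FF0000u) >>  8) |
           ((v & 0xFF000000u) >> 24);
}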
The problem is just what you said, Big-Endian vs Little-Endian. Also, there may be machines that don't use the IEEE format for floating-point numbers... etc.


~~~~
Kami no Itte ga ore ni zettai naru!
神はサイコロを振らない!
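For what it's worth, a quick way to see at runtime which camp a machine falls into (just a sketch, not anything from the Q2 source):

#include <stdio.h>

int main (void)
{
    unsigned int one = 1;
    /* On a little-endian CPU the least significant byte comes first in memory. */
    unsigned char first_byte = *(unsigned char *)&one;
    printf ("%s-endian\n", first_byte ? "little" : "big");
    return 0;
}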
As long as we're talking Windows only, I assume there are no problems?
-------------Ban KalvinB !

Well, I think it was fingh who said there was a problem. Irrespective of OS, I don't see why there's a problem if it's just endianness. That Q2 code compiles for Linux and Solaris too...
Yep. I had a very specific problem with floats.

Floats are typically encoded via some IEEE standard. Reversing the byte order doesn't preserve the encoding, as the encoding is not done on byte boundaries. We're using gcc 2.95 on Solaris, with VC6 on win2k.

Google search for XDR (RFC 1014) and float. You will find the information you need there. It is not a straight "byte-swap" operation in all cases.

As someone asked earlier, yes, if communicating between like systems, don't worry about byte-order at all.


[edited by - fingh on July 17, 2002 9:15:56 PM]
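For readers following the XDR pointer: RFC 1014 specifies IEEE single-precision floats transmitted in big-endian order, so on hosts that already use IEEE 754 natively the wire conversion really does reduce to a byte swap. A rough sketch of that approach (hypothetical helper names, and it assumes both ends are IEEE; non-IEEE machines need a real format conversion, which is the case XDR exists to cover):

#include <arpa/inet.h>   /* htonl / ntohl */
#include <stdint.h>
#include <string.h>

/* Convert a native IEEE 754 binary32 float to big-endian wire order and back. */
uint32_t float_to_net (float f)
{
    uint32_t bits;
    memcpy (&bits, &f, sizeof bits);   /* copy avoids aliasing problems */
    return htonl (bits);               /* host order -> big-endian wire order */
}

float net_to_float (uint32_t wire)
{
    uint32_t bits = ntohl (wire);
    float f;
    memcpy (&f, &bits, sizeof f);
    return f;
}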
Okay, I did some testing today. Firstly, to be honest, I've never used an OS besides Linux and Windows or a non-x86 processor for my desktop, and I'm not yet a 'real' professional developer in any way, so I have limited real-world experience (especially with networking across CPU platforms I haven't been able to use). I did this test on a Sun Ultra 60 UltraSPARC II workstation (a big-endian system, running Debian 2.2) and my home Athlon system (a little-endian system, running Debian Unstable). Here is the output of a 'dump various 32-bit floating-point numbers to a binary file' test:
Little (Athlon):
    1024.0f:    00000000 00000000 10000000 01000100
   -1024.0f:    00000000 00000000 10000000 11000100
     512.0f:    00000000 00000000 00000000 01000100
       7.0f:    00000000 00000000 11100000 01000000
      13.0f:    00000000 00000000 01010000 01000001
     -21.0f:    00000000 00000000 10101000 11000001
  101010.0f:    00 49 c5 47
 2020202.0f:    50 9b f6 49
30303030.0f:    9b 31 e7 4b
   1010.20f:    cd 8c 7c 44
  20202.88f:    c3 d5 9d 46
   30.1495f:    2d 32 f1 41

Big (SPARC):
    1024.0f:    01000100 10000000 00000000 00000000
   -1024.0f:    11000100 10000000 00000000 00000000
     512.0f:    01000100 00000000 00000000 00000000
       7.0f:    01000000 11100000 00000000 00000000
      13.0f:    01000001 01010000 00000000 00000000
     -21.0f:    11000001 10101000 00000000 00000000
  101010.0f:    47 c5 49 00
 2020202.0f:    49 f6 9b 50
30303030.0f:    4b e7 31 9b
   1010.20f:    44 7c 8c cd
  20202.88f:    46 9d d5 c3
   30.1495f:    41 f1 32 2d

To me, this looks to be a simple endian "byte-swap" situation. Can you give me any numbers, data types (instead of float), or CPUs that would exhibit differing behaviour? I tried to pick numbers that would alter the three fields of a float (sign, exponent, mantissa) as much as possible, but I may have missed something obvious, of course. Thanks for your time.



[edited by - Null and Void on July 18, 2002 3:44:39 AM]
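For anyone who wants to reproduce this, a small program along these lines will produce a dump like the one above (a sketch only, not Null and Void's actual test code):

#include <stdio.h>
#include <string.h>

/* Print the raw bytes of a float in memory order (lowest address first). */
static void dump_float (const char *label, float f)
{
    unsigned char bytes[sizeof f];
    size_t i;

    memcpy (bytes, &f, sizeof f);
    printf ("%12s:", label);
    for (i = 0; i < sizeof bytes; i++)
        printf (" %02x", bytes[i]);
    printf ("\n");
}

int main (void)
{
    dump_float ("1024.0f", 1024.0f);
    dump_float ("101010.0f", 101010.0f);
    dump_float ("30.1495f", 30.1495f);
    return 0;
}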

Hmm, I'm still thinking that your issue may have been something else, fingh...

The results above suggest it's a simple endian byte-ordering issue, as does the Q2 code, which we know cross-compiles successfully.

As for the encoding not being done on byte boundaries, I don't think that matters: to the native CPU, endianness is irrelevant. It's all about the order of the 4 bytes that make up the float; the bits within each byte aren't reversed. (For those not familiar with how 32-bit IEEE floats work: there's a sign bit, an 8-bit exponent, and a 23-bit mantissa.)
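To make that layout concrete, here's a small sketch that pulls those three fields out of a float's bit pattern (illustrative only; it assumes an IEEE 754 host):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main (void)
{
    float f = -21.0f;
    uint32_t bits;

    memcpy (&bits, &f, sizeof bits);

    unsigned sign     =  bits >> 31;           /* 1 bit            */
    unsigned exponent = (bits >> 23) & 0xFF;   /* 8 bits, bias 127 */
    unsigned mantissa =  bits & 0x7FFFFF;      /* 23 bits          */

    printf ("sign=%u exponent=%u mantissa=0x%06X\n", sign, exponent, mantissa);
    return 0;
}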

The only issue I know of with floats across machines is that some FPUs end up with slightly different results, so if you were deriving information from the floats locally (perhaps for prediction, etc.) you could end up with different answers.



The problem occurred and was solved many months ago; I don't remember the exact solution. Our current project doesn't need to send floats across platforms, so it is no longer an issue. Thanks for the input, guys.

I was wrong, however, in that you can byte-swap floats correctly in most situations (see the previously referenced documents for exclusions). The precision issue still remains, though, and for some applications it isn't trivial, nor is it negligible.
In some cases, 4.999 translating to 4.9895 is okay; in some cases it is not. I'd concede that in most games it would in fact be negligible (but not in bio-medical applications).






Well, it wasn't really a case of trying to prove you wrong, fingh; I just wanted to get to the bottom of the issue, and I think between us all we've managed it. Good effort, all.
No worries DanH, I don't take it that way at all. I make mistakes too, and those are part of what experience is all about. If I think I might be wrong, I write a test app. In this case it proved that I was wrong (yes, I wrote my own Win2k-to-Solaris test; no offense to Null and Void, but Solaris is quite different from Linux). In my work I typically handle server-to-server communications, and we never need to send floats. Someone programming a 3D client would probably have a different experience.

Discussion is a good thing... thanks.

