Would a 2 byte float ever be worth it?


I am told that the minimum packet size for UDP is 64 bytes of data. My reasoning is that if you decide to send 2-byte floats instead of the regular 4-byte floats, you will have more room to send other data later on as you add it. Using data types like short, and making special classes for 2-byte floats for sending data over a network, seems pretty worth it to me. Even though the packet may be padded if you decide to send a single float, what if you send 100?

64/4 = 16 four-byte floats can fit in the minimum packet
64/2 = 32 two-byte floats can fit in the minimum packet

What do you guys think?

Modern graphics cards do have a 'half' type which is 16 bits (1 bit sign, 5 bits exponent, 10 bits mantissa) for use in shaders.

Granted, it has nothing to do with networking (aside from the whole data transfer issue), but it shows that it has been done. [smile]

I guess it essentially depends on whether you can afford the loss in precision or not.

Hehe, i dunno if it'd be worth it for that. Unless you ARE sending 100 floats, i doubt you'd really NEED to use a float16, especially not for Pac-Man :-D, and there's always the loss of precision to consider too.

edit: rawr fruny beat me

anywho, hope that helps
-Dan

It'd be possible, and reasonably easy too.


float fl;
unsigned int *fli = (unsigned int *)&fl;   /* view the float's bits */
int sign = (*fli & 0x80000000) != 0;
int exp  = *fli & 0x7F800000;
int mant = *fli & 0x007FFFFF;


That turns the float into three variables: the sign, the exponent, and the mantissa.

Now, to turn this into a two-byte float, you would do the following:


unsigned short newfloat = 0;     /* our 16-bit float */
newfloat |= sign << 15;          /* put the sign bit on the msb */
newfloat |= (exp & 0x78);        /* part of the exponent */
newfloat |= (mant & 0x000003FF); /* low 10 bits of the mantissa */


Note - If the float is bigger than the lower byte of the float allows, bad things happen. You also lose accuracy, as the exponent gets cut.

Edited - I think this is up to spec. Can anybody confirm/deny that for me?
From,
Nice coder

[Edited by - Nice Coder on January 8, 2005 12:14:46 AM]

There is no minimum packet size for UDP.

However, it's perfectly valid to bit-pack your data to the precision you need. That said, you're likely better off using fixed point than reduced-precision floating point: floating point is annoying in that, in absolute terms, it's much less precise at the large end of its range than near zero.

For example, if your world is 1000 meters across, you could spend 20 bits on each of the X and Z components, and if the height range is 200 meters, spend 18 bits on Y. You would make each step of that 20-bit value 1000/(1<<20), which would be the basic precision of your position packets (just under a millimeter). Use one of the many bit-stream classes available on the web for marshalling partial-byte data.
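That quantization scheme can be sketched like this (the function names and constants are made up for illustration, assuming the 1000-meter world and 20 bits per horizontal axis from the example above):

```c
#include <stdint.h>

#define WORLD_SIZE 1000.0          /* meters across, per the example above */
#define POS_BITS   20

/* Quantize a coordinate in [0, WORLD_SIZE) to a 20-bit integer.
   The step size is 1000/(1<<20), just under a millimeter. */
static uint32_t pack_coord(double x)
{
    const double step = WORLD_SIZE / (double)(1u << POS_BITS);
    return (uint32_t)(x / step);
}

/* Recover the coordinate; the round-trip error stays below one step. */
static double unpack_coord(uint32_t q)
{
    const double step = WORLD_SIZE / (double)(1u << POS_BITS);
    return q * step;
}
```

A coordinate like 123.456789 round-trips to within about 0.00095 meters; the resulting 20-bit values would then be marshalled with a bit-stream class as suggested.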

@Nice coder: that won't work at all. You're dropping the most important bits of the mantissa. Also, the 16-bit float format is defined as 1 sign bit, 5 exponent bits, and 10 mantissa bits. NVIDIA has classes to convert between half and float on their web site (and it's part of the OpenEXR standard, too).

Also, the code you posted won't deal with denormal numbers correctly, either.
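For reference, here is a rough sketch of a conversion into that 1/5/10 format (the function name is made up; it truncates instead of rounding and flushes denormals to zero, so it is not OpenEXR-grade):

```c
#include <stdint.h>
#include <string.h>

/* Rough float32 -> float16 (1 sign, 5 exponent, 10 mantissa) converter.
   It truncates instead of rounding, flushes denormals and underflow to
   zero, and turns NaNs into infinity; a sketch, not production code. */
static uint16_t float_to_half(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);              /* reinterpret the bits */

    uint16_t sign = (uint16_t)((bits >> 16) & 0x8000u);
    int32_t  exp  = (int32_t)((bits >> 23) & 0xFF) - 127 + 15;  /* rebias */
    uint32_t mant = bits & 0x007FFFFFu;

    if (exp <= 0)
        return sign;                             /* too small: +/- zero */
    if (exp >= 31)
        return (uint16_t)(sign | 0x7C00u);       /* too large: infinity */

    /* keep the TOP ten bits of the 23-bit mantissa, not the bottom ones */
    return (uint16_t)(sign | ((uint32_t)exp << 10) | (mant >> 13));
}
```

float_to_half(1.0f) yields 0x3C00, the half-precision encoding of 1.0.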

Sorry, i didn't look up the specs for 16-bit floating-point numbers. I basically didn't know they existed; i use only 32- and 64-bit numbers.

I was taking the only bytes which i could take; if i were just to take the msb, the thing wouldn't work well. At least now it works for numbers under 2^256.

Also, what are denormal numbers? Denormalised numbers?
I was assuming the floats it would be taking were normalised.

Thanks for the info hplus0603.
From,
Nice coder

wtf?!

I remember

float 16 bits
double 32 bits
long double 64 bits (warning slow, not supported in hardware)

and i just found out that .net considers char a 2 byte size. What is the world coming to! Just because the space is there doesn't mean we should use it when easy optimizations, such as using only the memory that you need, are so accessible.

Quote:
Original post by pTymN
wtf?!

I remember

float 16 bits
double 32 bits
long double 64 bits (warning slow, not supported in hardware)

and i just found out that .net considers char a 2 byte size. What is the world coming to! Just because the space is there doesn't mean we should use it when easy optimizations, such as using only the memory that you need, are so accessible.


For a long while now you've had to use specific typedefs to force a certain size. It goes further than that, you see: those types aren't even guaranteed to be the same size across compilers, or even across versions of the same compiler.

Quote:
Original post by pTymN
wtf?!

I remember

float 16 bits
double 32 bits
long double 64 bits (warning slow, not supported in hardware)

and i just found out that .net considers char a 2 byte size. What is the world coming to! Just because the space is there doesn't mean we should use it when easy optimizations, such as using only the memory that you need, are so accessible.
Nope,
float: 32 bits
double: 64 bits
long double: 80 or 96 bits? (not sure, I've never needed them)
char: 8 bits, unless maybe your project is set to Unicode or something.

But hey, don't take my word for it: sizeof is your friend!

I too would be interested in a good 16-bit float class btw.

Quote:
and i just found out that .net considers char a 2 byte size. What is the world coming to!


16-bit characters are there to support Unicode. What the world is coming to is the realization that there are languages other than English, and you can miss out on a lot of revenue if you ignore that.

If you want a byte, then use a byte type. The equivalence between chars and bytes is a C/C++ thing that likely won't be true in future languages.

just do this:

float i = 4096.345232f

int s = i * 2^17 = 536,916,162

i = s / 2^17 = 4096.345230f

you see, hardly any loss of precision; who cares about 0.000002?

if your player, for example, is 72 units high (float height = 72.0f), would you care if his position varied by 0.01f?

i wouldn't

s = i * 2^7 = 524,332

i = s / 2^7 = 4096.34375, a difference of 0.002. who cares about such a loss of precision, since the server updates the player position all over again anyway?

so on a +-4096 units map you only need 19 or 20 bits to represent a float

for angles i'd only send the part in front of the decimal point; no need to waste a lot of bandwidth, and since angles change pretty fast, precision doesn't matter
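The round trip described above can be sketched as a pair of helpers (the function names are made up; the 2^7 = 128 scale is from the post):

```c
#include <stdint.h>

/* Encode a float as a fixed-point integer at 2^7 = 128 steps per unit,
   giving better than 0.01 precision, as described in the post above. */
static int32_t encode_fixed(float v)
{
    return (int32_t)(v * 128.0f);   /* multiply by 2^7 and truncate */
}

static float decode_fixed(int32_t s)
{
    return s / 128.0f;              /* divide by 2^7 */
}
```

encode_fixed(4096.345232f) gives 524332, and decoding yields 4096.34375, the 0.002 difference mentioned above. A +-4096 units map at these steps spans 2^13 * 2^7 = 2^20 values, hence the roughly 20 bits.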




do you mind explaining a little bit how that works, Basisor? 4096 seems like a pretty small map, though. also, an int is 4 bytes as well; did you mean to make it a short?

thanks for any help.

no, all i do is multiply by 2^7 to get at least 0.01f precision

and yes, +-4096 is a small map, as in most Quake engines; you could of course increase it by adding another few bits

and no, i don't mean conversion to short or whatever; i just use as many bits as i need to represent the information

you don't have to use 16 bits to send 256 over the net

i guess you understood how i saved the first 2 decimals

just take a calculator and try it

> I am told that the minimum packet size for udp is 64 bytes of data.

It's more around 56 bytes. But you are talking here about a full UDP/IP frame's overhead (header and trailer bits, source and destination IPs and ports, CRC checksum, etc.), not the actual data payload an application can play with. There is no minimum payload, and the maximum is 64K.

> It is my reasoning that if you decide to send 2 byte
> floats instead of the regular 4 byte floats ...

In theory, the more data you send in a single frame's payload, the more efficient the communication link is, simply because the frame overhead becomes small relative to the payload. In the real world, routers will chop large UDP frames into 1400-byte chunks, thus raising the risk of losing data; if one chunk goes missing, the NIC stack dumps the entire UDP packet as if it were never sent. So there is a "router-friendly" maximum packet size, or MTU, of about 1400 bytes.

Regardless of the frame overhead, in a game you always want to *minimize* the amount of data you send, even if this is not the most efficient use of each frame. Smaller packets are faster to transmit (fewer clocks) and thus give a better response rate for the game interaction.

-cb

Basisor, i guess i just don't understand exactly how this all works. i mean, the whole XOR thing confuses me, really.

also, if you're not using all the bits in that integer, how do you convert the integer to something that is N bits big for sending over the wire?

btw: i thought the UDP packet minimum was 28 bytes?

Basisor used "^" to mean "power", not XOR.

However, I don't understand what he wants to show. A 16-bit floating-point number has 10 bits of mantissa, so at sizes around +/- 4000 units, the precision would be +/- 2 units; certainly way too little precision to use.

Share this post


Link to post
Share on other sites
Quote:
Original post by graveyard filla
Basisor, i guess i just don't understand exactly how this all works. i mean, the whole XOR thing confuses me, really.

also, if you're not using all the bits in that integer, how do you convert the integer to something that is N bits big for sending over the wire?

btw: i thought the UDP packet minimum was 28 bytes?



as hplus said, i use ^ as power, not as XOR

well, the server and the client define a format for a vector

lets say

DELTA_FLOAT_VECTOR(7, N)

where N is the number of bits the integer uses and 7 is the exponent you multiply a float by

2^7 = 128

so you multiply a float by 128, which roughly moves the decimal point two places to the right:

ab.cd * 128 = ab.cd * 100 + ab.cd * 28 -> abcd.xx

so on the server and on the client you know the number of bits of the integer and the exponent, and you use them to convert abcd back to ab.cd (rounding error: loss of precision)

HL1 does this btw

now i think you wanted to know how to pack a 20-bit integer into a buffer, right?

ok, i have a gamestate class which sums up all the variables i send, and i get the number of bits needed to create the buffer:

iBits  = getBitsNeeded();
iBytes = iBits / 8;
if (iBits % 8) {   /* leftover bits that don't fill a whole byte? */
    ++iBytes;      /* add one byte so the buffer holds >= iBits   */
}

ok, now we have a buffer large enough to write the variables to. next you need a routine to write x bits of a variable into the buffer.

i don't have my code on this computer now, but i think it shouldn't be too difficult to figure out the rest on your own. my routine is approximately 8-10 lines of code, but still unoptimized.

if you wish i'll post it here, but you should try it on your own; it's nice practice :)
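In case it helps, here is a minimal sketch of such a routine (the names, the LSB-first layout, and the bit-at-a-time loop are my own choices for illustration, not Basisor's actual code):

```c
#include <stdint.h>
#include <stddef.h>

/* Minimal bit packer, LSB-first. One simple way to do it, not the only way. */
typedef struct {
    uint8_t *buf;     /* must be zero-initialized by the caller */
    size_t   bitpos;  /* next free bit index                    */
} BitWriter;

/* Append the low 'bits' bits of 'value' to the buffer. */
static void write_bits(BitWriter *w, uint32_t value, int bits)
{
    for (int i = 0; i < bits; ++i, ++w->bitpos)
        if (value & (1u << i))
            w->buf[w->bitpos / 8] |= (uint8_t)(1u << (w->bitpos % 8));
}

/* Read 'bits' bits back, advancing *bitpos. */
static uint32_t read_bits(const uint8_t *buf, size_t *bitpos, int bits)
{
    uint32_t v = 0;
    for (int i = 0; i < bits; ++i, ++*bitpos)
        if (buf[*bitpos / 8] & (1u << (*bitpos % 8)))
            v |= 1u << i;
    return v;
}
```

For example, a 20-bit position plus a 9-bit angle occupies 29 bits, which the iBytes computation above rounds up to 4 bytes.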
