# Intergalactic coordinate system


##### Emergent
A long long is 64 bits. Your cube is then 2^64 light years on a side (that was the requirement you expressed, no?). A light year is just under 10^16 = 2^{16/log_10(2)} ~= 2^53 meters. Hence your cube is a little under 2^(64+53) = 2^117 meters on a side. Let's round 117 up to 128 and represent all your coordinates by 128-bit integers (which can be implemented as a class containing two long longs and having some overloaded operators). Then your possible positions will be a lattice with spacing between points of under half a millimeter.

##### rphillips
Emergent,

Thanks for replying. Originally I did want to use 128-bit integers to represent the information, as it's quite straightforward. I'll add that to the pool of things I'm going to try and see which of the methods works out best.

Thanks again for taking the time to reply.

##### Emergent

1.) Re.: 'not entirely clear on how you came to the conclusion that I would have "half milimetre" resolution':

I worked out above that your cube is roughly 2^117 meters on a side. So, it could be represented to a meter resolution with 117 bits per coordinate. If we instead use 128 bits, we have an extra 11 bits. This further subdivides each meter interval into 2^(-11)--meter intervals. 2^(-11) = 0.00048828125, which is just under 0.5*10^(-3) meters -- half a millimeter.

(EDIT: This is after a lot of rounding on my part, but I was always conservative with my rounding, so really treat the 0.5mm as an upper bound on the distance between grid points; you'll in fact do even better than this.)

2.) Re.: 'Roll my own [128-bit datatype]'

Basically you'd just implement the "gradeschool algorithms" for arithmetic. For instance, for addition, you'd add the lower ("least significant") bits, and carry any overflow to the upper bits, which you subsequently add... I'm not sure what reference to refer you to (I guess I learned this as a side effect of electronics courses?); maybe someone else can recommend something simple and to-the-point... I bet you that this is something you can figure out with a pencil and paper on your own though... (I may also have some example code kicking around I can post that you can learn from, though it's not specialized to the 128-bit case...)

[Edited by - Emergent on March 22, 2010 3:01:08 PM]

##### Aressera
Another option would be to use integer coordinates (32 or 64 bit) for "grid cell location" plus 32 bit floats for a local coordinate system. This might be faster than using large integer classes and would provide reasonable precision if you keep your grid cell size relatively small (~1km). It would require some logic overhead to keep track of the grid cell coordinates. Basically, if an object exceeds some distance from the origin along an axis, either add or subtract from the local coordinate position and subtract or add to the grid coordinate position.

Another advantage of this system is that you can keep your rendering data as 32 bit floats, though you would have to split objects into chunks smaller than the grid size or risk losing precision. Doing small-scale physics simulation is also made easier; most physics libraries use 32 bit floats internally.

If you used doubles instead for the local frame, your grid cells could be much larger, reducing the need for complex object splitting.

If you're looking to model the galaxy, you could use 64 bit grid coordinates and 64 bit floats for a total of 117 bits of precision (64 + 53 bit mantissa).

However, it might just be simpler in the long run to use a fixed point arithmetic scheme like Emergent described.

##### Emergent
A quick google turns up the MUNTL library, which looks like it probably does what you want. I haven't used it myself, but I get the impression that to use it you define integer sizes at compile time with template arguments, and that seems like a good fit for your case.

##### rphillips
Quote:
 Original post by Emergent
 Re. your PM... 1.) Re.: 'not entirely clear on how you came to the conclusion that I would have "half milimetre" resolution': I worked out above that your cube is roughly 2^117 meters on a side. [...] really treat the 0.5mm as an upper bound on the distance between grid points; you'll in fact do even better than this.

Yep, that's pretty much what I thought you were implying, but I didn't want to assume. Thanks for clarifying, and yes, that's the ideal. I had (again) intended to include a short integer to provide the sub-metre accuracy, but your way is more elegant.

Quote:
 Original post by Emergent
 2.) Re.: 'Roll my own [128-bit datatype]' Basically you'd just implement the "gradeschool algorithms" for arithmetic. For instance, for addition, you'd add the lower ("least significant") bits, and carry any overflow to the upper bits, which you subsequently add... [...]

Yup, I get where you're going. It's referred to in a lot of the papers I've studied over the last few days as "gradeschool math" or "school math". Basically, the simple algorithms go:

  1234
+ 5678
------
  6912

and then most of them split numbers up into chunks like:

12 34
56 78

or reverse one of them (I'm typing this from memory, so that might not be right; I'd have to check my notes, of which I'm accumulating quite a pile) and then do some bit rotation.

A lot of them (especially the arbitrary-precision math libs) have x number of elements in an array or linked list to represent the actual components of the type, plus some other temporaries. That's fine for arbitrary-precision math, because of course the designer has no idea of the complexity of what the end user is going to do.
However, I know that 128-bit integers are as high as I need (for now; I'm not completely stupid, I'm going to use two system-wide constants throughout the internals of the code, WORD_SIZE and MAX_INT, for future portability), so it is quite frustrating that the solution I need is less complex than most of the math libs provide, but just out of reach of my current compilers.

I get how the basic algorithms go. The more difficult ones, well, I guess that's down to elbow grease and patience. I'll get there :)

I've done a fair amount of research over the weekend (long weekend for me, happy days!) and I've covered the following topics:

1- Knuth (read relevant chapters from TAOCP and other books)

2- Examined a number of "BigInt" libraries and tools from various people

3- Looked at code from OpenSSL

4 - Looked at code from tomcrypt

5 - Examined "Bernstein" (using inline assembler for optimisation). Side note: as I have control over the hardware, assembly optimisations might not be a bad idea, since an add-with-carry can be performed directly on two 64-bit integers.

6 - Karatsuba algorithm

7 - Using the __int128_t type in gcc (as they are all 64-bit boxes). I can declare them and assign them (using hex only, for some reason) but not display them or anything, so I'm in some doubt as to how far along the __int128 implementation is in gcc, other than to say it appears to be only a built-in type in the compiler itself.

8 - GMP (far too much other stuff in there that I don't need, and it would pose a portability issue if I committed to it).

9 - NTL. Nice, but too transparent; again, it would pose a portability issue.

At some point, 128-bit integers will make it into mainstream use; until then, I need to roll my own, I guess. With that in mind, I'm hoping to abstract it out enough that what is presented to the coder is a datatype. No aunties' drawers or "write this as a string" nonsense.

That way - hopefully - I will be able to simply remove my custom type, leaving access to a native one.

Or not, and I'll lose my remaining hair (all 3 strands of it!).

RP

##### rphillips
Quote:
 Original post by Aressera
 Another option would be to use integer coordinates (32 or 64 bit) for "grid cell location" plus 32 bit floats for a local coordinate system. [...] However, it might just be simpler in the long run to use a fixed point arithmetic scheme like Emergent described.

Thanks for taking the time to reply.

If I'm reading right, what you're suggesting is splitting the universe up into zones (grid cells). I have designed the server side so that the client handlers already use a dynamic resolution system: their scope (the area of the universe they cover) and, ultimately, the number of clients they handle are adjusted according to feedback from the load controllers. As this happens at the server-side management level, I don't think it would cause any problems with another zoning scheme in place, but you are right when you say it would add some logic overhead, even if only at the management layer.

The rendering system is local to the game clients. Clients will not receive absolute coordinates in the form we're discussing here unless they are within terrestrial bounds (which is another story altogether). They do get coordinates, but these are for display purposes only, presented in a watered-down, user-friendly way (GLY-MLY-LY-KM-M whilst in space, and xxx,yyy,zzz whilst within 15,000 m of a celestial body), used primarily for their own navigation and mapping tools. Galaxy names and numbers are also used for this purpose, but not for internal positioning.

Whilst in space, the game clients are only fed updates on the positions of objects that are within visible range (in the case of celestial bodies, this is a significant distance). But the coordinate system used by the game client is a "local" interpretation of a _sector_ produced server-side, specifically relevant to that client (a simple case of a bounding box defined only as short integers of the form x, y and z).

##### JohnnyCode
Quote:
 1) "Static" (non-expanding) Universe, measuring n x n x n light years, where n is almost the absolute limit of the long long datatype.
 2) Numerous celestial bodies

 Data:
 1) There are 1000 m in a kilometre.
 2) There are 9 trillion km in a light year.

It would be better to create an elementary cube as double*double*double, where 1.0 equals one metre, to relate to your space.

I think that manually writing constants (9 trillion * 1000 m) will waste the precision or scale of the scalar datatypes you already use for the elementary cube vectors.

The n x n x n global vector should be double*double*double too, where 0.5 equals half an elementary cube edge (that's max double).

So the vector (10.5, 10.5, 10.5) is the top-right-front corner of the 10*10*10 cube. The mantissa (fractional) part addresses the position along the cube edge. So the global universe position of the vector is

(10.5, 10.5, 10.5) * 2 * halfcubeedge.

This should always equal the vector (0.5, 0.5, 0.5) * doublemax (a linear algebra axiom).
Your actual transform logic can happen only in double*double*double space anyway. Thus the vector (10.0, 10.0, 10.0) * halfcubeedge maps to (0, 0, 0) of your local cube vector, which is what you want.

You get a global precision of (lowest absolute double value) * doublemax of a metre.
I would give the cubes a float edge to get better precision; yes, each cube is smaller then, but you have maxdouble of them.

The vector I mentioned, (10.5, 10.5, 10.5), is a corner of the tenth cube, but how do you transform the vector (10.6, 10.6, 10.6) into the local space of the tenth cube (the one the player is in)? The solution is to make 0.5 equal to maxdouble/2; this way you can transform any reasonable neighbouring vector from a different cube into the local space of the cube the player is in.

(10.5, 10.5, 10.5) maps to the local vector (0.5, 0.5, 0.5) * cubeedge
(10.6, 10.6, 10.6) maps to the local vector (0.6, 0.6, 0.6) * cubeedge

This gives a correct scene around the player anywhere (provided his volume of scene is smaller than doublemax/2 along each axis).
