Huge world size

27 comments, last by RichardS 19 years, 1 month ago
Quote:
but it's *such* hard going ;)
...
Moving the code over is proving to be one hell of a refactoring session!! :(


So you see, you cannot avoid this dilemma:

- "increase my knowledge": take a bit more time to understand the internal format of floating-point numbers, which is well standardized (IEEE 754). Then you may only need to add a few conversions here and there in your code, at a few strategic points. It's really not that hard to understand.

Consider that I discovered it by myself a long time ago, wondering what floats looked like in memory. I found fantastic opportunities to exploit it, and exciting perspectives in the days of software rendering. With the docs available on the web, it should be easy. Or learn empirically, as I did: write a small C console program to watch how the integer representation of a float behaves (use a bitfield or a byte copy) under +, *, /, etc. Once you understand things like the "Carmack" rsqrt trick, you are done with it, and you'll find many opportunities to exploit it in the future ;)

- "let's do it now" : refactor your code the way you described it here.

Which do you think will take the least time for you? I cannot answer for you. But since you wrote "hell"...
"Coding math tricks in asm is more fun than Java"
@Charles B:
Why offset coordinates? I can't see how it reduces errors.

As far as i can tell it only reduces error differences.
Ooh, cosmic karma... I bought an excellent book called "Real-Time Collision Detection" the other day. And look who should turn up and help me out ;)

*ahem* anyway, just curious for an answer to Trap's last question... *bump* in other words...
Quote:Original post by Trap
@Charles B:
Why offset coordinates? I can't see how it reduces errors.

You are right, it does not necessarily reduce absolute metric errors. In most cases it will increase them.

Quote:
As far as i can tell it only reduces error differences.

But it will make them coherent, and decrease *noticeable* errors. That is, as you rightly pointed out, it reduces the differences in accuracy that can cause nasty effects:
- graphical artefacts (black pixels);
- jerking vertices at short range;
- non-reproducible physical behaviour.

This may sound like clutching at straws, but first, it's not just any offset. It's a very precisely defined bias, something like 2^n or 1.5*2^n. If the solution that mixes integers (or fixed point) for world coords and floating point for vectors works, then biasing the floats achieves exactly the same result, with more convenience and better performance, because only floats are used.

So it's more generally a question of discretizing the coordinates of the static scenery objects and avoiding cracks at their junctions.
This reduces the number of low-order bits and thus reduces dropped bits, that is, precision losses.

Next, the bias also constrains the intermediate computations of the 3D transforms to more stable metric accuracy, hence fewer spatial discontinuities (jerks, black pixels, etc.). But that's only a sketch in broad strokes.

So if one still wants to see in detail what discretization brings, which artefacts it eliminates and under which conditions, be aware that this requires a very precise and tedious mathematical demonstration, plus many technological considerations, to understand fully. I could probably do it here if someone wants to go deeper down this track. I did refresh my memory on paper, with graphics, matrices, comments on 3D APIs, etc., but I am a bit too lazy to translate all that stuff into ASCII and type it up unless it really interests someone.

The best advice I can give is to experiment with the idea first, and see empirically whether it produces any effect. It's a minimal re-engineering job for such a test: use a macro, and set the bias to 0 to get back to and compare with the original version. Then, if required, come back here to understand the theory and tricks deeply enough to make the solution robust.
"Coding math tricks in asm is more fun than Java"
Quote:Original post by Charles B
If the solution that mixes integers (or fixed points) for world coords and floating points for vectors works, then biasing the floats achieves exactly the same result, with more convenience and perfs because only floats are used.


Nonsense. The range and precision of the two systems are greatly different, so biasing of floats most certainly does not achieve exactly the same result.

Quote:Original post by Christer Ericson
Quote:Original post by Charles B
If the solution that mixes integers (or fixed points) for world coords and floating points for vectors works, then biasing the floats achieves exactly the same result, with more convenience and perfs because only floats are used.


Nonsense. The range and precision of the two systems are greatly different, so biasing of floats most certainly does not achieve exactly the same result.

I meant algorithmically, not in terms of absolute metric range/precision. My sentence could be confusing, I admit. Of course 32-bit biased floats have a 23-bit mantissa. But you can also use 64-bit biased doubles, that is, a 52-bit mantissa, far more than enough to cover the needs of any planetary god in a 3D game. Doubles and floats work hand in hand on the CPU and with the GPU, whereas 64-bit integers don't and require conversions, if you want to compare comparable things.

I mentioned all these things numerous times in this thread.

What limits you is usually not the metric range/precision of the world coords. If your 3D API manipulates 32-bit floating point internally, using 64-bit integers changes nothing: at some point your 64-bit integers will be converted to floating point anyway. Either you do some relative-to-camera work yourself (fine, but slower) to reduce the number of bits, or your input coords will in fact be rounded, the 12 lower bits dropped, to 52-bit-mantissa doubles (or even to 23-bit-mantissa floats). At the very best you can hope that the 3D API accepts 64-bit integers and that the transforms are coded so that intermediates fit in 80-bit floating-point registers; then there is really no significant loss. But to my knowledge there is no easy way to be sure of that. So what's the point of using 64-bit integers if it is slower, clumsier to code, and no more accurate than doubles? Ah, they are discretized (of homogeneous precision)! And here we are.

I hope you followed me through the technical details here. Or maybe I missed something about the 64-bit integer solution.


And then what counts is the variation in magnitude of the coords. If you mix near-zero coords with bigger coords, you'll have problems: each time you cross a 2^n boundary, you get a different precision loss. For instance, if two objects follow different transform paths, common vertices (junctions) may no longer coincide and create visible cracks (black pixels).

So, back to 32-bit biased floats: it's as if one used 23-bit integers. And 23 bits are enough to map roughly 8 km at 1 mm precision, or 80 km at 1 cm precision. That already makes a big scene, with enough precision for terrain vertices.

By crying nonsense so quickly, I am not certain you fully understood what discretized coords potentially bring in terms of numerical and graphical stability.

To sum it all up: "discretizing means forcing less precision in order to render more accurately". ;)

Just in case I repeat :
"This reduces the number of low order bits and thus reduces dropped bits, that is precision losses."

I also said I can detail the issue in many contexts (whether one uses local coords or monolithic world coords, depending on the 3D hardware or driver implementation, etc.), if anyone is interested. I am less interested in losing time on pointless polemics or misunderstandings (partially my fault, then). I hope it's not all nonsense ;)

[Edited by - Charles B on February 23, 2005 8:01:59 AM]
"Coding math tricks in asm is more fun than Java"
Quote:Original post by Charles B
To sum it all up: "discretizing means forcing less precision in order to render more accurately". ;)


Do you mean here that although the absolute positions may be less accurate, they will all be subject to the SAME inaccuracy (direction & magnitude of quantisation), hence removing rendering inaccuracies otherwise seen (cracks, shaking, etc.)?
Exactly, but under certain conditions. It would be tedious to go into the details of what goes on exactly in the rendering pipeline; it can help, but it depends on how you define the transformation groups.

Still, I maintain that this solution at least replaces the int64 solution with more convenience and efficiency, except that you have a range-to-precision ratio of 2^52 (for doubles) instead of 2^64 for int64s. But 52 bits are far more than enough for typical huge scenes.

[Edited by - Charles B on February 23, 2005 8:26:49 AM]
"Coding math tricks in asm is more fun than Java"
I've been following this thread with interest, as I'm running into similar problems (at very fine detail in a large world, geometry jumps around as the camera moves). I really had no working knowledge of floats, but I've been working my way through the "What Every Computer Scientist Needs..." paper.

I'd like to clarify that the precision of floats/doubles increases as the value gets closer to 0 (because roughly half of the exponent range represents negative powers of two). Correct?

I'm working on a full-Earth renderer. Right now I'm handling vertex positions and texture coordinates differently: vertex positions are passed to the GPU as normalized vectors, and the height is determined within the shader. I'm using a scale of 1 GL unit == 1 km (so most vertex positions end up with a magnitude of roughly 6000). Because of the precision issue mentioned above, should I expect more precise rendering if I keep all of the geometry within 1 unit of the origin throughout the rendering pipeline?

I'm also thinking of trying a technique I saw elsewhere, and briefly mentioned in this thread:
The 3 corner vertices of each LOD chunk are stored in double precision, the vertices internal to the chunk are stored as barycentric coords (in floats), and the final position is computed on the GPU. This allows entire chunks to be drawn in the camera's coordinate space by subtracting the camera position from only the 3 corner vertices. As I understand it, the advantage of this approach is that the object-to-camera translation happens in double precision rather than single, but that doesn't account for any rotation between the two coordinate spaces. What's the big advantage here?

This topic is closed to new replies.
