Each camera has its own rendering pass... hence each one can produce a different axis length without affecting any other camera.
Scaling under a perspective projection and then un-scaling is a bit wasteful; how about just using an orthographic projection in the first place?
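A minimal sketch of that orthographic alternative (`ortho_matrix` is a hypothetical helper, not from the thread): build the same column-major matrix glOrtho() would, and draw the gizmo with it so there is no perspective foreshortening to compensate for in the first place.

```c
#include <assert.h>

/* Fill a column-major 4x4 matrix with the same orthographic projection
   glOrtho(l, r, b, t, n, f) would produce.  Drawing the axis gizmo with
   this matrix sidesteps foreshortening entirely, so no distance-based
   rescaling is needed. */
static void ortho_matrix(float m[16],
                         float l, float r, float b, float t,
                         float n, float f)
{
    for (int i = 0; i < 16; ++i) m[i] = 0.0f;
    m[0]  =  2.0f / (r - l);          /* x scale */
    m[5]  =  2.0f / (t - b);          /* y scale */
    m[10] = -2.0f / (f - n);          /* z scale (into -1..1 depth) */
    m[12] = -(r + l) / (r - l);       /* x translation */
    m[13] = -(t + b) / (t - b);       /* y translation */
    m[14] = -(f + n) / (f - n);       /* z translation */
    m[15] =  1.0f;
}
```

You would typically load this for the gizmo pass only, then restore the scene's perspective matrix.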
Raydog, I think you are misunderstanding some concepts of 3D graphics here: the objects that are drawn don't have to have any connection to actual 'things'. You can arbitrarily scale or otherwise transform coordinates as they enter the pipeline, and the change only applies to that one rendition (in fact, this is the whole point behind shaders).
btw, I do need to use this in a perspective view, so that's probably where the errors are coming from. Every 3D modelling program I know of (LightWave, 3ds Max, etc.) draws something like this when you select an object.
There could be a precision problem (rounding/truncating) in your calculation of 'd', so the length may vary. Instead of multiplying the axis length by the camera's distance from the axes, I multiply it by the zoom factor; the length then remains "rock-solid constant".
I don't know what you mean by zoom factor. This is a perspective view. The camera I use has a world-space position, and that's what I use to set up the view. I use floating-point precision.
zedzeek, if LENGTHOFVECTOR returns the square-rooted distance (the true length, not the squared length), and line_length is a constant, then that's exactly what I'm doing. d *= 0.125f is just an optimization of d /= 8;