GLdoubles vs. GLfloats
Hey, if I'm running on a P4 system with a GeForce4 MX, do I take a performance hit if I do all my OpenGL rendering using GLdoubles instead of GLfloats? The obvious answer used to be yes, but I thought operations on doubles on the P4 are actually faster than on floats because of the way the data is shuffled around inside the processor.
Is there any advantage in using GLdoubles for things like matrix calculations instead of GLfloats?
I don't see why you'd want to use doubles for graphics code. I'm pretty sure (almost 100%) that the data actually submitted to the graphics card is converted to single-precision floats, short ints, and bytes. So the only place doubles could help is (at best) matrix concatenation on the CPU, because the vertices get multiplied with the modelview*projection matrix on the GPU nowadays, and that happens in single precision.
However, if you are concatenating many matrices and suffering from a lack of precision, do the multiplications in double precision, convert the final result to a single-precision-float matrix, and upload that to the card.
Even if you're not doing vertex transformation in hardware (e.g. because you're running a software transform path; the vertex shaders in the NVidia driver use singles as far as I know), there's not much point in using doubles, as the precision gain is minimal and, once again, you need to cast to single precision anyway.
- JQ