GLdoubles vs. GLfloats

Hey, if I'm running on a P4 system with a GeForce4 MX, do I take a performance hit if I do all my OpenGL rendering using GLdoubles instead of GLfloats? The obvious answer used to be yes, but I thought operations on doubles on the P4 were actually faster than on floats because of the way data is shuffled around inside the processor. Is there any advantage to using GLdoubles for things like matrix calculations instead of GLfloats?

Shedletsky's Bits: A Blog | ROBLOX | Twitter
Time held me green and dying
Though I sang in my chains like the sea...

Yep, you are getting 23.7 fps more
I don't see why you'd want to use doubles for graphics code. I'm pretty sure (almost 100%) that the data actually submitted to the graphics card is converted to single-precision floats, short ints, and bytes, so the only advantage you'd see is (at best) in matrix concatenation, because the vertices get multiplied by the modelview*projection matrix on the GPU nowadays, and that uses single-precision floats.
However, if you are multiplying many matrices together and suffer from a lack of precision, concatenate the matrices in double precision, convert the result to a single-precision float matrix, and upload that to the card.
Even if you're not doing vertex transformation in hardware (e.g. custom-written vertex shaders; the ones in the NVidia driver use singles as far as I know), there's not much point in using doubles: the precision gain is minimal, and once again you have to cast to single precision anyway.
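As a rough sketch, that concatenate-in-double-then-downcast idea could look like the following (the mat4d_mul helper and load_concatenated are hypothetical names written out for illustration; only the gl* calls are actual OpenGL):

#include <GL/gl.h>

/* Multiply two column-major 4x4 matrices in double precision: out = a * b. */
static void mat4d_mul(const GLdouble a[16], const GLdouble b[16], GLdouble out[16])
{
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row) {
            GLdouble sum = 0.0;
            for (int k = 0; k < 4; ++k)
                sum += a[k * 4 + row] * b[col * 4 + k];
            out[col * 4 + row] = sum;
        }
}

void load_concatenated(const GLdouble view[16], const GLdouble model[16])
{
    GLdouble md[16];
    GLfloat  mf[16];

    /* Do the concatenation at full double precision on the CPU. */
    mat4d_mul(view, model, md);

    /* Downcast exactly once, at the end of the chain. */
    for (int i = 0; i < 16; ++i)
        mf[i] = (GLfloat)md[i];

    /* Upload as floats; glLoadMatrixd would just be converted
       by the driver anyway. */
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(mf);
}

That way any rounding error accumulates in double precision across however many multiplies you chain, and you only pay the precision loss once at upload time.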

- JQ
~phil
Thanks, that makes a lot of sense.

