GLdouble vs double

Started by
5 comments, last by _the_phantom_ 19 years, 2 months ago
So I want to use my generic libraries in an OpenGL program, for vectors and matrices and whatnot. Thing is, all doubles are declared 'double', not GLdouble, and it spams me with warnings whenever I don't typecast when calling GL functions and passing doubles, not GLdoubles. Typecasting everywhere is ugly. Is there any way to stop the compiler bitching about this, considering GLdouble just seems to be a typedef'd double? cheers, Scott
The compiler might have an option to switch off that warning; however, that means you'll lose warnings for ALL conversions like that.

Also, as a side point, doubles aren't the best format to feed the graphics card; it much prefers floats, and you could be trashing your performance in the long run.
:O
seriously?
is that because the cards only do floating point in hardware, and have to split doubles into a couple of operations or something?
#pragma warning(disable:4251)

4251 might not be the number you should write there.
4244 ('conversion from double to float, possible loss of data') is the number for VC++ 7.

Quote:Original post by fosh
:O
seriously?
is that because the cards only do floating point in hardware, and have to split doubles into a couple of operations or something?


Yep, they only operate on floating point in hardware and have to convert doubles down to floats (64-bit down to 32-bit, IIRC) to use them. So you are using twice the space and probably losing performance.

This topic is closed to new replies.
