fosh

OpenGL GLdouble vs double


So I want to use my generic libraries in an OpenGL program, for vectors and matrices and whatnot. Thing is, all my doubles are declared 'double', not GLdouble, and the compiler spams me with warnings whenever I don't typecast when calling GL functions and passing doubles instead of GLdoubles. Typecasting everywhere is ugly. Is there any way to stop the compiler complaining about this, considering GLdouble just seems to be a typedef'd double? Cheers, Scott

The compiler might have an option to switch off that warning, but that means you'll lose the warning for ALL conversions like that, not just the GL ones.

Also, as a side point, doubles aren't the best format to feed the graphics card; it much prefers floats, and you could be trashing your performance in the long run.

:O
Seriously? Is that because the cards only do single-precision floating point in hardware, and have to split doubles into a couple of operations or something?

Quote:
Original post by fosh
:O
Seriously? Is that because the cards only do single-precision floating point in hardware, and have to split doubles into a couple of operations or something?


Yep, they only operate on single-precision floating point in hardware and have to convert doubles down to floats (64-bit down to 32-bit, IIRC) to use them. So you're using twice the space and probably losing performance.
