sense of GL variable types?
hello,
i was looking at a lot of source code for OpenGL programs and was wondering: what is the advantage of using GLvoid, GLuint, or GLfloat instead of plain void, unsigned int, or float?
and also is there an advantage if i write:
glScalef(2.0f,2.0f,2.0f);
instead of
glScalef(2,2,2);
thanks!
they probably did it that way in case the built-in types change (i.e. int is 32 bits today, but it might become 64 bits when 64-bit processors become commonplace), or for systems that don't have types of the same size.
Yes they do the typedefs for portability.
As for the second question:
No, there is no advantage to writing glScalef(2.0f, 2.0f, 2.0f) as opposed to glScalef(2, 2, 2);
This is because the compiler knows the function takes float arguments, and will convert your integer literals to float at compile time. Any reasonable compiler handles this implicit conversion.
Personally, I prefer to write glScalef(2.0f, 2.0f, 2.0f), but there is no actual advantage in terms of speed or anything like that.
Jeff
[edited by - duke on January 22, 2004 7:54:03 PM]
/*STRAIGHT FROM GL.H*/
typedef unsigned int    GLenum;
typedef unsigned char   GLboolean;
typedef unsigned int    GLbitfield;
typedef void            GLvoid;
typedef signed char     GLbyte;     /* 1-byte signed */
typedef short           GLshort;    /* 2-byte signed */
typedef int             GLint;      /* 4-byte signed */
typedef unsigned char   GLubyte;    /* 1-byte unsigned */
typedef unsigned short  GLushort;   /* 2-byte unsigned */
typedef unsigned int    GLuint;     /* 4-byte unsigned */
typedef int             GLsizei;    /* 4-byte signed */
typedef float           GLfloat;    /* single precision float */
typedef float           GLclampf;   /* single precision float in [0,1] */
typedef double          GLdouble;   /* double precision float */
typedef double          GLclampd;   /* double precision float in [0,1] */

/* Boolean values */
#define GL_FALSE 0x0
#define GL_TRUE  0x1
[Mercury Software] [Google!] [ Look I DONT Follow Trends ]
[edited by - DerAngeD on January 22, 2004 9:02:50 PM]
quote:Original post by DerAnged
/*STRAIGHT FROM GL.H*/
From gl.h on YOUR platform. May not be the same on all implementations on all platforms.
I think "Brother Bob" has the right point, because a typedef on its own (as in the code above) won't solve the portability problem.
One reason OpenGL has its own types is that the OpenGL specification has absolute precision requirements on each type, but the C and C++ specifications do not. For example, a GLint is required to be at least 32 bits long, including a sign bit, but int does not have to be 32 bits or longer. That means there must be platform-specific typedefs so the GL types meet the requirements.
It's not really about cross-platform portability, because the OpenGL specification does not guarantee fixed-size datatypes, only minimum sizes.
I'm a little confused about this datatype-size thing being resolved with typedefs. Maybe someone could help me out.
(Hypothetically)
If I ran a program today that I compiled back in the '80s, when integers were 16-bit, my processor would still see that variable as 16-bit, because the size of int was determined at compile time. If I were to recompile the same source today, the variable would be 32-bit, because my newer compiler treats int as 32 bits.
So if this is the case for even basic data types, why would it be any different if I renamed them GLint?