Archived

This topic is now archived and is closed to further replies.

ehmdjii

sense of GL variable types?

Recommended Posts

ehmdjii    238
hello, i was looking at lots of source code for OpenGL programs and was wondering: what is the advantage of using GLvoid, GLuint, or GLfloat instead of void, unsigned int, or float? and also, is there an advantage if i write glScalef(2.0f, 2.0f, 2.0f); instead of glScalef(2, 2, 2);? thanks!

krez    443
they probably did it that way in case the built-in variable types change size (i.e. int is 32 bits today, but it might become 64 bits when 64-bit processors become commonplace), or for systems that don't have types of the same size.

duke    107
Yes, they do the typedefs for portability.

As for the second question:

No, there is no advantage to writing glScalef(2.0f, 2.0f, 2.0f) as opposed to glScalef(2, 2, 2).

This is because the compiler knows the function takes GLfloat arguments, and will convert your integer literals to floats at compile time. This assumes your compiler is not completely broken.

Personally, I prefer to write glScalef(2.0f, 2.0f, 2.0f), but it is not actually an advantage in terms of speed or anything like that.
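
Here is a minimal sketch (plain C, assuming a standard GL/gl.h header) of what I mean; both calls end up identical because the prototype is known at compile time:

#include <GL/gl.h>

void scale_demo(void)
{
    /* The prototype is glScalef(GLfloat, GLfloat, GLfloat), so the
       integer literals below are converted to 2.0f at compile time. */
    glScalef(2, 2, 2);

    /* Exactly the same call after that conversion. */
    glScalef(2.0f, 2.0f, 2.0f);
}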

Jeff

[edited by - duke on January 22, 2004 7:54:03 PM]

Deranged    668
/*STRAIGHT FROM GL.H*/

typedef unsigned int GLenum;
typedef unsigned char GLboolean;
typedef unsigned int GLbitfield;
typedef void GLvoid;
typedef signed char GLbyte; /* 1-byte signed */
typedef short GLshort; /* 2-byte signed */
typedef int GLint; /* 4-byte signed */
typedef unsigned char GLubyte; /* 1-byte unsigned */
typedef unsigned short GLushort; /* 2-byte unsigned */
typedef unsigned int GLuint; /* 4-byte unsigned */
typedef int GLsizei; /* 4-byte signed */
typedef float GLfloat; /* single precision float */
typedef float GLclampf; /* single precision float in [0,1] */
typedef double GLdouble; /* double precision float */
typedef double GLclampd; /* double precision float in [0,1] */


/* Boolean values */
#define GL_FALSE 0x0
#define GL_TRUE 0x1
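
Just to illustrate the portability angle, here is a hypothetical platform header (a sketch only, not taken from any real gl.h, and PLATFORM_WITH_16BIT_INT is a made-up macro) showing how a port could keep the GL names while mapping them to different underlying types:

/* Hypothetical platform header -- not from any real gl.h. */
#if defined(PLATFORM_WITH_16BIT_INT)    /* made-up configuration macro */
typedef long GLint;             /* long is at least 32 bits in C */
typedef unsigned long GLuint;
#else
typedef int GLint;              /* int is 32 bits on this platform */
typedef unsigned int GLuint;
#endif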


[edited by - DerAngeD on January 22, 2004 9:02:50 PM]

remi    150
I think "Brother Bob" gad the rigth point, because "typedef" on it''s own(according to the codes above) won''t solve the portability problem

Brother Bob    10344
One reason OpenGL has its own types is that the OpenGL specification has absolute precision requirements on each type, but the C and C++ specifications do not. For example, a GLint is required to be at least 32 bits long, including a sign bit, but an int does not have to be 32 bits or longer. That means there must be platform-specific typedefs so the GL types meet the requirements.

It's not really about cross-platform portability, because the OpenGL specification does not guarantee fixed-size datatypes, only minimum sizes.
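
As a minimal sketch (plain C; the negative-size array trick is just one way to do a compile-time check), an implementation could verify that its chosen typedef meets the 32-bit minimum for GLint like this:

#include <limits.h>

typedef int GLint;   /* the platform's chosen underlying type */

/* Fails to compile if GLint is narrower than 32 bits. */
typedef char glint_meets_minimum_size
    [(sizeof(GLint) * CHAR_BIT >= 32) ? 1 : -1];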

Enfekted    122
I'm a little confused about this datatype size thing being resolved with typedefs. Maybe someone could help me out.

(Hypothetically)
If I ran a program today that I compiled back in the '80s, when integers were 16 bits, my processor would still see that variable as a 16-bit variable, because the size of int was determined at compile time. If I were to recompile the same source today, the variable would be 32 bits, because my newer compiler treats int as 32 bits.

So if this is the case for even basic data types, why would it be any different if I renamed them GLint?

krez    443
quote:
Original post by Enfekted
I'm a little confused about this datatype size thing being resolved with typedefs. Maybe someone could help me out.

say you wrote a program on platform A, whose int size is 32 bits, so you just used "int" instead of GLint. now you want to compile your code on platform B, whose int size is 16 bits. BAM, you are screwed; you should have used GLint, which would be typedef'ed to a 32-bit type on platform B.
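
For a concrete example (a minimal sketch, assuming a standard GL/gl.h header): glGetIntegerv writes through a GLint pointer, so declaring the variable as GLint is correct on every platform, while a plain int would break on platform B above:

#include <GL/gl.h>

void query_demo(void)
{
    GLint maxTextureSize;   /* matches whatever GLint is on this platform */
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);

    /* If this were a 16-bit int, the driver's 32-bit write would
       overrun the variable. */
}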
