OpenGL Type Traits

Published September 30, 2008 by Szymon Gatner, posted by Myopic Rhino
Anyone working with OpenGL knows that its API is, *ekhm*, not perfect. Its imperfections are especially visible to C++ programmers who (as I do) love type safety, which a C-style API often cannot guarantee when flexibility is at stake. This is why we often see code such as:

void someFunctionTakingArbitraryData(int dataTypeIdentifier, int count, void* dataPointer);

Because this function is meant to be used with different data types, it takes the buffer as a void*, which forces client code to drop precious type information. What does that mean? It means lots of errors at runtime, which is the worst possible outcome. Why? Here is an example:

vector<float> buffer;
someFunctionTakingArbitraryData(TYPE_FLOAT, buffer.size(), &buffer[0]);

Everything is fine - for now. The buffer is declared only one line above the function call, so an error is unlikely to happen. But this is not real life. More likely the buffer is passed in from a completely different context, and we have to remember how it was declared. Time passes, and a decision is made that the buffer should be filled with ints instead, or even worse, doubles, which most of the time behave the same as floats. The buffer declaration is changed. Click. Compilation runs smoothly thanks to the STL. Run. Bam! Nothing looks like it should, but why? Your application will most probably not crash; only unexpected program behavior tells us that something is wrong. Tracking such bugs is the worst part of coding, since they are trivial to fix but hard to find.
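To make the failure concrete, here is a minimal, self-contained sketch of the trap. The function and the TYPE_* constants are stand-ins invented for illustration, not a real API:

#include <vector>

enum { TYPE_FLOAT = 1, TYPE_DOUBLE = 2 }; // stand-ins for the real type identifiers

// Stub for the real entry point; it trusts dataTypeIdentifier blindly and would
// read the buffer as whatever that identifier claims it to be.
void someFunctionTakingArbitraryData(int /*dataTypeIdentifier*/, int /*count*/, void* /*dataPointer*/) {}

int main()
{
    // The declaration was changed from float to double...
    std::vector<double> buffer(64);

    // ...but this call still compiles and still says TYPE_FLOAT, so the callee
    // would happily read 32-bit floats out of 64-bit doubles at runtime.
    someFunctionTakingArbitraryData(TYPE_FLOAT, static_cast<int>(buffer.size()), &buffer[0]);
}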

But fear no more. The cure is there. Has always been. It's called C++.

The OpenGL API is built around functions like the one above. For example:

void glVertexPointer(GLint size, GLenum type, GLsizei stride, const GLvoid* pointer);

A call to it using the buffer declared earlier would be:

glVertexPointer(1, GL_FLOAT, 0, &buffer[0]);

Wouldn't it be nice to just write:

vertexPointer(1, 0, &buffer[0]);

and rely on the compiler to do the rest for us? It knows best what type the buffer really stores. It can be done, and it takes only a few lines of code to make this work forever. I called the technique OpenGL Type Traits. Type traits are a common C++ technique used to bind compile-time information to another compilation constant (more on that later). They are used widely in the STL, Boost, and basically every template-based library. Check the references to learn more.
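If the term is unfamiliar: the standard library itself is full of traits. std::iterator_traits, for instance, binds a value_type (among other things) to every iterator type, so a generic function can recover the element type without being told. A tiny sketch:

#include <iterator>

// sum() never names the element type; iterator_traits recovers it from the iterator.
template<typename It>
typename std::iterator_traits<It>::value_type sum(It first, It last)
{
    typedef typename std::iterator_traits<It>::value_type value_type;
    value_type total = value_type();
    for (; first != last; ++first)
        total += *first;
    return total;
}

// usage: std::vector<float> v; ... float s = sum(v.begin(), v.end());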

Typically this "compilation constant" is a type but it doesn't have to be. It can also be a constant number (surprising huh?). So where does this all lead us to? What we need is a way to bind together OpenGL constants and types that we use in a buffer passed in. This is where traits are needed. Lets take a look:

template<typename T> struct opengl_traits {};

template<> struct opengl_traits<unsigned int>
{
    enum { GL_TYPE = GL_UNSIGNED_INT };
};

What you see here is a static binding of the unsigned int type to the GL_UNSIGNED_INT constant. But there is more:

template<typename T>
void vertexPointer(GLint size, GLsizei stride, const T* pointer)
{
    glVertexPointer(size, opengl_traits<T>::GL_TYPE, stride, pointer);
}

Here we have glVertexPointer wrapped in a template function. Let's say we write the following code:

vector<unsigned int> buffer;
vertexPointer(3, 0, &buffer[0]);

The compiler will deduce the T parameter as unsigned int, look inside the opengl_traits<unsigned int> specialization for the nested GL_TYPE constant, and find the GL_UNSIGNED_INT value. What happens if the buffer declaration is changed to, say, int? A compiler error. Nice. To fix the error, another specialization is needed, and so on. This way, whenever we decide to change a buffer's content type, a valid type parameter is passed automatically. But there is one more thing left to cover: buffer objects. Since the OpenGL API says that glVertexPointer and friends take a NULL pointer (which as the literal 0 is implicitly of type int) when using VBOs, the code presented so far would not compile. Because the buffer's content type cannot be deduced from the passed-in parameter, the compiler has to be instructed some other way. For this purpose we need to create a template buffer object class. Additionally, it has to contain a nested typedef with the value type definition, for example:

template<typename T>
class BufferObject
{
public:
    typedef T value_type;
    // ... etc
};

We use such a VBO in this way:

typedef BufferObject<float> VBO;
VBO vbo(numElements);
// ...
vbo.use(); // do some setup and binding
vertexPointer<VBO>(3, 0, 0);

Again, a change in the VBO definition automagically results in a matching vertexPointer() call (one possible shape for this offset-taking overload is sketched below). In fact, std::vector also contains a nested "value_type" typedef (as does every other standard container) and can be used in the exact same way. Typically vertexPointer() would be placed inside some kind of renderer class, like so:

class OpenGlRenderer
{
    // ...
    template<typename T> void vertexPointer(GLint size, GLsizei stride, const T* pointer);
    template<typename T> void normalPointer(GLsizei stride, const T* pointer);
    // ... etc
};

This technique is not limited to OpenGL of course, but I came up with it while looking for a rendering bug, so it is presented this way. Even here there is room for improvement. For example, in my code I also have a GL_SIZE member in the opengl_traits specializations to remove the first glVertexPointer parameter (why this is useful is left as a reader's exercise - just think of a vector of vertex structs, for example ;) ). Even if this is not something you would use, I still hope you liked it. Feel free to contact me at szymon-dot-gatner-at-gmail-dot-com.
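For completeness, two small sketches of the pieces not spelled out above; treat them as one possible implementation, not the only one. First, the remaining opengl_traits specializations follow the exact same pattern as the unsigned int one:

template<> struct opengl_traits<float>          { enum { GL_TYPE = GL_FLOAT }; };
template<> struct opengl_traits<double>         { enum { GL_TYPE = GL_DOUBLE }; };
template<> struct opengl_traits<int>            { enum { GL_TYPE = GL_INT }; };
template<> struct opengl_traits<short>          { enum { GL_TYPE = GL_SHORT }; };
template<> struct opengl_traits<unsigned short> { enum { GL_TYPE = GL_UNSIGNED_SHORT }; };
template<> struct opengl_traits<unsigned char>  { enum { GL_TYPE = GL_UNSIGNED_BYTE }; };

Second, the offset-taking overload used with buffer objects could look like this, assuming the nested value_type typedef described above:

// Overload for VBO rendering: the buffer type is supplied explicitly, e.g.
// vertexPointer<VBO>(3, 0, 0), and the last argument is a byte offset into the
// currently bound buffer object instead of a pointer to client memory.
template<typename Buffer>
void vertexPointer(GLint size, GLsizei stride, GLint offset)
{
    typedef typename Buffer::value_type element_type;
    // The usual "buffer offset" trick: turn the integer offset into the pointer
    // value glVertexPointer expects while a VBO is bound.
    const GLvoid* gl_offset = static_cast<const char*>(0) + offset;
    glVertexPointer(size, opengl_traits<element_type>::GL_TYPE, stride, gl_offset);
}

Taking the offset as GLint rather than as a pointer is deliberate: it keeps the literal 0 from being treated as a null pointer and landing in the const T* overload.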

References:

http://www.generic-programming.org/languages/cpp/techniques.php#traits
