glBindBuffer zero target valid?

1 comment, last by LordJulian 11 years, 3 months ago

I usually bind buffer object zero to the target after doing some operation on a buffer. The purpose is to catch errors. Is this a reasonable strategy?

Does it have a performance cost?
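
For reference, the pattern I mean looks roughly like this (vbo and vertices are just placeholder names):

// Bind the buffer, upload data, then bind zero so that any later call
// that assumes a bound buffer fails with GL_INVALID_OPERATION instead of
// silently modifying the wrong buffer.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);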

When I run gDEBugger, it tells me that glBindBuffer(XXX, 0) is deprecated:

The function glBindBuffer uses an object that was not generated by OpenGL. This feature was marked as deprecated in OpenGL version 3.0 and was removed from OpenGL at version 3.1

Looking at the OpenGL specification, however, binding 0 appears to be allowed:

The value zero is reserved, but there is no default buffer object for each buffer object target. Instead, buffer set to zero effectively unbinds any buffer object previously bound, and restores client memory usage for that buffer object target (if supported for that target).

And

While buffer object name zero is bound, as in the initial state, attempts to modify or query state on the target to which it is bound generates a GL_INVALID_OPERATION error.

Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/

Seems like a gDEBugger issue; it isn't identifying this case correctly.

If you're concerned about performance, you could #ifdef that zero-buffer-binding line so it only runs in debug builds (see the sketch below). That way you keep the error-catching behavior while debugging, and once you're sure there are no issues, you no longer pay for the unbind in release builds.
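
Something along these lines (a sketch; UNBIND_GL_BUFFERS is just an example macro name, define it however your build setup marks debug builds):

#ifdef UNBIND_GL_BUFFERS                     // e.g. defined only in debug builds
    #define UNBIND_BUFFER(target) glBindBuffer((target), 0)
#else
    #define UNBIND_BUFFER(target) ((void)0)  // compiles away in release builds
#endif

// Usage, after whatever work you did on the buffer:
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(vertices), vertices);
UNBIND_BUFFER(GL_ARRAY_BUFFER);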

As for the actual performance delta, ALWAYS MEASURE. Run a test scene that stresses that particular binding/unbinding part of your code, both with and without that line, and time the difference (a rough timing sketch follows below). Ask a friend with a different video card to run the same test and compare the data. Repeat the test many times for consistency, then make the decision based on those numbers.
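
For the measurement itself, something crude like this is enough for an A/B comparison (a sketch; it assumes a working GL context and loader are already set up, that vbo holds at least 1024 floats, and it uses glFinish as a blunt synchronization point):

#include <chrono>
#include <cstdio>
#include <vector>
// plus your GL loader header (GLEW, GLAD, ...) for glBindBuffer etc.

void timeBufferUpdates(GLuint vbo, bool unbindEachTime)
{
    const int kIterations = 100000;
    std::vector<float> data(1024, 0.0f);

    glFinish();                               // let earlier GPU work drain first
    auto start = std::chrono::high_resolution_clock::now();

    for (int i = 0; i < kIterations; ++i)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferSubData(GL_ARRAY_BUFFER, 0,
                        data.size() * sizeof(float), data.data());
        if (unbindEachTime)
            glBindBuffer(GL_ARRAY_BUFFER, 0); // the line being measured
    }

    glFinish();                               // wait for the GPU to actually finish
    auto end = std::chrono::high_resolution_clock::now();
    double ms = std::chrono::duration<double, std::milli>(end - start).count();
    std::printf("unbind=%d: %.2f ms for %d iterations\n",
                (int)unbindEachTime, ms, kIterations);
}

Run it once with unbindEachTime = false and once with true, ideally on more than one GPU, and compare the numbers.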

