Quote: You need to create each level of the mipmap manually if you use the extension.
I don't think that is correct, at least according to the spec.
Look at the FBO spec and scroll down to "Usage Example 4." You will see that the first call to glGenerateMipmapEXT() immediately after the base-level allocation establishes the mipmap levels. In principle, once the texels at the base level have been specified, a second call to glGenerateMipmapEXT() actually calculates the mipmaps in hardware, down to the 1x1 mipmap or to the max level set by glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, max_level).
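In code, that two-call pattern would look roughly like this (a sketch only; the 256x256 GL_RGBA8 allocation and the max level of 4 are placeholder choices of mine, not from the spec example):

// Sketch of the two-call pattern from "Usage Example 4" (placeholder size/format).
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 4); // stop the chain at level 4
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glGenerateMipmapEXT(GL_TEXTURE_2D);   // first call: establishes levels 1..4
// ... upload or render the base level here ...
glGenerateMipmapEXT(GL_TEXTURE_2D);   // second call: filters the base level down the chain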
After many hours of playing around today, I do believe I have discovered a bug in NVidia's latest drivers (and older drivers too...). I found that glGetTexImage() is unreliable at fetching mipmaps generated with glGenerateMipmapEXT(), whereas explicitly attaching a particular texture level to an FBO attachment point and calling glReadPixels() returned non-garbage values.
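As a sketch of that readback path (assuming the 16x16 GL_LUMINANCE32F_ARB texture tex used in the snippets below; the level and buffer names are mine):

// Read mipmap 'level' back through an FBO instead of glGetTexImage().
int level = 1;
int w = 16 >> level, h = 16 >> level;   // level 1 of a 16x16 texture is 8x8
float *pixels = new float[w * h];
GLuint fbo;
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, tex, level);
glReadPixels(0, 0, w, h, GL_LUMINANCE, GL_FLOAT, pixels);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);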
I also found that you have to be careful when trying to use glGenerateMipmapEXT() with a texture that is not created by being rendered to. For example, the following gave me NaNs and divide-by-zeros in the mipmap levels:
// create a texture
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
float *data = new float[256];
for (int i = 0; i < 256; ++i)
    data[i] = (float)i;
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE32F_ARB, 16, 16, 0, GL_LUMINANCE, GL_FLOAT, data);
glGenerateMipmapEXT(GL_TEXTURE_2D);
and the following also failed:
// create a texture (storage only; texels uploaded later)
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
float *data = new float[256];
for (int i = 0; i < 256; ++i)
    data[i] = (float)i;
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE32F_ARB, 16, 16, 0, GL_LUMINANCE, GL_FLOAT, NULL);
glGenerateMipmapEXT(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0);

// later in the program
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 16, 16, GL_LUMINANCE, GL_FLOAT, data);
glGenerateMipmapEXT(GL_TEXTURE_2D);
// read texture level=1 and observe problems
However...
If I create a "dummy" FBO and attach my texture to it, all is well.
// create a texture (storage only; texels uploaded later)
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
float *data = new float[256];
for (int i = 0; i < 256; ++i)
    data[i] = (float)i;
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE32F_ARB, 16, 16, 0, GL_LUMINANCE, GL_FLOAT, NULL);
glGenerateMipmapEXT(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0);

// create a dummy FBO that I won't actually use...
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 666);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, tex, 0);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 16, 16, GL_LUMINANCE, GL_FLOAT, data);
glGenerateMipmapEXT(GL_TEXTURE_2D);
// read texture level=1 and observe the correct values
This is very strange, but it does seem to work. Instead of using glTexSubImage2D(), one can also populate the texels by drawing into the FBO as normal. The texels look fine if you then read them with glReadPixels() rather than glGetTexImage(). Driver bug?
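For completeness, a rough sketch of that draw-into-the-FBO variant, reusing the dummy FBO name 666 and the 16x16 texture from the snippet above (the actual draw calls are elided):

// Populate the base level by rendering, then mipmap it and read back level 1.
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 666);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, tex, 0);
glViewport(0, 0, 16, 16);
// ... draw into the base level as normal ...
glBindTexture(GL_TEXTURE_2D, tex);
glGenerateMipmapEXT(GL_TEXTURE_2D);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, tex, 1);   // now attach level 1
float level1[8 * 8];
glReadPixels(0, 0, 8, 8, GL_LUMINANCE, GL_FLOAT, level1);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);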