another glGenerateMipmap question

5 comments, last by ndhb 15 years, 11 months ago
I am trying to implement a simple example using glGenerateMipmapEXT, and I am failing miserably without knowing why. Graphics card: Nvidia GeForce 8800 GTX, latest drivers. To test glGenerateMipmapEXT, I'm simply following the example in the framebuffer_object spec: create a texture, render to it, then generate mipmaps. Here's the texture and FBO creation:
[Source]
	glGenTextures(1, &tex);
	glBindTexture(GL_TEXTURE_2D, tex);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST);
	glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16, 16, 16, 0, GL_RGB, GL_FLOAT, NULL);
	glGenerateMipmapEXT(GL_TEXTURE_2D);		
	glBindTexture(GL_TEXTURE_2D, 0);

	glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 20);
	glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, tex, 0);
	GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
	cout << "fbo status = " << status << endl; // always OK
	glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
[/Source]
Now that the FBO and texture are set up, I render to it by drawing a full-screen quad with a fixed color (glColor3f(.3, .4, .5);). After rendering, I do:
[Source]
        glBindTexture(GL_TEXTURE_2D, tex);
	glGenerateMipmapEXT(GL_TEXTURE_2D);
[/Source]
followed by a series of calls to glGetTexImage() to inspect the mipmaps. What I get is garbage in all levels except level 0 and the final level. I somehow doubt this is a driver bug, because it is such a simple example. Any suggestions? Thanks. [Edited by - execute42 on May 27, 2008 1:38:44 PM]
You need to create each level of the mipmap manually if you use the extension.
Quote:You need to create each level of the mipmap manually if you use the extension.


I don't think that is correct, at least according to the spec.
Look at the FBO spec and scroll down to "Usage Example 4." You will see that the first call to glGenerateMipmapEXT(), made immediately after the base level is allocated, establishes the mipmap levels. In principle, once the texels of the base level have been specified, a second call to glGenerateMipmapEXT() actually computes the mipmaps in hardware, down to the 1x1 level or to the max level set with glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, max_level);

After many hours of experimenting today, I believe I have discovered a bug in Nvidia's latest drivers (and in older drivers too...). I found that glGetTexImage() is unreliable at fetching mipmaps generated with glGenerateMipmapEXT(), whereas explicitly attaching a particular texture level to an FBO attachment point and calling glReadPixels() returned non-garbage values.

I also found that you have to be careful when using glGenerateMipmapEXT() with a texture that is not populated by rendering to it. For example, the following gave me NaNs and divide-by-zeros in the mipmap levels:
[Source]
// create a texture
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
float *data = new float[256];
for (int i = 0; i < 256; ++i) data[i] = (float)i;
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE32F_ARB, 16, 16, 0, GL_LUMINANCE, GL_FLOAT, data);
glGenerateMipmapEXT(GL_TEXTURE_2D);
[/Source]


and the following also failed:
[Source]
// create a texture
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
float *data = new float[256];
for (int i = 0; i < 256; ++i) data[i] = (float)i;
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE32F_ARB, 16, 16, 0, GL_LUMINANCE, GL_FLOAT, NULL);
glGenerateMipmapEXT(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0);

// later in the program
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 16, 16, GL_LUMINANCE, GL_FLOAT, data);
glGenerateMipmapEXT(GL_TEXTURE_2D);
// read texture level=1 and observe problems
[/Source]


However...
If I create a "dummy" FBO and attach my texture to it, all is well.
[Source]
// create a texture
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
float *data = new float[256];
for (int i = 0; i < 256; ++i) data[i] = (float)i;
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE32F_ARB, 16, 16, 0, GL_LUMINANCE, GL_FLOAT, NULL);
glGenerateMipmapEXT(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0);

// create a dummy FBO that I won't actually use...
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 666);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, tex, 0);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 16, 16, GL_LUMINANCE, GL_FLOAT, data);
glGenerateMipmapEXT(GL_TEXTURE_2D);
// read texture level=1 and observe the correct values
[/Source]


This is very strange, but it does seem to work. Instead of using glTexSubImage, one can populate the texels of the texture by just drawing into the FBO like normal. The texels look ok if you then use glReadPixels() and not glGetTexImage(). Driver bug?
Hmm. I think I experienced the same problem some weeks ago: I was getting garbage when retrieving texels from the mipmap levels with glGetTexImage. It wasn't a driver bug as such, just an "undocumented feature". In the driver version I use, glGenerateMipmapEXT respects the GL_GENERATE_MIPMAP_HINT (which defaults to GL_FASTEST). I was able to retrieve the correct texels from each mipmap level once I changed the hint to GL_NICEST: GL11.glHint(GL14.GL_GENERATE_MIPMAP_HINT, GL11.GL_NICEST).

I found that the TexParameter GL14.GL_GENERATE_MIPMAP (so-called automatic mipmap generation in the spec) ignores the GL14.GL_GENERATE_MIPMAP_HINT hint. The garbage texels only occurred when the mipmap chain was generated with the glGenerateMipmap function call.

By the way, I understand the specification on glGenerateMipmapEXT the same as you.
"glHint(GL_GENERATE_MIPMAP_HINT, GL_NICEST)"

I didn't even know this existed! Actually, I don't make use of glHint at all in my programs.

Anyway, as I've told others, don't use glGenerateMipmapEXT on a standard texture

[Source]
// create a texture
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
float *data = new float[256];
for (int i = 0; i < 256; ++i) data[i] = (float)i;
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE32F_ARB, 16, 16, 0, GL_LUMINANCE, GL_FLOAT, data);
glGenerateMipmapEXT(GL_TEXTURE_2D);
[/Source]


because on ATI it doesn't seem to work. I don't know which hardware and drivers, but there are reports that glGenerateMipmapEXT(GL_TEXTURE_2D) does nothing and the mipmaps are full of wrong texels, even when you use the texture to render an object.
If you use glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE), it works fine.

So, in my opinion, glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE) is for standard textures and
glGenerateMipmapEXT(GL_TEXTURE_2D) is for FBOs (as long as you call it at the right moment).
Sig: http://glhlib.sourceforge.net
an open source GLU replacement library. Much more modern than GLU.
float matrix[16], inverse_matrix[16];
glhLoadIdentityf2(matrix);
glhTranslatef2(matrix, 0.0, 0.0, 5.0);
glhRotateAboutXf2(matrix, angleInRadians);
glhScalef2(matrix, 1.0, 1.0, -1.0);
glhQuickInvertMatrixf2(matrix, inverse_matrix);
glUniformMatrix4fv(uniformLocation1, 1, FALSE, matrix);
glUniformMatrix4fv(uniformLocation2, 1, FALSE, inverse_matrix);
Quote:I found that the TexParameter, GL14.GL_GENERATE_MIPMAP (so called automatic mipmap generation, in the spec) ignores the hint to GL14.GL_GENERATE_MIPMAP_HINT. The garbage texels only occured when the mipmap chain was generated with the glGenerateMipmap function call.

I will give this a try.

Quote:So, in my opinion, glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE) is for standard textures and
glGenerateMipmapEXT(GL_TEXTURE_2D) is for FBOs (as long as you call it at the right moment).

In principle, I agree with this. GL_GENERATE_MIPMAP=true does produce correct mipmaps when the base-level texture is updated. However, when turning this feature on, I noticed a huge performance hit. I have a piece of software that displays frames from a video stream (1280x1024); including the texture upload time, it runs at about 20-25 Hz. After enabling auto-mipmap generation, texture upload slowed so much that I could only run at 2-3 Hz. The likely explanation is that Nvidia is filtering the images in software. But with glGenerateMipmapEXT() the mipmapping seems to be fully hardware accelerated, and the time it takes to calculate the levels is negligible compared to the upload time of the base image.
glGenerateMipmapEXT was fast for me too (faster than the TexParameter)... until I changed the hint to avoid the garbage texels in some of the levels. Then both took about the same time to generate the mipmap chain. I therefore assume automatic mipmap creation (with the TexParameter) always generates the full chain and doesn't cut corners the way manual creation (glGenerateMipmapEXT) does with the default hint.

This topic is closed to new replies.
