redeemer90

Problem with mipmapping in OpenGL



Hello everyone. This should be a fairly simple problem. Anyway, here goes: I have written my own function to generate mipmaps. The function looks like this:
DWORD GenerateMipmaps (const DWORD width, const DWORD height, const DWORD num,
                       const int imgFormat, const GLenum glFormat, const BYTE *lpData)
{
	DWORD		nCount = 1, dwWidth = width >> 1, dwHeight = height >> 1;
	const DWORD		bpp = (IL_RGBA == imgFormat) ? 4 : 3;
	BYTE		*imgTempData = 0;
	BOOL		res = 1;

	imgTempData = (BYTE *)malloc (dwWidth*dwHeight*bpp);
	glTexImage2D(GL_TEXTURE_2D, 0, glFormat, width, height, 0, imgFormat, GL_UNSIGNED_BYTE, lpData);

	while (nCount < num && dwWidth && dwHeight)
	{
		gluScaleImage(imgFormat, width, height, GL_UNSIGNED_BYTE, lpData, dwWidth, dwHeight, GL_UNSIGNED_BYTE, imgTempData);
		glTexImage2D(GL_TEXTURE_2D, nCount, glFormat, dwWidth, dwHeight, 0, imgFormat, GL_UNSIGNED_BYTE, imgTempData);

		dwHeight >>= 1;
		dwWidth >>= 1;
		++nCount;
	}

	if (imgTempData)
	{
		free (imgTempData);
		imgTempData = 0;
	}
	return res;
}
The above function is called as follows:

// Other tex settings
if ((dwFlags & TEX_USE_MIPMAPS) && (nMipMaps > 0))
{
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
	GenerateMipmaps (width, height, nMipMaps, imgFormat, format, lpData);
}
else
{
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
	glTexImage2D(GL_TEXTURE_2D, 0, format, width, height, 0, imgFormat, GL_UNSIGNED_BYTE, lpData);
}

The plain glTexImage2D path works perfectly, but when the GenerateMipmaps function is called I just get white blocks on the screen. I also tried various blending-mode options using glTexEnvf, but all of them result in the same problem. Also, when I changed the line

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);

to

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

the texture started rendering (though obviously not mipmapped). The textures are powers of 2, so I don't think that is the problem either. Any help will be appreciated.

Thanks,
- Sid

You're falling foul of "texture completeness". Some sections from the OpenGL spec (version 1.5):
Quote:
A mipmap is an ordered set of arrays representing the same image; each array has a resolution lower than the previous one. If the image array of level level_base, excluding its border, has dimensions 2^n × 2^m × 2^l, then there are max{n, m, l} + 1 image arrays in the mipmap.

Quote:
Level-of-detail numbers proceed from level_base for the original texture array through p = max{n, m, l} + level_base, with each unit increase indicating an array of half the dimensions of the previous one as already described. All arrays from level_base through q = min{p, level_max} must be defined, as discussed in section 3.8.10.

Quote:
A texture is said to be complete if all the image arrays and texture parameters required to utilize the texture for texture application are consistently defined. The definition of completeness varies depending on the texture dimensionality. For one-, two-, or three-dimensional textures, a texture is complete if the following conditions all hold true:
• The set of mipmap arrays level_base through q (where q is defined in the Mipmapping discussion of section 3.8.8) were each specified with the same internal format.
• The border widths of each array are the same.
• The dimensions of the arrays follow the sequence described in the Mipmapping discussion of section 3.8.8.
• level_base <= level_max
• Each dimension of the level_base array is positive

Quote:
If one-, two-, or three-dimensional texturing (but not cube map texturing) is enabled for a texture unit at the time a primitive is rasterized, if TEXTURE_MIN_FILTER is one that requires a mipmap, and if the texture image bound to the enabled texture target is not complete, then it is as if texture mapping were disabled for that texture unit.


That's a little dense, so I'll try and expand it for you.
  1. If your level 0 texture has dimensions 2^n × 2^m then OpenGL defines a value p, the last texture level, as max(n, m).

  2. OpenGL also defines a value q, which is the minimum of p and the maximum number of texture levels the GL can handle.

  3. If you want to use mipmapping, then all texture levels from 0 to q must be provided.

  4. If you fail to provide all the texture levels, then the GL acts as if texturing were disabled.


The simplest solution would probably be to eliminate the num parameter from your GenerateMipmaps function and always generate the full chain of levels.
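To make the level count from points 1 and 2 concrete, here is a minimal standalone sketch (plain C++, no GL calls; the function name is my own) that computes how many levels a complete pyramid needs for a power-of-two base texture:

```cpp
#include <algorithm>

// Number of mipmap levels a complete pyramid needs for a
// width x height base image: levels 0 .. max(n, m), i.e.
// max(n, m) + 1 arrays, the last one being 1x1.
unsigned MipLevelCount(unsigned width, unsigned height)
{
    unsigned levels = 1; // level 0 (the base image)
    while (width > 1 || height > 1)
    {
        // Halve each side, but never let it drop below 1.
        width  = std::max(width  >> 1, 1u);
        height = std::max(height >> 1, 1u);
        ++levels;
    }
    return levels;
}
```

For example, a 256×256 base needs 9 levels (256, 128, ..., 1), and so does a 256×4 base: the longer side dictates p, which is why a num parameter that stops early leaves the texture incomplete.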

Enigma

Quote:
Original post by _DarkWIng_
A bit OT, but why don't you use GL_SGIS_generate_mipmap to generate mipmaps, as it's supported on just about everything from the TNT up?

The typical reason for using your own mipmap-creation algorithm is that you can use much more sophisticated filtering schemes than generate_mipmap. The latter is commonly implemented with a simple bilinear downsampler. In many (most) cases this is fine, but for some texture types advanced filtering will give better visual results.

Although in the case of the OP, the SGIS extension would be just fine, as he is using the common glu bilinear filtering kernel anyway.

redeemer: yes, your problem is texture pyramid completeness. Every mipmap hierarchy needs to be provided down to the 1×1 map. When you supply a non-square texture, your algorithm will right-shift one dimension down to 0 while the other side is still larger than 1. At that point your loop exits, but OpenGL is still expecting data. For it to work right, you need to do something like this:


while (1)
{
	// Scale and upload current level here...
	if (dwHeight == 1 && dwWidth == 1)
		break;
	dwHeight = (dwHeight > 1) ? dwHeight / 2 : 1;
	dwWidth  = (dwWidth  > 1) ? dwWidth  / 2 : 1;
}
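For illustration, here is the same clamped halving wrapped in a small standalone helper (my own, GL-free) that enumerates every level's dimensions; note how a plain `>> 1` on both sides of an 8×2 texture would have produced a zero height after the second level:

```cpp
#include <vector>
#include <utility>

// Enumerate every mip level's dimensions, clamping each side at 1,
// exactly as the loop above does. For an 8x2 base this yields
// (8,2) (4,1) (2,1) (1,1), always terminating at the 1x1 level.
std::vector<std::pair<unsigned, unsigned>>
MipChain(unsigned w, unsigned h)
{
    std::vector<std::pair<unsigned, unsigned>> chain;
    for (;;)
    {
        chain.emplace_back(w, h);
        if (w == 1 && h == 1)
            break;
        w = (w > 1) ? w / 2 : 1;
        h = (h > 1) ? h / 2 : 1;
    }
    return chain;
}
```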

Quote:
Original post by Yann L
The typical reason of using your own mipmap creation algorithm is that you can use much more sophisticated filtering schemes than generate_mipmap.


I totally understand this. I use manually constructed mipmaps for normal maps and normalization cubemaps, for example. But the reason I mentioned this extension is because he was using gluScaleImage, which basically does the same thing.

Hello,

Before anything else, I wanted to thank everyone for the overwhelming response to my question.

The main reason I chose to build my own mipmaps is so that I have the option of changing the filters at every level. Besides, I would also like the flexibility of not being too dependent on the glu library.

Finally, the problem did turn out to be pyramid completeness, and the mipmaps now work perfectly.

Just out of curiosity: if the mipmaps need to be provided down to the 1×1 level, why does the
int gluBuild2DMipmaps(GLenum target, GLint components, GLint width, GLint height, GLenum format, GLenum type, const void *data)
function take the GLint components argument?

- Sid
