Textures display as white boxes

11 comments, last by AndyEsser 13 years, 7 months ago
First off, I'm new to using OpenGL.
I'm loading my textures from SDL, saving them in a texture class and then displaying them with an inventively named 'display' function. But instead of displaying the images, it displays a white box the size of the image. I'll include all the code I think is relevant.

Initialization:
   if(SDL_Init(SDL_INIT_EVERYTHING) == -1)
      throw("Failed to initialize");

   SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);

   if(bWindowed)
      m_sdlScreen = SDL_SetVideoMode(nScreenWidth, nScreenHeight, nScreenBPP, SDL_OPENGL | SDL_SWSURFACE | SDL_RESIZABLE);
   else
      m_sdlScreen = SDL_SetVideoMode(nScreenWidth, nScreenHeight, nScreenBPP, SDL_OPENGL | SDL_SWSURFACE | SDL_RESIZABLE | SDL_FULLSCREEN);

   if(!m_sdlScreen)
      throw("Failed screen creation");

   glEnable(GL_TEXTURE_2D);

   // Set clear colour
   glClearColor(0.5, 0.5, 0.5, 0);
   glViewport(0, 0, nScreenWidth, nScreenHeight);
   glClear(GL_COLOR_BUFFER_BIT);

   // Set projection
   glMatrixMode(GL_PROJECTION);
   glLoadIdentity();
   glOrtho(0, nScreenWidth, nScreenHeight, 0, -1, 1);

   // Set model view matrix
   glMatrixMode(GL_MODELVIEW);
   glLoadIdentity();

   if(glGetError() != GL_NO_ERROR)
      throw("Error initializing OpenGL");

   SDL_WM_SetCaption(chHeader, NULL);


Texture class constructor: it takes a file name, which it uses to load an SDL_Surface which it then converts to a texture (or not, as the case may be).
The Texture class has width, height and a GLuint pointer as members.
strFilename is the string passed, btw.
   SDL_Surface *p_sdlLoadedImage = LoadImage(strFilename);

   // Check for errors
   if(p_sdlLoadedImage == NULL)
      throw("Loaded image is NULL");
   if((p_sdlLoadedImage->w & (p_sdlLoadedImage->w - 1)) != 0)
      throw("Texture image width is not a power of 2");
   if((p_sdlLoadedImage->h & (p_sdlLoadedImage->h - 1)) != 0)
      throw("Texture image height is not a power of 2");

   SDL_Surface *sdlImage = SDL_CreateRGBSurface(SDL_SWSURFACE, p_sdlLoadedImage->w, p_sdlLoadedImage->h, WINDOW.GetScreenBPP(),
#if SDL_BYTEORDER == SDL_LIL_ENDIAN // OpenGL RGBA masks
      0x000000FF,
      0x0000FF00,
      0x00FF0000,
      0xFF000000
#else
      0xFF000000,
      0x00FF0000,
      0x0000FF00,
      0x000000FF
#endif
      );

   m_nWidth = sdlImage->w;
   m_nHeight = sdlImage->h;
   if(sdlImage == NULL)
      throw("Error creating surface code: 666");

   // Save the alpha blending attributes
   Uint32 uSavedFlags = p_sdlLoadedImage->flags & (SDL_SRCALPHA | SDL_RLEACCELOK);
   Uint8 uSavedAlpha = p_sdlLoadedImage->format->alpha;
   if((uSavedFlags & SDL_SRCALPHA) == SDL_SRCALPHA)
      SDL_SetAlpha(p_sdlLoadedImage, 0, 0);

   // Copy the surface into the GL texture image
   SDL_Rect sdlOffset;
   sdlOffset.x = 0;
   sdlOffset.y = 0;
   SDL_BlitSurface(p_sdlLoadedImage, NULL, sdlImage, &sdlOffset);

   // Restore the alpha blending attributes
   if((uSavedFlags & SDL_SRCALPHA) == SDL_SRCALPHA)
      SDL_SetAlpha(p_sdlLoadedImage, uSavedFlags, uSavedAlpha);

   GLuint *glTexture;

   // Create an OpenGL texture for the image
   glBindTexture(GL_TEXTURE_2D, *glTexture);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
   glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, p_sdlLoadedImage->w, p_sdlLoadedImage->h,
      0, GL_RGBA, GL_UNSIGNED_BYTE, sdlImage->pixels);

   SDL_FreeSurface(sdlImage);
   SDL_FreeSurface(p_sdlLoadedImage);

   mp_glTexture = glTexture;


Then in my main class' constructor there's this:
   for(int aaa = 0; aaa < nNumFiles; aaa++)
   {
      mp_vsdlResources.push_back(new Texture(astrFiles[aaa]));
   }


The image display function:
void DisplayTexture(const int &nX, const int &nY, Texture *p_sdlSource, SDL_Rect *p_sdlClip)
{
   bool bClip = true;
   if(p_sdlClip->x < 0 && p_sdlClip->y < 0) // Show whole thing
      bClip = false;

   // Bind the texture to which subsequent calls refer
   glBindTexture(GL_TEXTURE_2D, *p_sdlSource->GetTexture());

   glBegin(GL_QUADS);

   // Top-left vertex (corner)
   if(bClip)
      glTexCoord2f(p_sdlClip->x / p_sdlSource->GetWidth(), p_sdlClip->y / p_sdlSource->GetHeight());
   else
      glTexCoord2i(0, 0);
   glVertex3f(nX, nY, 0);

   // Top-right vertex (corner)
   if(bClip)
      glTexCoord2f((p_sdlClip->x + p_sdlClip->w) / p_sdlSource->GetWidth(), p_sdlClip->y / p_sdlSource->GetHeight());
   else
      glTexCoord2i(1, 0);
   glVertex3f(nX + p_sdlClip->w, nY, 0);

   // Bottom-right vertex (corner)
   if(bClip)
      glTexCoord2f((p_sdlClip->x + p_sdlClip->w) / p_sdlSource->GetWidth(), (p_sdlClip->y + p_sdlClip->h) / p_sdlSource->GetHeight());
   else
      glTexCoord2i(1, 1);
   glVertex3f(nX + p_sdlClip->w, nY + p_sdlClip->h, 0);

   // Bottom-left vertex (corner)
   if(bClip)
      glTexCoord2f(p_sdlClip->x / p_sdlSource->GetWidth(), (p_sdlClip->y + p_sdlClip->h) / p_sdlSource->GetHeight());
   else
      glTexCoord2i(0, 1);
   glVertex3f(nX, nY + p_sdlClip->h, 0);

   glEnd();
}
You mention constructors a few times; are you sure the textures are loaded after the rendering context is created, and not before? No OpenGL command can be issued before the rendering context exists, which I assume happens in SDL_SetVideoMode.
Yeah, the initialization is done in a global class, so it should be created before anything else.

What hardware are you using? Specifically, what graphics card and driver? Some integrated chipsets (such as those found in low-end workstations and laptops) don't support textures with non-power-of-two (NPOT) widths and heights.

What size is the texture you're trying to use?
I'm not using glGenTextures() either, because I heard you can manage the textures yourself, and that made more sense to me. Is this right?
Why? Why would you possibly want to go through the hassle of generating unique IDs and keeping track of them, when a simple call to glGenTextures() does all that for you?

Use the tools you are provided with. glGenTextures() is not something to be avoided.
I was under the impression glGenTextures generated handles for the textures... It generates unique IDs? Are those the IDs you are supposed to use when calling glBindTexture() or glTexImage2D(), then?
Here is some code below that should generate a texture, take away from it what you will. I've not used SDL, but that just provides the data, the OpenGL part is the same.

GLuint textureID = 0;
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, someData);


Note that width, height, and someData need to be substituted for their SDL counterparts.

This will take 24-bit RGB data and turn it into a texture.

What you then want to do when you're rendering is do

glBindTexture(GL_TEXTURE_2D, textureID);
// Draw primitive

Hope this helps. Also please note that this is code I've written off the top of my head, so please excuse any typos or small anomalies, but the theory is there and should help you out.
OK, I think I've got it working now. Thanks a lot for the help :)

I was thinking glGenTextures generated a handle for the texture (not an ID) and that you used the handle in functions like glBindTexture(), when you are supposed to use the ID instead. Why I thought an unsigned int represented an actual texture I don't know, but I did...

Thanks again :)
Well the ID is effectively a handle. Once you've created the texture, and passed the data to OpenGL, the only thing you will need to refer to is the ID that glGenTextures() returns.

This topic is closed to new replies.
