Geometrian

OpenGL Loading Textures


Hi,

I am currently porting some of my Python OpenGL work to C++. Right now, I'm having problems with loading 2D textures. I am using SDL_image.

The loading code is quite complicated, but for debugging I've simplified the glTexImage2D(...) call (by substituting constants for the variables) to:
	glTexImage2D(GL_TEXTURE_2D, 0, 3, data_surf2->w, data_surf2->h, 0,
	             GL_RGB, GL_UNSIGNED_BYTE, data_surf2->pixels);

For testing purposes, I'm using a stock image: http://mooigoed.files.wordpress.com/2009/11/cheese_oh_cheese1.jpg. The visual result in my test program:

[Screenshot: the texture renders incorrectly.]
Naturally, I tried various values for glPixelStorei(GL_UNPACK_ALIGNMENT,___), but the result is always incorrect.
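Roughly, those attempts looked like this (a sketch; 1, 2, 4, and 8 are the only legal alignment values):

	// Sketch of the alignment attempts.
	glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  // also tried 2, 4, and 8
	glTexImage2D(GL_TEXTURE_2D, 0, 3, data_surf2->w, data_surf2->h, 0,
	             GL_RGB, GL_UNSIGNED_BYTE, data_surf2->pixels);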

What to do?
Thanks,
-G

Update:

It appears that the third parameter "3" is incorrect. The surface returned by IMG_Load(...) does, in fact, have three bytes per pixel, but I then convert it with SDL_DisplayFormat(...), and the resultant surface has four bytes per pixel. This is confusing to me because it ought not to be adding an alpha channel--that's what SDL_DisplayFormatAlpha(...) is for!
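A quick way to see the mismatch (a sketch; the variable names are just for illustration):

	// Sketch: print bytes-per-pixel before and after SDL_DisplayFormat().
	SDL_Surface* loaded    = IMG_Load(data_path.c_str());
	SDL_Surface* converted = SDL_DisplayFormat(loaded);
	cout << (int)loaded->format->BytesPerPixel << " -> "
	     << (int)converted->format->BytesPerPixel << endl;  // prints "3 -> 4" here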

If you're using a display format with an alpha channel, then SDL_DisplayFormat should give you an alpha channel. The point of the function is to give you a surface formatted for the fastest possible blit to the display. SDL_DisplayFormatAlpha will give you an alpha channel whether or not your display format has one.
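For example, something along these lines picks the upload format from whatever surface SDL actually returns instead of hard-coding it (a sketch; "loaded" is an illustrative name, and the mask check assumes a little-endian machine):

	// Sketch: inspect the converted surface and choose a matching OpenGL pixel format.
	SDL_Surface* surf = SDL_DisplayFormatAlpha(loaded);  // always 32-bit, with an alpha channel
	GLenum ext_format;
	if (surf->format->Rmask == 0x000000FF) {
		ext_format = GL_RGBA;  // bytes in memory: R,G,B,A
	} else {
		ext_format = GL_BGRA;  // bytes in memory: B,G,R,A (typical for SDL_DisplayFormatAlpha
		                       // on x86; GL_BGRA needs OpenGL 1.2+ or GL_EXT_bgra)
	}
	glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, surf->w, surf->h, 0,
	             ext_format, GL_UNSIGNED_BYTE, surf->pixels);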

That seemed to be the problem. After much deliberation, generating data_surf2 now looks like this:

	data_surf = IMG_Load(data_path.c_str());
	if (!data_surf) {
		cout << "Error: image could not be loaded; " << SDL_GetError() << endl;
	}

	// Rectangles used to flip the image vertically while blitting
	// (OpenGL expects the first row of data to be the bottom row of the image).
	SDL_Rect src_rect, dst_rect;
	src_rect.x = 0; src_rect.y = 0; src_rect.w = data_surf->w; src_rect.h = data_surf->h;
	dst_rect.x = 0; dst_rect.y = 0; dst_rect.w = data_surf->w; dst_rect.h = data_surf->h;

	// Save the alpha/colorkey state so it can be restored after the blit
	// (only the alpha-related flag bits are needed for the restore).
	Uint32 saved_flags = data_surf->flags & (SDL_SRCALPHA | SDL_RLEACCEL);
	Uint8  saved_alpha = data_surf->format->alpha;
	Uint32 saved_colorkey;
	if (uses_colorkey) { saved_colorkey = data_surf->format->colorkey; }

	if (format == GLLIB_RGB) {
		// 24-bit destination surface, no alpha mask.
		data_surf2 = SDL_CreateRGBSurface(SDL_SWSURFACE, data_surf->w, data_surf->h, 24,
		#if SDL_BYTEORDER == SDL_LIL_ENDIAN
			0x000000FF, 0x0000FF00, 0x00FF0000, 0x00000000
		#else
			0xFF000000, 0x00FF0000, 0x0000FF00, 0x00000000
		#endif
		);
	} else if (format == GLLIB_RGBA) {
		// 32-bit destination surface with an alpha mask.
		data_surf2 = SDL_CreateRGBSurface(SDL_SWSURFACE, data_surf->w, data_surf->h, 32,
		#if SDL_BYTEORDER == SDL_LIL_ENDIAN
			0x000000FF, 0x0000FF00, 0x00FF0000, 0xFF000000
		#else
			0xFF000000, 0x00FF0000, 0x0000FF00, 0x000000FF
		#endif
		);
		// Disable per-surface alpha blending so the alpha channel is copied verbatim.
		SDL_SetAlpha(data_surf, 0, data_surf->format->alpha);
		if (uses_colorkey) {
			SDL_SetColorKey(data_surf, SDL_SRCCOLORKEY,
			                SDL_MapRGB(data_surf->format, colorkey[0], colorkey[1], colorkey[2]));
		}
	}

	// Blit one row at a time, flipping the image vertically.
	for (int blit_y = 0; blit_y < data_surf->h; blit_y++) {
		src_rect.y = blit_y;                    src_rect.h = 1;
		dst_rect.y = data_surf->h - blit_y - 1; dst_rect.h = 1;
		SDL_BlitSurface(data_surf, &src_rect, data_surf2, &dst_rect);
	}

	// Restore the original alpha/colorkey state.
	if (format == GLLIB_RGBA) {
		SDL_SetAlpha(data_surf2, SDL_SRCALPHA, data_surf2->format->alpha);
		SDL_SetAlpha(data_surf, saved_flags, saved_alpha);
		if (uses_colorkey) { data_surf->format->colorkey = saved_colorkey; }
	}
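For completeness, the matching upload then looks roughly like this (a sketch):

	// Sketch of the matching upload; the external format now agrees with the surface built above.
	// (SDL_CreateRGBSurface pads each row to a 4-byte boundary, which should match GL's default
	// GL_UNPACK_ALIGNMENT of 4.)
	if (format == GLLIB_RGB) {
		glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, data_surf2->w, data_surf2->h, 0,
		             GL_RGB, GL_UNSIGNED_BYTE, data_surf2->pixels);
	} else {  // GLLIB_RGBA
		glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, data_surf2->w, data_surf2->h, 0,
		             GL_RGBA, GL_UNSIGNED_BYTE, data_surf2->pixels);
	}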
Thanks,
G

You should definitely be using GL_RGBA instead of 3 as your internalformat, and if possible have your source data in GL_BGRA format:

http://www.opengl.org/wiki/Common_Mistakes#Texture_upload_and_pixel_reads
Quote:
And if you are interested, most GPUs like chunks of 4 bytes. In other words, RGBA or BGRA is preferred. RGB and BGR are considered bizarre, since most GPUs, most CPUs and any other kind of chip don't handle 24 bits. This means the driver converts your RGB or BGR to what the GPU prefers, which typically is BGRA.


Also note: http://www.opengl.org/wiki/Common_Mistakes#Paletted_textures
Quote:
Notice that the format is GL_BGRA. As explained before, most GPUs prefer the BGRA format; using RGB, BGR and RGBA results in lower performance.
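Concretely, the suggested call would look something like this (a sketch; it assumes the pixel data is laid out B,G,R,A in memory, and GL_BGRA needs OpenGL 1.2+ or GL_EXT_bgra):

	// Sketch: GL_RGBA internal format with BGRA-ordered source data, per the advice above.
	glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, data_surf2->w, data_surf2->h, 0,
	             GL_BGRA, GL_UNSIGNED_BYTE, data_surf2->pixels);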

Quote:
You should definitely be using GL_RGBA instead of 3 as your internalformat
Ah yes; that was just a remnant from testing.
Quote:
Notice that the format is GL_BGRA. As explained before, most GPUs prefer the BGRA format; using RGB, BGR and RGBA results in lower performance.
Wow; I didn't know that! Thanks,
