Problem loading a particular .png file

7 comments, last by Monkey_Missile 10 years, 11 months ago

I've been using SDL with openGL to create a project and for some reason using IMG_Load followed by SDL_DisplayFormatAlpha causes a crash only when I use the attached file. I'm doing this in Code::Blocks so I'm not sure if the file is just too big or what. Any guesses as to what might be doing this and how to resolve it?

[attachment=14870:background.png]


What kind of GPU do you have?

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

I've never used SDL, but that sounds like a loading bug, possibly a buffer overflow that isn't caught until later.

A stack trace and any reported errors would be helpful.

Things to try in the meantime:

#1: Check the format of the .PNG (bits per pixel) and save it as 32 bits per pixel if it is not already.

#2: Modify the image to see which part of it is related to the crash.

#3: Save it as a .BMP or .TGA and see if it still happens.

These are, of course, only ways of sidestepping the problem, but they would at least give you some idea of where it lies.
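For #1, you don't even need an image library to check the stored bit depth: PNG records it in the IHDR chunk immediately after the 8-byte signature. Here's a minimal sketch in plain C++ that reads the raw header bytes (field offsets per the PNG specification; `pngHeaderInfo` is just an illustrative helper name):

```cpp
#include <cstddef>
#include <cstdint>

// Reads bit depth and color type out of a raw PNG header buffer.
// Layout: 8-byte signature, 4-byte chunk length, 4-byte "IHDR",
// 4-byte width, 4-byte height, then bit depth and color type.
// Returns false if the buffer is too short or is not a PNG.
bool pngHeaderInfo(const unsigned char *buf, size_t len,
                   int &bitDepth, int &colorType) {
    static const unsigned char sig[8] =
        {0x89, 'P', 'N', 'G', 0x0D, 0x0A, 0x1A, 0x0A};
    if (len < 26) return false;
    for (int i = 0; i < 8; ++i)
        if (buf[i] != sig[i]) return false;
    bitDepth  = buf[24];   // bits per channel (1/2/4/8/16)
    colorType = buf[25];   // 2 = RGB, 3 = paletted, 6 = RGBA, ...
    return true;
}
```

Color type 6 with bit depth 8 is the 32-bits-per-pixel RGBA layout you want; anything else (paletted, grayscale, 16-bit channels) would need conversion before upload.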

L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid

My thought is - because it's a 1024x800 image - the OP may have an older GPU that doesn't support non-power-of-two textures. Now, OpenGL shouldn't cause a crash on an attempt to load that (error behaviour being well-specified) but that assumes a conformant (and non-buggy) driver, and we all know how many of those exist (hint: none).

Of course, we do need more info from the OP to diagnose this properly.
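If NPOT support is the suspect, it's easy to test the dimensions up front rather than trusting the driver. A small sketch, using the standard bit trick (a power of two has exactly one set bit):

```cpp
#include <cstdint>

// True if n is a nonzero power of two: such a value has exactly
// one bit set, so n & (n - 1) clears it to zero.
bool isPowerOfTwo(uint32_t n) {
    return n != 0 && (n & (n - 1)) == 0;
}

bool textureSizeIsPOT(uint32_t w, uint32_t h) {
    return isPowerOfTwo(w) && isPowerOfTwo(h);
}
```

A 1024x800 image fails this check (800 is not a power of two), so on hardware without `GL_ARB_texture_non_power_of_two` you'd pad or rescale before calling glTexImage2D.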


I'm using an NVIDIA GeForce GTX 560 Ti. I've changed the image to 1024x1024 and still no dice, plus other non-power-of-two images still work. Changing it to a .bmp works, but only at 32 bit; 16 and 24 bit still crash. Also, from what I gather .png files can only be 8 bit, but that can't be it, since the other, smaller 48x48 .png texture works fine.

I was hoping to avoid .bmp files due to their size, so I'll still try fixing this before resorting to that. If the problem is in the part where OpenGL is being initialized, I'm not sure where to look. The init code is as follows:


    //Initialize SDL
    if(SDL_Init(SDL_INIT_EVERYTHING) < 0){
        return false;
    }

    //Set OpenGL memory usage
    SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 8);
    SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 8);
    SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 8);
    SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE, 8);
    SDL_GL_SetAttribute(SDL_GL_BUFFER_SIZE, 32);
    SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 16);
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);

    //Set window caption
    SDL_WM_SetCaption("Test", NULL);

    //Size of the window
    if(SDL_SetVideoMode(SCREEN_WIDTH,SCREEN_HEIGHT,SCREEN_BPP,SDL_OPENGL) == NULL){
        return false;
    }

    //Specify the clear color and what portion of the screen will display
    glClearColor(0,0,0,1);
    glViewport(0,0,SCREEN_WIDTH,SCREEN_HEIGHT);

    //Shader model
    glShadeModel(GL_SMOOTH);

    //2D rendering
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();

    //Depth testing only used with 3D rendering
    glDisable(GL_DEPTH_TEST);

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

How are you loading the PNG? Also, are you making sure the PNG is being converted to 32-bit RGBA? If you're using libpng directly and you're using the "high level" loading functions, you can pass some flags to get it to convert to that format for you.

What format is this PNG, by the way?

PS: PNG supports a ton of formats (paletted, grayscale, and RGB, each with or without alpha, with 8-bit or 16-bit components — or even 4-bit in one case). Be careful. Luckily you can always have it converted to 32-bit RGBA, so don't worry much about that if you tell libpng to do the conversion for you.

Don't pay much attention to "the hedgehog" in my nick, it's just because "Sik" was already taken =/ By the way, Sik is pronounced like seek, not like sick.

Ugh, sorry about the delay. Here's the image load function; which part would let me convert the format?


GLuint mainApp::OnLoad(const std::string &fileName){
    SDL_Surface *image = IMG_Load(fileName.c_str());

    SDL_DisplayFormatAlpha(image);
    unsigned object(0);

    glGenTextures(1, &object);
    glBindTexture(GL_TEXTURE_2D, object);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image->w, image->h, 0, GL_RGBA, GL_UNSIGNED_BYTE, image->pixels);

    SDL_FreeSurface(image);
    return object;
}

Your image doesn't have an alpha channel, but you're uploading it as though it does.


GLenum format = GL_RGB;  // fallback so format is never left uninitialized

if(image->format->BytesPerPixel == 3) format = GL_RGB;
else if(image->format->BytesPerPixel == 4) format = GL_RGBA;

glTexImage2D(GL_TEXTURE_2D, 0, format, image->w, image->h, 0, format, GL_UNSIGNED_BYTE, image->pixels);

The SDL_Surface's format could be something else too, like BGR, but this is just an example. Look up the SDL_Surface and glTexImage2D documentation and you should find good information about the formats.
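A slightly fuller version of that mapping, as a sketch: it also distinguishes BGR-ordered surfaces by checking the surface's red channel mask. The `GL_*` values are copied from gl.h so the snippet stands alone (in real code just include the GL headers), and `pickPixelFormat` is an illustrative helper, not an SDL or GL function:

```cpp
#include <cstdint>

// GLenum values copied from gl.h / glext.h so this sketch is self-contained.
enum : uint32_t {
    GL_RGB  = 0x1907,
    GL_RGBA = 0x1908,
    GL_BGR  = 0x80E0,
    GL_BGRA = 0x80E1,
};

// Pick the glTexImage2D source format from an SDL_Surface-style
// description: bytes per pixel plus the red channel mask.
// Returns 0 for layouts this sketch does not handle.
uint32_t pickPixelFormat(int bytesPerPixel, uint32_t redMask) {
    if (bytesPerPixel == 3)
        return (redMask == 0x000000FF) ? GL_RGB : GL_BGR;
    if (bytesPerPixel == 4)
        return (redMask == 0x000000FF) ? GL_RGBA : GL_BGRA;
    return 0;  // paletted / 16-bit surfaces: convert before uploading
}
```

In the loader you'd call this with `image->format->BytesPerPixel` and `image->format->Rmask`, and bail out (or convert with SDL_ConvertSurface) when it returns 0 instead of passing a bogus format to glTexImage2D.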

Derp

Great feedback! Thanks I will use this.

This topic is closed to new replies.
