OpenGL glTexImage2D causing a SEGFAULT


Hello all. I have been working with OpenGL for many years, but I have never had a problem this strange. Can anyone spot what is causing the SEGFAULT in my code below? Note: if I scale my malloc to allocate twice as much memory, the SEGFAULT goes away. Note also that I am wrapping OpenGL because my program can use a number of different "contexts". One more note: none of the glyph textures in my font are power-of-two textures (they all have non-power-of-two dimensions), but I checked, and my video card does report GL_ARB_texture_non_power_of_two.



nstd::sint32 font_face::load_glyph(font_glyph &glyph)
{
    nstd::uint32 iTexI;
    printf("Gen textures\n");
    context::gen_textures(1, &iTexI);
    printf("Bind texture\n");
    glyph.user_index = iTexI;
    context::bind_texture(NGUI_TEXTURE_2D, iTexI);

    context::tex_parameterf(NGUI_TEXTURE_2D, NGUI_TEXTURE_WRAP_S, NGUI_CLAMP);
    context::tex_parameterf(NGUI_TEXTURE_2D, NGUI_TEXTURE_WRAP_T, NGUI_CLAMP);

    context::tex_parameteri(NGUI_TEXTURE_2D, NGUI_TEXTURE_MIN_FILTER, NGUI_LINEAR_MIPMAP_NEAREST);
    context::tex_parameteri(NGUI_TEXTURE_2D, NGUI_TEXTURE_MAG_FILTER, NGUI_LINEAR);

    nstd::uint8 *pixel_data;
    printf("Malloc glyph: %d\n", glyph.image->format.pitch);
    pixel_data = (nstd::uint8 *)malloc((glyph.image->w * glyph.image->h * glyph.image->format.pitch) * 2);
    if (pixel_data == NULL)
        context::tex_image(NGUI_TEXTURE_2D, 0, NGUI_RGBA, (unsigned int)glyph.image->w, (unsigned int)glyph.image->h, 0, NGUI_LUMINANCE, NGUI_UNSIGNED_BYTE, glyph.image->pixels);
    else
    {
        printf("Assign\n");
        for (size_t l = 0 ; l < (size_t)(glyph.image->w * glyph.image->h * glyph.image->format.pitch) ; l++)
            memset(&pixel_data[l * 2], glyph.image->pixels[l * glyph.image->format.pitch], 2);

        printf("Bind %d, %d, %p\n", (unsigned int)glyph.image->w, (unsigned int)glyph.image->h, (void *)pixel_data);
        context::tex_image(NGUI_TEXTURE_2D, 0, NGUI_RGBA, (unsigned int)glyph.image->w, (unsigned int)glyph.image->h, 0, NGUI_LUMINANCE_ALPHA, NGUI_UNSIGNED_BYTE, pixel_data);

        printf("Free\n");
        ::free(pixel_data);
    }

    return 0;
}



I am dying here! Thanks for the help!

What is glyph.image->format.pitch?
Since the segfault disappears when you allocate more memory, the cause must be that you're not allocating enough memory in the first place.
Since you are using RGBA as the internal format, you are telling OpenGL to use 4 bytes per pixel. So unless that pitch is 4, it'll crash. (I'm assuming that the "* 2" is your allocate-twice-as-much-memory test.)


Can you tell us which of the two TexImage2D calls you're getting the segfault from? Also inspect the contents of variables and members in the debugger just before the TexImage2D call and make sure there aren't any crazy values in there, the kind of thing that can happen if you've a bad pointer elsewhere that's clobbering data on you. If everything else seems OK, maybe step through font_face::load_glyph in the debugger line by line, examining local variables and member variables at each step, and making sure that everything is as you expect it to be.

The most common causes of crashes during texture uploads would be either not enough memory allocated, or the pointer used is somehow invalid, so that's something else to check.

Sorry, I forgot to mention the pitch. The pitch is the image's number of bytes per pixel; in this case the font glyph is a 1-byte-per-pixel image (just gray). I thought that OpenGL did its own internal allocation, and that it would allocate memory based on the internal format but expand or compress the input data as needed to fit that internal format. You see, I believe GL_LUMINANCE_ALPHA is a two-component format, and I am using unsigned bytes for the data, so that makes the needed space (W * H * 2), correct? (You are right about the pitch; I shouldn't have it in there, but it is 1.) I have already run the code through the debugger and the memory pointer is fine. The stack fails somewhere deep inside OpenGL.

So you are saying that OpenGL will run through MY buffer based on the internal RGBA format, and not my format of GL_LUMINANCE_ALPHA @ GL_UNSIGNED_BYTE?

Okay, I got it working... kinda...

I tried scaling my image data so that both internal format and input format are RGBA. This resolved my SEGFAULT, but now I have an even stranger issue. All my textures are white. But here is the funny thing:

1. All of them are powers of two size-wise (256x128, 512x512, etc...)
2. My program works fine if I use gluBuild2DMipmaps
3. Mipmapping IS DISABLED!

What am I doing wrong?


nstd::sint32 font_face::load_glyph(font_glyph &glyph)
{
    context::enable(NGUI_TEXTURE_2D);
    context::gen_textures(1, &glyph.user_index);
    context::bind_texture(NGUI_TEXTURE_2D, glyph.user_index);

    context::tex_parameteri(NGUI_TEXTURE_2D, NGUI_TEXTURE_WRAP_S, NGUI_CLAMP);
    context::tex_parameteri(NGUI_TEXTURE_2D, NGUI_TEXTURE_WRAP_T, NGUI_CLAMP);

    /* Disable mipmapping */
    context::tex_parameteri(NGUI_TEXTURE_2D, NGUI_TEXTURE_MIN_FILTER, NGUI_NEAREST);
    context::tex_parameteri(NGUI_TEXTURE_2D, NGUI_TEXTURE_MAG_FILTER, NGUI_NEAREST);

    nstd::uint16 new_width = power_of_two(glyph.image->w), new_height = power_of_two(glyph.image->h);
    nstd::uint8 *pixel_data = glyph.scale_glyph(new_width, new_height, 4);

    if (pixel_data == NULL)
    {
        context::tex_image(NGUI_TEXTURE_2D, 0, NGUI_RGBA, (unsigned int)glyph.image->w, (unsigned int)glyph.image->h, 0, NGUI_LUMINANCE, NGUI_UNSIGNED_BYTE, glyph.image->pixels);
    }
    else
    {
        /* This is the function being called */
        glTexImage2D(NGUI_TEXTURE_2D, 0, NGUI_RGBA, (unsigned int)new_width, (unsigned int)new_height, 0, NGUI_RGBA, NGUI_UNSIGNED_BYTE, pixel_data);
        /* Works if I use this instead */
        gluBuild2DMipmaps(NGUI_TEXTURE_2D, NGUI_RGBA, (unsigned int)new_width, (unsigned int)new_height, NGUI_RGBA, NGUI_UNSIGNED_BYTE, pixel_data);
        ::free(pixel_data);
    }

    return 0;
}


By the way, I also checked glGetError() after calling glTexImage2D and it is returning GL_NO_ERROR. What am I doing wrong?

A white texture normally indicates an invalid texture object. gluBuild2DMipmaps does a LOT of things internally besides just building a mipmap chain; it checks against the max texture size allowed by your hardware, resizes to powers of 2, sets pixel store and transfer parameters, and so on. These are all potential items for you to check, especially that you're not exceeding GL_MAX_TEXTURE_SIZE.

I read somewhere that glGetError is not totally reliable for texture creation. Maybe try creating a proxy texture instead?

I have already checked all the things you mentioned:

1) Mipmapping is off.
2) The texture size is well within my card's limits.
3) All textures are power-of-two textures.
4) I even tried square power-of-two textures... no effect.
5) I have visually inspected my data... it is good.

One thing I haven't checked is the SDL functions. Aren't there SDL functions that modify the OpenGL context? Maybe one or more of those need calling...

Please help me... I have tried everything I know of, including the pixel unpack alignment, texture modulate, the calling order of the functions, etc. What am I doing wrong? Why is my texture all white when I call glTexImage2D, but it works fine when I call gluBuild2DMipmaps? Please help!

Okay... I figured it out, and it is rather embarrassing... My wrapper functions for glTexParameter* were never calling the corresponding GL functions... it works great now! :D
