Athe

OpenGL
Segmentation fault when loading texture using SOIL

6 posts in this topic

Hey all, I hope I'm posting this in the right place.

 

So I'm "refreshing" my OpenGL knowledge by following a series of tutorials, and have gotten to the texturing part.

For loading textures I wanted to try the SOIL library, which seemed pretty good. However, when trying to load a texture like so:

GLuint tex_2d = SOIL_load_OGL_texture("img_test.png",
                SOIL_LOAD_AUTO,
                SOIL_CREATE_NEW_ID,
                SOIL_FLAG_MIPMAPS | SOIL_FLAG_INVERT_Y | SOIL_FLAG_NTSC_SAFE_RGB | SOIL_FLAG_COMPRESS_TO_DXT);

I am getting a segmentation fault because SOIL is doing this:

if((NULL == strstr( (char const*)glGetString( GL_EXTENSIONS ),
    "GL_ARB_texture_non_power_of_two" ) ))
{
    /*      not there, flag the failure     */
    has_NPOT_capability = SOIL_CAPABILITY_NONE;
} else
{
    /*      it's there!     */
    has_NPOT_capability = SOIL_CAPABILITY_PRESENT;
}

And it appears that glGetString(GL_EXTENSIONS) is returning NULL, so strstr ends up being called with a NULL first argument, which is the actual crash.
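Just to confirm the crash mechanism, a NULL-guarded version of that check (my own sketch, not SOIL's actual code) avoids the segfault, though the extension still goes undetected:

const char *extensions = (char const*)glGetString( GL_EXTENSIONS );
if((NULL == extensions) ||
   (NULL == strstr( extensions, "GL_ARB_texture_non_power_of_two" )))
{
    /*      not there (or not queryable this way), flag the failure     */
    has_NPOT_capability = SOIL_CAPABILITY_NONE;
} else
{
    /*      it's there!     */
    has_NPOT_capability = SOIL_CAPABILITY_PRESENT;
}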

Upon searching for help on this, I found that calling glGetString with GL_EXTENSIONS has been deprecated (and removed from core contexts since OpenGL 3.1), and that one should now use glGetStringi instead.

 

So I updated the SOIL code to use this new way of querying for extensions, by implementing this function:


int query_gl_extension( const char *extension )
{
        GLint num_extensions, i; /* GLint, since glGetIntegerv expects a GLint* */

        /* core-profile way: get the extension count, then compare each
           extension string by index */
        glGetIntegerv(GL_NUM_EXTENSIONS, &num_extensions);
        for(i = 0; i < num_extensions; ++i)
        {
                const GLubyte *ext = glGetStringi(GL_EXTENSIONS, i);
                if(ext && strcmp((const char*)ext, extension) == 0)
                        return 1;
        }

        return 0;
}

But this changed nothing. If I step through this code in GDB and try to print the value of the variable ext, I get the following:

 

$2 = (const GLubyte *) 0xfffffffff473d244 <error: Cannot access memory at address 0xfffffffff473d244>
 

The strange part is: if I call glGetStringi(GL_EXTENSIONS, 0) from MY code right before loading the texture with SOIL, it works. So that should rule out my GL context not being active, no?

 

What could cause glGetStringi(GL_EXTENSIONS, index) to return what seems to be a pointer to random memory when invoked from SOIL, but not when invoked from my code?

 

My OpenGL version is 3.3 and my OS is 64-bit Ubuntu.

 

Thanks!


A small update:

If I call glGetStringi(GL_EXTENSIONS, 0) from my project and then call it again from within SOIL (by modifying the SOIL source), I find that the call from MY code returns a pointer to a correct string ("GL_AMD_multi_draw_indirect") at 0x00007ffff473d244, while the call from SOIL returns a pointer to 0xfffffffff473d244.

 

So it would seem that the second time it returns the pointer, its upper 4 bytes have all been set to 0xFF.

This leads me to believe that somewhere, the pointer returned from glGetStringi is handled as a 32-bit value and then cast back to a pointer type. But why would that happen?
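If that is what's happening, it should be reproducible without OpenGL at all. Here is a minimal sketch of the suspected round trip (my own illustration; the pointer value is hard-coded from the GDB session above):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        uintptr_t real = 0x00007ffff473d244;       /* what the driver actually returns */
        int as32 = (int)real;                      /* truncated to 32 bits: 0xf473d244, a negative int */
        void *back = (void *)(intptr_t)as32;       /* sign-extended back to 64 bits */

        printf("%p\n", back);                      /* prints 0xfffffffff473d244 */
        return 0;
}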

 

Update #2:

I had missed a compilation warning that seems to confirm my suspicion:

/home/emil/projects/gltest/external/soil/SOIL.c: In function ‘query_gl_extension’:
/home/emil/projects/gltest/external/soil/SOIL.c:1888:24: warning: initialization makes pointer from integer without a cast [enabled by default]
   const GLubyte *ext = glGetStringi(GL_EXTENSIONS, 0);

However, the call to glGetStringi that I make from MY code does not generate this warning. I fail to see why this warning would occur, and Google gives me nothing.

Edited by Athe

Sounds weird. What compiler and which version of it are you using? And just a random guess, but does SOIL.c have the needed includes for glGetStringi?


Thanks for your reply,

I'm using GCC 4.8.2 (my code is compiled as C++, and SOIL is compiled as C).

 

SOIL.c includes GL/gl.h, which in turn includes GL/glext.h, where the declaration for glGetStringi is found.


And are you actually defining GL_GLEXT_PROTOTYPES so that glGetStringi gets declared? I just made a simple test; compiling it as C only gives a warning, while compiling it as C++ creates a hard error:

 

gcc -Wall -g test.c $(pkg-config --libs --cflags sdl2) -lSOIL -lGL -lGLEW && ./a.out:

test.c:10:3: warning: implicit declaration of function ‘glGetStringi’ [-Wimplicit-function-declaration]

g++ -Wall -g test.cpp $(pkg-config --libs --cflags sdl2) -lSOIL -lGL -lGLEW && ./a.out:

test.cpp:10:53: error: ‘glGetStringi’ was not declared in this scope
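
The reason for the difference is that glext.h only declares the actual prototypes behind a guard, roughly like this (paraphrased from the Khronos header; the exact layout varies by version):

typedef const GLubyte *(APIENTRYP PFNGLGETSTRINGIPROC) (GLenum name, GLuint index);
#ifdef GL_GLEXT_PROTOTYPES
GLAPI const GLubyte *APIENTRY glGetStringi (GLenum name, GLuint index);
#endif

Without the prototype, C falls back to an implicit declaration that returns int, hence only a warning, while C++ has no implicit declarations and refuses to compile.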
Edited by Sponji

I am not defining GL_GLEXT_PROTOTYPES myself, but I assume that GLFW/GLEW or some other header file that I'm using defines it, because I do not get that compilation error.


OK, so now I just tried creating a small test program in C, like this:

#include <GLFW/glfw3.h>
#include <stdio.h>

int main(void)
{
        GLFWwindow *window;
        const GLubyte *ext;

        if(!glfwInit()) {
                fprintf(stderr, "Failed to initialize GLFW\n");
                return -1;
        }

        glfwWindowHint(GLFW_SAMPLES, 4); // 4x antialiasing
        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3); // OpenGL 3.3
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
        glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); // Apparently needed "to make OS X happy"
        glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

        window = glfwCreateWindow(1024, 768, "Tutorial 01", NULL, NULL);
        if(window == NULL) {
                fprintf(stderr, "Failed to open GLFW window.");
                glfwTerminate();
                return -1;
        }

        glfwMakeContextCurrent(window);

        /* this is the call that returns the mangled pointer */
        ext = glGetStringi(GL_EXTENSIONS, 0);
        printf("%s\n", (const char*)ext);

        glfwTerminate();
        return 0;
}

And it failed just like before: the pointer returned from glGetStringi is not correct; its upper 4 bytes are all 0xFF, just as if a 32-bit pointer were being returned to my 64-bit program with the extra bytes filled in with 0xFF.

 

However, if I do what you said and define GL_GLEXT_PROTOTYPES, it works! That also explains the mangled pointer: with only an implicit declaration, the compiler treated glGetStringi's return value as a 32-bit int, and the sign extension back to pointer size is where the 0xFF bytes came from. I'm glad that solves the problem!
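
For anyone finding this thread later, the only change to the test program above is the define, and it has to come before any GL header is pulled in. This is just how I fixed it here; a loader such as GLEW, which fetches the function pointers at runtime, would be the more portable route:

#define GL_GLEXT_PROTOTYPES /* must come before GLFW/GL headers */
#include <GLFW/glfw3.h>
#include <stdio.h>

/* ...rest of the program unchanged... */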


