Syerjchep

OpenGL program doesn't start on other people's computers.



So even though I can run my program nicely, it seems few others can.

Here's some init code, with irrelevant stuff snipped out between lines:

if( !glfwInit() )
    log("Failed to initialize GLFW\n");

// Code for prefs and such goes here.

glfwOpenWindowHint(GLFW_FSAA_SAMPLES, 4);
if(oldopengl)
{
    glfwOpenWindowHint(GLFW_OPENGL_VERSION_MAJOR, 3);
    glfwOpenWindowHint(GLFW_OPENGL_VERSION_MINOR, 2);
}
else
{
    glfwOpenWindowHint(GLFW_OPENGL_VERSION_MAJOR, 4);
    glfwOpenWindowHint(GLFW_OPENGL_VERSION_MINOR, 0);
}
glfwOpenWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

if(fullscreen)
{
    // Open a window and create its OpenGL context
    if( !glfwOpenWindow( screenx,screeny, 0,0,0,0, 32,0, GLFW_FULLSCREEN ) )
    {
        log("Failed to open GLFW window. If you have an older (Intel) GPU, they are not 4.0 compatible.\n");
        glfwTerminate();
    }
}
else
{
    // Open a window and create its OpenGL context
    if( !glfwOpenWindow( screenx,screeny, 0,0,0,0, 32,0, GLFW_WINDOW ) )
    {
        log("Failed to open GLFW window. If you have an older (Intel) GPU, they are not 4.0 compatible.\n");
        glfwTerminate();
    }
}

// Initialize GLEW
glewExperimental = true; // Needed for core profile
if (glewInit() != GLEW_OK)
    log("Failed to initialize GLEW\n");

Excuse the lack of formatting; I can't add tabs easily here.

Also, I actually took out a lot of other code in between lines, just leaving the OpenGL init code.

 

I can run it fine. I have an older but higher-tier desktop with Windows 7, a 6-core processor, 8 GB of RAM, and a GTX 560 that can run it well.

I also have a Windows 8 laptop with integrated graphics that can run it as well, albeit with lower settings to maintain frame rates.

 

Yet it seems that a majority of the people I give the application to simply can't run it; it crashes immediately.

Log files often come back like this:

0   : Program started.
0.00   : 4 quotes read.
0.00   : Reading prefs file.
0.05   : Failed to open GLFW window. If you have an older (Intel) GPU, they are not 4.0 compatible.

0   : Failed to initialize GLEW

0   : OpenAL: No Error
0   : Networking initalized.
0   : Initalized.
0   : Error: Missing GL version
0   : Using glew version: 1.7.0
0   : 
0   : 
0   : 
0   : 
0   : Opengl version used: 13107512.2007382186

In cases like the above, OpenGL (specifically GLFW in this case) doesn't even load.

Other times the logs look like this:

0   : Program started.
0.00   : 4 quotes read.
0.00   : Reading prefs file.
0.38   : OpenAL: No Error
0.38   : Networking initalized.
0.38   : Initalized.
0.38   : Using glew version: 1.7.0
0.38   : AMD Radeon HD 6800 Series
0.38   : ATI Technologies Inc.
0.39   : 3.2.12618 Core Profile Context 13.251.0.0
0.39   : 4.30
0.39   : Opengl version used: 3.2
0.39   : Compiling shader: StandardShadingVertex.glsl

It seems to load okay, but loading the shaders crashes the program. 

(I'll post shader loading code in a second.)

 

 

I realize I haven't provided a ton of information, so I'll answer questions as needed and find the shader loading code to post.

Lastly, I can offer the program itself if anyone wants to try running it.

Here:

http://syerjchep.org/release.zip

If you can even get to the black screen with white text on it, then it works for you.

 


Your code shows the use of GLFW (the old version, by the way; GLFW3 is a better choice), and yet the program you included comes with SDL. That seems a bit odd. I can't say why the first example failed; it obviously failed to create a valid context, which is why GLEW didn't initialize. As for the second example, it's likely that one or more of your shaders contain GLSL which is invalid. Nvidia's GLSL compiler is implemented by way of Cg, and it will allow some code which is only valid in Cg to compile as GLSL. AMD's compiler is much stricter.
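
For what it's worth, the easiest way to make that first failure mode obvious is to stop when the window fails to open instead of carrying on without a context, and optionally retry with a lower version hint before giving up. A rough sketch against the GLFW 2.x API your code already uses (the helper name and the 4.0-then-3.2 fallback order are just an illustration, not something from your program):

#include <GL/glfw.h>   // GLFW 2.x header, assumed from the calls above
#include <cstdio>

// Illustrative helper: try to open a core-profile window at the requested
// version, then fall back to 3.2 before giving up. Returns false if no
// context could be created at all.
static bool openWindowWithFallback(int w, int h, int mode)
{
    const int versions[][2] = { {4, 0}, {3, 2} };
    for (int i = 0; i < 2; ++i)
    {
        glfwOpenWindowHint(GLFW_FSAA_SAMPLES, 4);
        glfwOpenWindowHint(GLFW_OPENGL_VERSION_MAJOR, versions[i][0]);
        glfwOpenWindowHint(GLFW_OPENGL_VERSION_MINOR, versions[i][1]);
        glfwOpenWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
        if (glfwOpenWindow(w, h, 0, 0, 0, 0, 32, 0, mode))
            return true;
        std::fprintf(stderr, "Could not create a %d.%d core context\n",
                     versions[i][0], versions[i][1]);
    }
    return false;
}

// Usage sketch: bail out instead of continuing with no context, which is
// what later produces GLEW's "Missing GL version" error.
//
//     if (!openWindowWithFallback(screenx, screeny,
//                                 fullscreen ? GLFW_FULLSCREEN : GLFW_WINDOW))
//     {
//         glfwTerminate();
//         return 1;
//     }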



Nvidia's GLSL compiler is implemented by way of Cg
I doubt that's the case anymore. Recent GLSL has no Cg equivalent.


Your code shows the use of GLFW (the old version, by the way; GLFW3 is a better choice), and yet the program you included comes with SDL. That seems a bit odd. I can't say why the first example failed; it obviously failed to create a valid context, which is why GLEW didn't initialize. As for the second example, it's likely that one or more of your shaders contain GLSL which is invalid. Nvidia's GLSL compiler is implemented by way of Cg, and it will allow some code which is only valid in Cg to compile as GLSL. AMD's compiler is much stricter.

SDL_Net is used for the program's networking. It is not used for anything concerning graphics, though I believe I still have to link to all the standard SDL graphical libraries.

As for shaders, yes, they're GLSL. But I thought that was just the term (OpenGL Shading Language) for shaders used with OpenGL, so I don't know what you mean about Cg. The shaders are in the zip with the program, obviously, but if requested I could just post them here.

 

 


Nvidia's GLSL compiler is implemented by way of Cg
I doubt that's the case anymore. Recent GLSL has no Cg equivalent.

 

Once again, even though I know how to program in OpenGL and write GLSL shaders, I don't really know what you're talking about.

 

 

Edit: This is the code I'm currently using to load shaders; it's from a tutorial some of you might recognize:

GLuint LoadShaders(const char * vertex_file_path,const char * fragment_file_path)
{
    // Create the shaders
    GLuint VertexShaderID = glCreateShader(GL_VERTEX_SHADER);
    GLuint FragmentShaderID = glCreateShader(GL_FRAGMENT_SHADER);
 
    // Read the Vertex Shader code from the file
    std::string VertexShaderCode;
    std::ifstream VertexShaderStream(vertex_file_path, std::ios::in);
    if(VertexShaderStream.is_open())
    {
        std::string Line = "";
        while(getline(VertexShaderStream, Line))
            VertexShaderCode += "\n" + Line;
        VertexShaderStream.close();
    }
 
    // Read the Fragment Shader code from the file
    std::string FragmentShaderCode;
    std::ifstream FragmentShaderStream(fragment_file_path, std::ios::in);
    if(FragmentShaderStream.is_open()){
        std::string Line = "";
        while(getline(FragmentShaderStream, Line))
            FragmentShaderCode += "\n" + Line;
        FragmentShaderStream.close();
    }
 
    GLint Result = GL_FALSE;
    int InfoLogLength;
 
    // Compile Vertex Shader
    log("Compiling shader: "+string(vertex_file_path));
    char const * VertexSourcePointer = VertexShaderCode.c_str();
    glShaderSource(VertexShaderID, 1, &VertexSourcePointer , NULL);
    glCompileShader(VertexShaderID);
 
    // Check Vertex Shader
    glGetShaderiv(VertexShaderID, GL_COMPILE_STATUS, &Result);
    glGetShaderiv(VertexShaderID, GL_INFO_LOG_LENGTH, &InfoLogLength);
    std::vector<char> VertexShaderErrorMessage(InfoLogLength);
    glGetShaderInfoLog(VertexShaderID, InfoLogLength, NULL, &VertexShaderErrorMessage[0]);
    log(&VertexShaderErrorMessage[0]);
 
    // Compile Fragment Shader
    log("Compiling shader: "+string(fragment_file_path));
    char const * FragmentSourcePointer = FragmentShaderCode.c_str();
    glShaderSource(FragmentShaderID, 1, &FragmentSourcePointer , NULL);
    glCompileShader(FragmentShaderID);
 
    // Check Fragment Shader
    glGetShaderiv(FragmentShaderID, GL_COMPILE_STATUS, &Result);
    glGetShaderiv(FragmentShaderID, GL_INFO_LOG_LENGTH, &InfoLogLength);
    std::vector<char> FragmentShaderErrorMessage(InfoLogLength);
    glGetShaderInfoLog(FragmentShaderID, InfoLogLength, NULL, &FragmentShaderErrorMessage[0]);
    log(&FragmentShaderErrorMessage[0]);
 
    // Link the program
    log("Linking program.");
    GLuint ProgramID = glCreateProgram();
    glAttachShader(ProgramID, VertexShaderID);
    glAttachShader(ProgramID, FragmentShaderID);
    glLinkProgram(ProgramID);
 
    // Check the program
    glGetProgramiv(ProgramID, GL_LINK_STATUS, &Result);
    glGetProgramiv(ProgramID, GL_INFO_LOG_LENGTH, &InfoLogLength);
    std::vector<char> ProgramErrorMessage( max(InfoLogLength, int(1)) );
    glGetProgramInfoLog(ProgramID, InfoLogLength, NULL, &ProgramErrorMessage[0]);
    log(&ProgramErrorMessage[0]);
 
    glDeleteShader(VertexShaderID);
    glDeleteShader(FragmentShaderID);
 
    return ProgramID;
}
Edited by Syerjchep


 


Nvidia's GLSL compiler is implemented by way of Cg
I doubt that's the case anymore. Recent GLSL has no Cg equivalent.

 

 

In any case, I know for a fact that Nvidia's GLSL compiler will accept some code which does not conform to the Khronos spec. This is one of the pitfalls of developing GLSL shaders on a workstation with an Nvidia graphics card. Luckily there has been a recent development called glslang, which is an official GLSL reference compiler from Khronos. If glslangValidator compiles your code without error and AMD or Nvidia fails, then it's their fault, not yours.

Edited by Chris_F


Like Chris_F said, Nvidia's compiler is a lot more lenient (and has more features), while AMD's is standards-compliant.

You need to check for GLSL compilation errors. These are things you can fix on your friends' computers in a heartbeat, because you should be getting the output as you compile each shader (if they fail).

 

I have made an implementation here:

https://github.com/fwsGonzo/library/blob/master/library/opengl/shader.cpp

 

It's probably not perfect, but it will tell you what went wrong.

 

What you are looking for is:

glGetShaderiv(shader_v, GL_COMPILE_STATUS, &status);

and

glGetProgramiv(this->shader, GL_LINK_STATUS, &status);

respectively.

 

Also, don't forget to LIBERALLY check for general OpenGL errors:

if (OpenGL::checkError())

....
 
It's important to have a general idea (best guess) of where things went wrong, since it's unlikely your friends will figure that out for you. :)
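 
To make that concrete, here's a minimal sketch of a per-stage compile check that only reads the info log when the driver actually produced one, and refuses to carry on when compilation failed (the compileShaderChecked helper and the log() declaration are illustrative, not code from this thread):

#include <GL/glew.h>
#include <string>
#include <vector>

void log(const std::string& message);   // assumed logging helper, as used elsewhere in the thread

// Illustrative helper: compile one shader stage and return 0 on failure
// so the caller can stop instead of linking a broken program.
GLuint compileShaderChecked(GLenum type, const std::string& source)
{
    GLuint shader = glCreateShader(type);
    const char* src = source.c_str();
    glShaderSource(shader, 1, &src, NULL);
    glCompileShader(shader);

    GLint status = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &status);

    GLint logLength = 0;
    glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &logLength);
    if (logLength > 1)   // only read the log if the driver wrote one
    {
        std::vector<char> infoLog(logLength);
        glGetShaderInfoLog(shader, logLength, NULL, &infoLog[0]);
        log(std::string(&infoLog[0]));
    }

    if (status != GL_TRUE)
    {
        glDeleteShader(shader);
        return 0;
    }
    return shader;
}

The same GL_LINK_STATUS / GL_INFO_LOG_LENGTH pattern applies to the program object after glLinkProgram.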
 
Also, it might be worth mentioning that unless you absolutely need 32-bit depth testing, you could go for a 24-bit depth / 8-bit stencil buffer (D24S8), since that is the most common format.
Whether or not it's the fastest, I have no idea.


If it's available to you, you might also want to log errors from GL_ARB_debug_output/GL_KHR_debug.
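
For example, a minimal sketch of hooking up the KHR_debug callback through GLEW (this needs a GLEW new enough to expose GLEW_KHR_debug, which the 1.7.0 in the logs above may not be, plus a driver that implements the extension; the callback body is just an illustration):

#include <GL/glew.h>
#include <cstdio>

// Print every message the driver reports; the signature is the one
// glDebugMessageCallback expects.
static void GLAPIENTRY onGlDebugMessage(GLenum source, GLenum type, GLuint id,
                                        GLenum severity, GLsizei length,
                                        const GLchar* message, const void* userParam)
{
    std::fprintf(stderr, "GL debug: %s\n", message);
}

void installDebugCallback()   // call once, after glewInit() has succeeded
{
    if (GLEW_KHR_debug)
    {
        glEnable(GL_DEBUG_OUTPUT);
        glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);   // report on the thread making the offending call
        glDebugMessageCallback(onGlDebugMessage, NULL);
    }
}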

Well, I've implemented plenty of glGetError calls, but now I need to upload the new version of the program and find someone with an AMD graphics card.

Also, is it bad if my vertex shader is #version 330 core but my fragment shader is #version 150 and they're used together?


Works fine for me.  Checking the stderr log I see the following:

0	: Program started.
0.00	: 4 quotes read.
0.00	: Could not find prefs file, creating with defaults.
0.27	: OpenAL: No Error
0.28	: Networking initalized.
0.28	: Initalized.
0.28	: Using glew version: 1.7.0
0.28	: Intel(R) HD Graphics 4000
0.28	: Intel
0.28	: 4.0.0 - Build 9.18.10.3071
0.28	: 4.00 - Build 9.18.10.3071
0.29	: Opengl version used: 4.0
0.29	: Compiling shader: shaders/StandardShadingVertex.glsl
0.33	: No errors.

This was on a laptop with NVIDIA Optimus, but note that it chose to use the Intel graphics, not the NV. That's my first diagnosis, and it seems consistent with what you report in your first post:

0.38   : AMD Radeon HD 6800 Series
0.38   : ATI Technologies Inc.
0.39   : 3.2.12618 Core Profile Context 13.251.0.0
0.39   : 4.30
0.39   : Opengl version used: 3.2

A Radeon 6800 shouldn't be using GL 3.2, so my bet is that this is likewise a laptop, this time with AMD switchable graphics and a hacked-up driver (e.g. see http://leshcatlabs.net/2013/11/11/amd-enables-legacy-switchable-graphics-support/ for a discussion of some of the horrible things you had to do if you wanted to update a driver using this technology).

 

Later on today I can test this on a standalone Radeon and we'll see what the results are.


OK, here's AMD:

4.74	: AMD Radeon HD 6450
4.74	: ATI Technologies Inc.
4.74	: 4.0.12430 Core Profile Context 13.152.1.8000
4.74	: 4.30
4.74	: Opengl version used: 4.0
4.80	: Compiling shader: shaders/StandardShadingVertex.glsl
terminate called after throwing an instance of 'std::logic_error'
  what():  basic_string::_S_construct null not valid

This was accompanied by a nice friendly "the application has requested the Runtime to terminate it in an unusual way" error.  Hopefully that helps you a bit more with troubleshooting.
