taby

OpenGL Works on AMD GPU in Windows, doesn't work on Intel GPU in Mac OS X



The attached code is supposed to draw a mesh using OpenGL 4.1 on Mac OS X, but it doesn't. It only displays the background.

 

The same code runs fine using OpenGL 4.4 on Windows (view2.zip). It displays the mesh.

 

view.h contains the code to upload the vertices and indices to the GPU. Any ideas what is wrong?

 

If I'm not mistaken, the first code sample in the Red Book doesn't work on Mac OS X either.

Edited by taby


If there are no errors or warnings from shader compilation, post the code of the draw call (its parameters), and also make sure your program explicitly specifies the index type (whether 16-bit or 32-bit). You may have been relying on defaults that differ on OS X.
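For example, a minimal sketch of keeping the uploaded element type and the draw call's type argument in sync; the vector name here is illustrative, not taken from the attached code:

std::vector<GLuint> indices; // 32-bit indices, filled elsewhere

// The buffer holds GLuint values...
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(GLuint), &indices[0], GL_STATIC_DRAW);

// ...so the matching draw call must pass GL_UNSIGNED_INT.
// GL_UNSIGNED_SHORT would only be correct if the buffer held GLushort data.
glDrawElements(GL_TRIANGLES, (GLsizei)indices.size(), GL_UNSIGNED_INT, 0);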


The draw code is:

void display_func(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // triangle_indices holds three indices per triangle, hence the count of size() * 3.
    glDrawElements(GL_TRIANGLES, triangle_indices.size() * 3, GL_UNSIGNED_INT, 0);

    glFlush();
}

 

There are no errors with the shader compile and link.

 

I specify the type as GLuint in the last line of this code:

// Transfer vertex data to GPU
glGenVertexArraysAPPLE(1, vaos);
glBindVertexArrayAPPLE(vaos[0]);
glGenBuffers(1, &buffers[0]);
glBindBuffer(GL_ARRAY_BUFFER, buffers[0]);
glBufferData(GL_ARRAY_BUFFER, vertices_with_face_normals.size()*6*sizeof(float), &vertices_with_face_normals[0], GL_STATIC_DRAW);

// Set up vertex positions
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), (GLvoid*)0);
glEnableVertexAttribArray(0);

// Set up vertex normals
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), (GLvoid*)(3 * sizeof(GLfloat)));
glEnableVertexAttribArray(1);

// Transfer index data to GPU
glGenBuffers(1, &buffers[1]);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[1]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, triangle_indices.size()*3*sizeof(GLuint), &triangle_indices[0], GL_STATIC_DRAW);


I've looked at your code, and nothing jumps out, beyond your use of a long-deprecated APPLE extension version of VAOs.

 

Do yourself a favour: #include <OpenGL/gl3.h> and use the core version of the VAO functions (or use something like GLEW to manage extensions for you).
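A minimal sketch of the core-profile equivalent, assuming a 3.2+ core context; the buffer and attribute calls stay exactly as they are, only the header and the VAO entry points change:

#include <OpenGL/gl3.h> // declares glGenVertexArrays and friends on OS X

GLuint vao = 0;

// Core-profile VAO creation, replacing the *APPLE variants.
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

// ... glGenBuffers / glBufferData / glVertexAttribPointer as before ...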

You should not need to use the functions ending in *APPLE. Try removing the *APPLE suffix and see if it works.

glutInitDisplayMode(GLUT_3_2_CORE_PROFILE|GLUT_RGBA);

This is a single-buffered GL context, and that's the first thing you need to fix: create a double-buffered context instead, replace the glFlush call with the appropriate swap-buffers call (glutSwapBuffers for GLUT), and see if that resolves it.

 

The reasoning here is that single-buffered contexts don't always play nicely with modern desktop compositors. You often see them in older tutorial code because that code dates back to a time when video hardware did not necessarily support double buffering. That time was the mid-1990s, so it's a practice that needs to die: all video hardware today supports double-buffered contexts and handles them more robustly than single-buffered ones.
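A minimal sketch of the suggested change, reusing the display_func posted above; GLUT_DEPTH is added here on the assumption that a depth buffer is wanted, since the glClear call requests GL_DEPTH_BUFFER_BIT:

// Request a double-buffered core-profile context with a depth buffer.
glutInitDisplayMode(GLUT_3_2_CORE_PROFILE | GLUT_RGBA | GLUT_DOUBLE | GLUT_DEPTH);

void display_func(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glDrawElements(GL_TRIANGLES, triangle_indices.size() * 3, GL_UNSIGNED_INT, 0);

    // Present the back buffer instead of calling glFlush().
    glutSwapBuffers();
}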

Edited by mhagain


The *APPLE calls are needed on Mac OS X El Capitan. I'll look at the other replies later. Thank you all for your help.


I use them on El Capitan without the *APPLE suffix.


 

That's strange. When I compile, it gives me errors and tells me to use the APPLE functions. I'm compiling in the terminal using g++.
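A hedged guess at the cause: on OS X, <GLUT/glut.h> pulls in the legacy <OpenGL/gl.h>, which only declares the APPLE-suffixed VAO entry points, so the compiler steers you toward those. Including the core header before GLUT is one way to make the core names visible; the file names below are hypothetical:

// main.cpp (hypothetical file name)
#include <OpenGL/gl3.h> // core declarations: glGenVertexArrays, glBindVertexArray, ...
#include <GLUT/glut.h>  // may warn that gl.h and gl3.h are both included

// Build from the terminal, linking against the OS X frameworks:
//   g++ main.cpp -framework OpenGL -framework GLUT -o view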
