serratemplar

OpenGL: Is the accumulation buffer supported on all hardware?


That's pretty much the long and short of it. I've got a game that runs on two Linux machines and three Windows machines, all with different video cards, and it looks good on every one of them. On my MacBook Pro, no go: there it runs at less than 3fps, while it averages 30fps on the other systems. When I turn off the accumulation buffer, however, it flies at well over 30fps (topping out at 60). My thinking is that it's not a hardware issue, but that I'm not doing something right with initialization.

To test that, I've run the code both in GLUT and in an SDL skeleton I hacked up from NeHe; again, both work well on every system other than my MacBook Pro. The MacBook is brand new, literally a week out of the box. I run World of Warcraft on it, all settings maxed out, and it looks very good and gets a good framerate (40fps or so), but I don't know if WoW uses the accumulation buffer, and none of the demos I've run (which all run well) purport to.

So is this a hardware issue, or am I possibly not initializing the accumulation buffer properly? In GLUT it looks like it's just a flag to pass to the init function, and in SDL you just set the size for each channel and you're done; a sketch of both init paths is below. Someone suggested I use "straight AGL", and to that end I've been looking for documentation on Apple's OpenGL implementation. I'm still looking, but I thought it couldn't hurt to post for help here too. Thank you in advance.
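For reference, here's roughly what I'm doing in each. These are just the relevant bits of two separate programs, not complete code, and the window title, size, and bit depths are made up:

    /* GLUT path: request an accumulation buffer via the display-mode flags. */
    #include <GL/glut.h>

    void initGlut(int argc, char **argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_ACCUM);
        glutCreateWindow("accum test");
    }

    /* SDL 1.2 path: request accumulation bits per channel, then set the mode. */
    #include <SDL/SDL.h>

    void initSdl(void)
    {
        SDL_Init(SDL_INIT_VIDEO);
        SDL_GL_SetAttribute(SDL_GL_ACCUM_RED_SIZE,   16);
        SDL_GL_SetAttribute(SDL_GL_ACCUM_GREEN_SIZE, 16);
        SDL_GL_SetAttribute(SDL_GL_ACCUM_BLUE_SIZE,  16);
        SDL_GL_SetAttribute(SDL_GL_ACCUM_ALPHA_SIZE, 16);
        SDL_SetVideoMode(800, 600, 32, SDL_OPENGL);
    }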

Accumulation buffers are pretty standard. It's possible that the MacBook doesn't have one, but I'd be surprised. I can't help you more without a rundown of the hardware specs on each machine; I would start by comparing what type of card your Windows boxes have versus your MacBook. Another potential source of slowdown is the way your drawing code works. One common mistake is to use glVertex to send each point or vertex to the video card individually; doing that will make your code run horribly slowly. The sketch below shows the difference.
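For example (a toy sketch; verts and numVerts stand in for your own mesh data):

    #include <GL/gl.h>

    extern GLfloat verts[][3];   /* placeholder: your vertex data */
    extern int     numVerts;

    /* The common mistake: one driver call per vertex, every frame. */
    void drawSlow(void)
    {
        int i;
        glBegin(GL_TRIANGLES);
        for (i = 0; i < numVerts; ++i)
            glVertex3fv(verts[i]);
        glEnd();
    }

    /* Much better: hand the whole array to the driver in one call. */
    void drawFast(void)
    {
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, verts);
        glDrawArrays(GL_TRIANGLES, 0, numVerts);
        glDisableClientState(GL_VERTEX_ARRAY);
    }

Display lists and vertex buffer objects push this further by keeping the data on the card between frames.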

I used display lists for everything.

I can get you those specs once I get home; I think I'll have time. I have a tool on my Mac (it's called OpenGL Profiler) that lists all of the capabilities of my card, and while it's a big list, I don't see the letters "accum" anywhere. I don't have a similar tool on my other systems (which run Windows and Linux), so I'm not sure whether Profiler simply doesn't list the accumulation buffer (since it's standard) or what.
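One thing I can try in the meantime is asking the context directly at runtime. A quick sketch; if I call this after the window is created, all zeros should mean no accumulation buffer was actually allocated:

    #include <GL/gl.h>
    #include <stdio.h>

    /* Prints the accumulation-buffer depth of the current GL context. */
    void printAccumBits(void)
    {
        GLint r = 0, g = 0, b = 0, a = 0;
        glGetIntegerv(GL_ACCUM_RED_BITS,   &r);
        glGetIntegerv(GL_ACCUM_GREEN_BITS, &g);
        glGetIntegerv(GL_ACCUM_BLUE_BITS,  &b);
        glGetIntegerv(GL_ACCUM_ALPHA_BITS, &a);
        printf("accum bits: R=%d G=%d B=%d A=%d\n", r, g, b, a);
    }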

Thanks for the help =)

They've all got ATI cards (I'd give you more detail, but I didn't make it home this evening), and expensive ones. The one in the laptop is actually no longer supported (it's an ATI Rage card), and even it runs the code fine.

Not only does the code run abysmally slowly on the Mac, but the picture looks like jagged Swiss cheese instead of, well, the solid scene I see on the other systems.

The ATI in my desktop has 512MB; the one in my MacBook is an ATI X1600 with 128MB.

This problem intrigued me for some reason (I'm not even a MacBook guy). I Googled around for you and found this at the Apple developer site:

http://developer.apple.com/technotes/tn/tn2014.html


The interesting bit is:
Quote:

Buffer Allocation on Mac OS 9 and OS X

OS 9 allocates buffers on a per-context basis; each context that is created has its own set of independent buffers that are exclusive to that context. Buffers can be shared among contexts on Mac OS 9 by specifying a context to share with in aglCreateContext().

Stencil, AUX (auxiliary) and accumulation buffers are not supported in hardware on Mac OS 9.

Mac OS X allocates buffers on a per-surface basis; a "surface" is the equivalent of a drawable on Mac OS X. You may create and attach multiple contexts to a single surface in Mac OS X. All contexts created on a given surface will share the same set of buffers.

Mac OS X also supports AUX (auxiliary) buffers and an 8-bit stencil buffer. Accumulation buffers are not supported in hardware on Mac OS X.



Be advised this is from March 2001, and if I recall correctly Mac OS X is still the latest Mac OS? (Please don't kill me if I'm wrong!)

Hope this is informative in some way for you...

Mac OS X is in fact my OS; it's 10.4 now, and I'd have liked to think they'd fixed such a thing by now, but so far experimentation suggests they have not. Thanks for finding that for me, disappointing though it is =(

Ah well. Plenty of other powers to play with. I suppose I can live without the accumulation buffer for now =)

Definitely don't give up yet, though; my source is so old that it'd just be awkward if they still hadn't fixed something like that (although from what I hear, the accumulation buffer isn't always necessary and is slow anyway). A sketch of one common substitute is below.
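For example, the classic accumulation-buffer motion blur can be approximated by capturing each frame into a texture and blending it over the next one. I haven't tested this, and drawScene() and the 512x512 window size are placeholders for your own code, but roughly (fixed-function GL):

    #include <GL/gl.h>

    extern void   drawScene(void);          /* placeholder: your rendering */
    static GLuint prevFrame;                /* texture holding last frame */
    static int    winW = 512, winH = 512;   /* must match the window; power
                                               of two for older hardware */

    void blurInit(void)
    {
        glGenTextures(1, &prevFrame);
        glBindTexture(GL_TEXTURE_2D, prevFrame);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        /* allocate storage only; contents are filled each frame below */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, winW, winH, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, NULL);
    }

    void blurFrame(void)
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        drawScene();

        /* switch to a 0..1 ortho projection for a full-screen overlay quad */
        glMatrixMode(GL_PROJECTION);
        glPushMatrix();
        glLoadIdentity();
        glOrtho(0.0, 1.0, 0.0, 1.0, -1.0, 1.0);
        glMatrixMode(GL_MODELVIEW);
        glPushMatrix();
        glLoadIdentity();

        /* blend 70% of the previous frame over the freshly drawn scene */
        glDisable(GL_DEPTH_TEST);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, prevFrame);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glColor4f(1.0f, 1.0f, 1.0f, 0.7f);
        glBegin(GL_QUADS);
            glTexCoord2f(0, 0); glVertex2f(0, 0);
            glTexCoord2f(1, 0); glVertex2f(1, 0);
            glTexCoord2f(1, 1); glVertex2f(1, 1);
            glTexCoord2f(0, 1); glVertex2f(0, 1);
        glEnd();
        glDisable(GL_BLEND);
        glDisable(GL_TEXTURE_2D);
        glEnable(GL_DEPTH_TEST);

        /* grab the blended result so it becomes "last frame" next time */
        glBindTexture(GL_TEXTURE_2D, prevFrame);
        glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, winW, winH);

        glPopMatrix();                      /* restore modelview */
        glMatrixMode(GL_PROJECTION);
        glPopMatrix();                      /* restore projection */
        glMatrixMode(GL_MODELVIEW);
    }

The effect is weaker than a true accumulation buffer (you only get an exponential falloff), but it runs in hardware everywhere.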

Go to a Mac forum that deals with OpenGL programming and post this question.

Then post your findings here so others can see it :-)

