# OpenGL multi-GPU AFR vs. frame lag & smooth animation

## Recommended Posts

Both ATI and Nvidia have offered multi-GPU solutions for some years now (CrossFire and SLI). There are three common modes of operation:

- Super AA or SLI AA mode (improved image quality through increased anti-aliasing);
- Scissor or split-frame rendering (SFR) mode;
- Alternate frame rendering (AFR) mode.

As outlined in talks from both IHVs, for a modern game engine that uses any kind of post-processing (which is all of them these days), the only usable mode is AFR. Note that even with AFR, care must be taken about when render targets (DirectX) or renderbuffers (OpenGL) are updated and cleared. Here are a few tips from the vendors:

- To avoid GPU starvation, make sure at least 2 frames are buffered ahead. I think the default is 3 for Nvidia and 2 for ATI. This way, at the end of a frame (SwapBuffers in OpenGL), the CPU does not have to wait for one GPU to finish before issuing new commands for the other GPU.
- Disable VSync (swap on vertical blank / wait for vertical refresh) to maximize FPS, if tearing is acceptable.
- Never call glFinish (OpenGL), since it kills the asynchronous work between the CPU and the GPUs: the CPU would have to wait for one GPU to finish its work before submitting any new commands to the other.

Obviously these tips maximize parallelism between the CPU issuing commands and the GPUs. But how is it possible to render a smooth animation this way? If the CPU is completely asynchronous with respect to the GPUs, then at the time it computes object positions it has no idea when the resulting frame will actually be displayed on screen. For my application tearing is not an option, so with VSync enabled I at least know that frames are displayed at a constant rate (the display refresh rate). But since frames are still buffered, the CPU still doesn't know when a computed frame will be displayed! The lag may also become a problem: with 3 frames buffered, and 2 frames being the maximum time a GPU takes to render the scene, the latency could max out at 5 frames.
At 60 Hz that would mean about 83 ms… I found that on Vista, using DirectX 9 or 10, it is possible to call a function named WaitForVBlank, which would solve my problem. Has anybody found a way to address this on Windows XP, with DirectX or OpenGL?

/Cubitus
