OpenGL: When to draw and swap buffers

Posted by XTAL256


I am currently making the graphical user interface (GUI) for my game using OpenGL (C++, VS2005). I have Window objects which hold an array of Component objects; these can be Buttons, CheckBoxes, etc. Each component (Window is also a subclass of Component) has a draw() function which draws its images. For testing, I just call the draw function for my window in the WinMain loop:
while(!done) {
    if (PeekMessage(&msg,NULL,0,0,PM_REMOVE)) {
        if (msg.message==WM_QUIT) {
            done = TRUE;
        }
        else {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    }
    else {
        clearScreen();
        // Test
        screen.draw();
        SwapBuffers(hdc);
    }
}

Don't worry about what that code does, it's not really important. What I want to know is where to put SwapBuffers(hdc). If I draw everything in the loop, it uses 100% CPU. I want to draw only when an event occurs, such as moving the mouse over a button or pressing a key. I have made an event system which uses Windows' WNDPROC function to get events and then passes those events to all Components that listen for events. Now, I only want to call draw() for each component when it gets an event. But how do I know when to swap buffers? If this cannot be done, I could have two separate graphics contexts: one double-buffered for drawing my game scenes, the other single-buffered for drawing the GUI.
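
For reference, this is roughly what I'm picturing (an untested sketch, not my actual code; needsRedraw is just a placeholder flag that the event system would set whenever a component changes):

// Untested sketch: redraw only when an event has marked the GUI dirty.
bool needsRedraw = true;   // draw once at startup

while (!done) {
    if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
        if (msg.message == WM_QUIT) {
            done = TRUE;
        } else {
            TranslateMessage(&msg);
            DispatchMessage(&msg);   // WNDPROC could set needsRedraw here
        }
    } else if (needsRedraw) {
        clearScreen();
        screen.draw();               // redraw the whole GUI
        SwapBuffers(hdc);            // one swap per full redraw
        needsRedraw = false;
    } else {
        WaitMessage();               // block until the next message arrives
    }
}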

It can be a bit hard to update the buffer that way, as you'll need pretty good knowledge of the damaged area: when the UI is updated you have to erase the dirty region (by replacing it with "game" content) and then draw the updated area. In terms of code complexity, this approach needs careful attention, as it can easily lead to messy code (yes, I've been there :-)

An easier approach is to use offscreen buffers (FBOs): render the game content into one and the UI into the other. That way you make the update events much coarser: "update game" or "update UI".
The final scene is then rendered by compositing the two layers.
That's more expensive than the first solution, but much easier to implement.

Another drawback of the second approach is the "double-buffer" issue: you'll be tempted to duplicate resources to hide more of the latency. For instance, you might want to double-buffer the "game content" FBO.
That might turn out to be a good idea, but I suggest starting with everything single-buffered (except the main buffer) and optimizing afterwards.
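
The setup is roughly this (untested sketch; uiFBO, uiTexture and screenWidth/screenHeight are placeholder names, and you may need the EXT_framebuffer_object variants (glGenFramebuffersEXT and so on) depending on your drivers):

// Create a texture and an FBO to render the UI into (do the same for the game).
GLuint uiFBO, uiTexture;

glGenTextures(1, &uiTexture);
glBindTexture(GL_TEXTURE_2D, uiTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, screenWidth, screenHeight, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &uiFBO);
glBindFramebuffer(GL_FRAMEBUFFER, uiFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, uiTexture, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// Only when the UI changes: redraw it into the offscreen texture.
glBindFramebuffer(GL_FRAMEBUFFER, uiFBO);
clearScreen();
screen.draw();
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// Every frame: draw the game, then composite uiTexture on top
// (e.g. a fullscreen textured quad with alpha blending), then swap.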

I don't really know how to use offscreen buffers, but would it be just as good to create two separate rendering contexts (HDC and HGLRC)? That would be easy, because I can just make my graphics class non-static and create two objects of the class.

I would keep that code and just add sleep(howmuch), continuously measure the FPS, and adjust howmuch until I get 60 FPS.
That will reduce the CPU usage to 1-5%.
I don't recommend that while the game is running! Only do this for your game menu.
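
On Windows it could look something like this (rough, untested sketch; Sleep() and GetTickCount() only have roughly millisecond accuracy, so treat it as approximate):

// Inside the render branch of the loop: cap at ~60 FPS by sleeping
// off the remainder of each ~16 ms frame.
const DWORD targetFrameMs = 1000 / 60;

DWORD frameStart = GetTickCount();
clearScreen();
screen.draw();
SwapBuffers(hdc);

DWORD elapsed = GetTickCount() - frameStart;
if (elapsed < targetFrameMs)
    Sleep(targetFrameMs - elapsed);   // give the rest of the frame back to the OS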

I thought I would try using a single buffer for the UI just to see what it's like, but I can't get it to work. I removed the PFD_DOUBLEBUFFER flag from the PIXELFORMATDESCRIPTOR and removed the call to SwapBuffers(), but nothing gets drawn. How do I do it? By the way, I am using Windows and OpenGL.

I suggest you redraw the GUI whenever something happens, and just redraw it completely. So redraw whenever the mouse moves or keys get hit. You can also filter out events that have no effect on the GUI, but that's up to you. Only drawing when events arrive reduces CPU usage dramatically (unless the user is moving the mouse like crazy).

This is what I do with SDL combined with OpenGL.
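
Roughly like this (untested sketch; gui.handleEvent() and drawGUI() are just placeholders for your own dispatch and draw code):

// SDL 1.2-style event loop: only redraw for events the GUI cares about.
SDL_Event event;
bool guiDirty = false;

while (SDL_PollEvent(&event)) {
    switch (event.type) {
        case SDL_MOUSEMOTION:
        case SDL_MOUSEBUTTONDOWN:
        case SDL_MOUSEBUTTONUP:
        case SDL_KEYDOWN:
        case SDL_KEYUP:
            gui.handleEvent(event);   // placeholder for your event dispatch
            guiDirty = true;
            break;
        default:
            break;                    // events the GUI doesn't care about
    }
}

if (guiDirty) {
    drawGUI();                        // placeholder: redraw the whole GUI
    SDL_GL_SwapBuffers();
    guiDirty = false;
}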

I'm working on a GUI as well, and my code became LONG. Now I have decided to use external text files which define the GUI's content, so I only need a parser and display functions instead of endless code hardcoding button positions etc. The GUI is already taking more time than my game engine ^^ (it's a 2D tile-based game).

PS: what's so bad about delaying 1 ms every frame in-game? I made it an option in this manner:

Normal - Low CPU - VSync

where users can choose one of them... I prefer the latter two, though.

Quote:
Original post by Decrius
I suggest you redraw the GUI whenever something happens, and just redraw it completely.

Yes, that's what I am trying to achieve, except I was only going to draw the component that changes, not because it's better but because it's easier. My event code calls a function in each component that listens for events, so when such an event occurs (say, clicking the mouse), each button gets that event and then calls its draw() function. Now, if I were to completely redraw everything, I would have to know which components are currently on the screen. It can be done, but I would have to change my code a bit.

Quote:
Original post by Decrius
Now I have decided to use external text files which define the GUI's content...

I'm just hardcoding my button positions and such, but I am using an XML file to describe the GUI styles, fonts, etc. Some people told me that it's overkill and that I could just hardcode all that stuff, but I find it much easier. Not only can I easily change something without needing to recompile, but other users can create their own custom GUI theme/style, which is a feature I personally like in a program.
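
Just to give the idea, the style file looks something along these lines (a simplified, made-up example; the element names here are only illustrative, not my real format):

<!-- Illustrative only: a GUI theme described in XML -->
<theme name="default">
    <font name="button" face="Verdana" size="12" color="#FFFFFF" />
    <style component="Button">
        <image state="normal"  src="button_up.png" />
        <image state="hover"   src="button_over.png" />
        <image state="pressed" src="button_down.png" />
    </style>
</theme>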

Quote:
Original post by Decrius
PS: what's so bad about delaying 1 ms every frame in-game?

Well, I haven't even started coding the actual game (I haven't even thought about how I will do it). But I would like to have a function that does all my game rendering and such, and have that function called by a timer 30 times per second (for 30 fps). I don't want to draw my GUI in that function because it won't even be running if the game hasn't started and the user is just in the menu. I want my GUI and game to be separate.
But if that is too hard, then I will just do it all in one place and delay 1 ms or something.

Hi again. I have decided that I could swap the buffers after everything I draw, and if I don't clear the screen I only have to redraw the components that need redrawing. At first I didn't think this would work, but double buffered or not I can still draw only when an event occurs; I just can't draw after I swap buffers, because that's when you get flickering. So all I have to do is put SwapBuffers(hdc) after I draw a component.
Also, is it OK to do drawing in two separate threads? Is there a good way of doing this? I am planning on using a timer to update the drawing of the game, and GUI drawing will be done in the event thread.

EDIT: it doesn't work. All I get is a black screen (from initially clearing the screen). The event functions are called, and they draw stuff and call SwapBuffers, but nothing appears. If it's possible to do it this way (which is what I want), I can upload the full source so you can check it out.

[Edited by - XTAL256 on May 9, 2008 8:27:03 PM]

Quote:
Original post by XTAL256
I thought I would try using a single buffer for the UI just to see what it's like, but I can't get it to work. I removed the PFD_DOUBLEBUFFER flag from the PIXELFORMATDESCRIPTOR and removed the call to SwapBuffers(), but nothing gets drawn. How do I do it? By the way, I am using Windows and OpenGL.


Haven't you forgotten to call glFlush() now that you've removed the buffer swapping?

Hmm, when do you need to use glFlush? And why don't you need to do that when double buffering?
Anyway, I have decided to just do what V-man and Decrius suggested and render every 30 ms or so. I am currently having problems with that, but I should be able to get it sorted out.

EDIT: By the way, glFlush worked for single buffering, thanks. And, strangely, there was no flickering or anything; is that because OpenGL is doing something behind the scenes? Anyway, depending on how things go, I may use single buffering for the GUI and double buffering for the game.

The idea is that the drawing you do in your drawing function does not happen on the screen immediately: OpenGL queues up the commands, and glFlush forces the queued commands to actually be executed. When you are using double buffering you don't need to flush yourself, because the SwapBuffers function does it for you.
That is why, when you use single buffering, you need to do it manually or nothing will be drawn; the commands just stay queued.
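
In code, the end of a frame in each mode looks roughly like this (a sketch, using the clearScreen()/screen.draw() calls from earlier in the thread as stand-ins for whatever you draw):

// Double-buffered pixel format (PFD_DOUBLEBUFFER set):
clearScreen();
screen.draw();
SwapBuffers(hdc);   // implicitly flushes, then presents the back buffer

// Single-buffered pixel format (PFD_DOUBLEBUFFER removed):
clearScreen();
screen.draw();      // draws into the visible buffer
glFlush();          // force the queued GL commands to execute now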

Quote:
Original post by XTAL256
Anyway, I have decided to just do what V-man and Decrius suggested and render every 30 ms or so. I am currently having problems with that, but I should be able to get it sorted out.

OK, I need help [grin]. I am using an SDL timer to render each frame, but for some reason nothing is being drawn (just garbage on the screen).

UINT render(UINT interval, void* param) {
    clearScreen();
    Component::drawScreen();   // Draw the GUI on screen
    SwapBuffers(hdc);
    return interval;
}

thread::Timer timer(render, 30);

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrevInst, LPSTR cmdLine, int wndState) {
    MSG msg;              // Windows message structure
    BOOL done = FALSE;    // Bool variable to exit loop

    // Create our OpenGL window, initiate SDL components and graphics
    try {
        initWindow(false);
        initSDL();
        initGraphics(hdc);
    }
    catch (Error e) {
        displayError(e);
        done = TRUE;
    }
    guiLoadData();
    PlayerScreen screen;
    timer.start();        // <-- I start the timer here
    while (!done) {
        if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
            if (msg.message == WM_QUIT) {
                done = TRUE;
            }
            else {
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
        }
    }
    // shutdown stuff
}

/// Timer.h ///////////////////////////////////////
class Timer {
public:
    SDL_TimerID id;
    unsigned int interval;
    bool isRunning;

    Timer(UINT (*funcPtr)(UINT, void*), int interval, bool run = false);
    ~Timer();

    UINT (*funcPtr)(UINT, void*);
    void start();
};

/// Timer.cpp /////////////////////////////////////
Timer::Timer(UINT (*funcPtr)(UINT, void*), int interval, bool run) {
    this->funcPtr = funcPtr;
    this->interval = interval;
    this->isRunning = run;
    if (run)
        start();
    else
        this->id = NULL;
}

Timer::~Timer() {
    SDL_RemoveTimer(id);
}

void Timer::start() {
    this->id = SDL_AddTimer(interval, funcPtr, NULL);

    if (this->id == NULL)
        throw TimerError("Could not initiate timer");
}


I have tested the timer and it works; it calls render after I start it. And the drawing code works fine if I have it in the while loop (as an 'else' to 'if (PeekMessage...)').
Is it OK to call OpenGL operations from a separate thread/timer?
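
To be clear, this is the variant that does work, with the drawing done inline in the message loop (same as in my first post):

while (!done) {
    if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
        if (msg.message == WM_QUIT) {
            done = TRUE;
        }
        else {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    }
    else {
        clearScreen();
        Component::drawScreen();
        SwapBuffers(hdc);
    }
}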

I just tested the render function running in an SDL_Thread and still nothing, just a blank screen.
OK, I just realised what's happening. When I was doing some debugging before, I noticed that code in the main while loop was being executed before code in the thread. I guessed that the loop was receiving a lot of messages like WM_CREATE (?) and that these had priority over the thread (or had equal priority but were scheduled first). Anyway, I realised that I started the thread before the loop was entered. So although I started it after OpenGL and the window were initialised, I don't actually know what is happening. But when I put timer.start() after the loop, I saw a little flicker of my GUI being drawn as I quit. So when do I actually need to start rendering the screen?
