
Member Since 08 Aug 2005
Offline Last Active Mar 30 2015 10:46 PM

Posts I've Made

In Topic: Display Loop with Win32

01 December 2013 - 12:39 PM

Think I found my problem!


In my display() function, I was calling glGetUniformLocation() every frame for the projection, world, and texture uniforms.  That seems to be what blocks, not glBindBuffer() ( I had been overlooking the former inside the profiled region ).


If I instead cache the uniform locations at init, right after linking the shader, I no longer stall in my display() function.  Now SwapBuffers() is what takes the ~16.6 ms, as expected.
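The caching fix described above can be sketched roughly like this. To keep the sketch self-contained (no GL context), the driver call is injected as a function object; in the real code that lookup would be glGetUniformLocation(program, name). The class name and interface are illustrative, not taken from the original post.

```cpp
#include <functional>
#include <initializer_list>
#include <string>
#include <unordered_map>

// Resolve uniform locations once at init; per-frame queries hit only the map.
class UniformCache {
public:
    using Lookup = std::function<int(const std::string&)>;

    explicit UniformCache(Lookup lookup) : lookup_(std::move(lookup)) {}

    // Call once, right after linking the shader program.
    void prime(const std::initializer_list<std::string>& names) {
        for (const auto& n : names)
            locations_[n] = lookup_(n);
    }

    // Per-frame lookup -- no driver round trip, so nothing to stall on.
    int location(const std::string& name) const {
        auto it = locations_.find(name);
        return it == locations_.end() ? -1 : it->second;
    }

private:
    Lookup lookup_;
    std::unordered_map<std::string, int> locations_;
};
```

In display() you would then use cache.location("uProjection") (or similar) instead of calling back into the driver each frame.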


I am a little surprised that getting the uniform location would block like that, but maybe it's documented somewhere...?

In Topic: Display Loop with Win32

01 December 2013 - 10:21 AM

Thank you Spiro, that clarification helps a lot.


The only thing that doesn't make sense then is the profiling I am doing.  I built a little profiling class using QueryPerformanceCounter(), and I am using it to record time taken for blocks of code.  My loop looks like this:

    if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
        if (msg.message == WM_QUIT)
            break;  // remainder of the loop elided in the original post

Now I have a start/stop timer block around the display() function and the SwapBuffers() function.  If the main-thread wait occurs inside SwapBuffers(), I would expect the SwapBuffers() timer to read around 16 ms ( my scene is very light, just two quads being drawn ).  But from my numbers, the SwapBuffers() call takes almost no time and the display() call takes around 16.6 ms.  That's why it seemed like something in my display() function was blocking, which led me to glBindBuffer().


But maybe I have an error in my timer class, or I am making an invalid assumption here?

In Topic: Display Loop with Win32

01 December 2013 - 12:18 AM

Alright, dug a little deeper:


I did some more timing to see where the block occurs in my display function.  It's not in glClear(), glUseProgram(), or any glUniform...() calls.  It appears that the wait happens when I call glBindBuffer() to activate my vertex data.


This to me sounds like the buffer resource is being locked internally while the GPU is using it to render.  This is actually undesirable for me, as I definitely want to be able to start writing draw commands for my current frame while the GPU is working on the last frame.  If I'm right about this, is there any way to avoid the stall?  Since I don't need to modify that data, it seems like there ought to be a way to use it for rendering without trying to lock it...

In Topic: Display Loop with Win32

30 November 2013 - 07:38 PM

Well, it's not that I want to control the VSYNC itself, it's that I want to control when my thread is blocked by it ( or at least know exactly when that is going to occur ).  And that's the funny thing; I assumed it would occur on SwapBuffers(), but based on my profiling it seems to actually happen somewhere in the OpenGL calls.  Really I just need to know which function is going to block, as that will affect how I synchronize with other systems in the simulation.

In Topic: Display Loop with Win32

30 November 2013 - 04:37 PM

Thanks for the responses.  An update:


- I do have a double-buffered render context; I apologize for not mentioning that initially.


- I tried removing the glFlush() call, which seemed to have no effect at all.  This surprised me, as I'm used to having to flush the graphics pipe to make sure accumulated commands actually get sent.  Is that not required in OpenGL?


- I tried setting wglSwapInterval( 0 ) as you suggested, Spiro, and that definitely behaves more like I was expecting.  Now I get far more than 60 iterations per second, with frame averages of around 0.15 ms.


So this makes sense, assuming that some process is waiting for the next VSYNC to actually swap the display buffers ( with the interval set to 1 ).  But what I am wondering is: where exactly is my thread getting blocked?  From my profiling it seems to happen during my display() function ( I could narrow that down further ... ), which is entirely OpenGL calls ( the SwapBuffers() call comes after ).  Does OpenGL just wait on the next call if a buffer swap has been requested that hasn't completed yet?


Just for clarity: the reason I'd like to know is so that I can control the VSYNC wait myself.  I actually want to be limited to 60 fps, but I'd like to be able to manage where that wait occurs in my render loop.
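One way to get the control described above: leave the swap interval at 0 so the driver never blocks, and insert an explicit wait at whatever point in the loop suits the simulation. The sketch below paces the loop to a target rate; note that unlike a real VSYNC wait it only limits frequency and does not synchronize with the display's refresh, so it cannot prevent tearing. The class and its interface are illustrative, not from the original post.

```cpp
#include <chrono>
#include <thread>

// Sleeps each frame until the next slot on a fixed schedule (e.g. 60 Hz),
// so the wait happens exactly where wait() is called in the render loop.
class FrameLimiter {
public:
    explicit FrameLimiter(double target_fps)
        : period_(1.0 / target_fps),
          next_(std::chrono::steady_clock::now()) {}

    // Call once per frame at the point where the block should occur.
    void wait() {
        next_ += std::chrono::duration_cast<std::chrono::steady_clock::duration>(
            std::chrono::duration<double>(period_));
        std::this_thread::sleep_until(next_);
    }

private:
    double period_;
    std::chrono::steady_clock::time_point next_;
};
```

With this, the loop could run display(), then SwapBuffers(), then limiter.wait() (or the wait could go before display() instead), making the blocking point explicit rather than hidden inside a driver call.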