Switching PresentInterval after window creation

Oogst
Is there some way to switch the present interval dynamically after the window has already been created? I cannot find a way to do that in DirectX9. I can only find how to set it at startup. I know this can be done in OpenGL on Windows, and even in DirectX9 on the Xbox 360. So is there something similar in DirectX9 on PC?

Searching the forum, I found [url="http://www.gamedev.net/topic/424071-mdx-turning-vsync-onoff-without-recreating-swap-chain/"]this old topic[/url], but it suggests I need to reset the device to switch VSync. Doing this takes some time, is that correct? Or can it be done at any point between two frames without causing a frame drop because of the reset?

I know that quite a few games these days have VSync on by default and then dynamically turn it off for a short while if the framerate drops too far. When the framerate is back to normal, VSync is turned on again. How is this done on PC in DirectX9? Or is this only done on consoles?

MJP
To change VSYNC in DX9 you need to reset the device, which is really slow since you have to release all of your non-managed resources. So you can't change it frame-to-frame to get the "soft VSYNC" effect that you're referring to. In DX10/DX11 the sync interval is a parameter that you pass to Present, so it is possible to change it each frame. However, even with that it's still difficult to do a soft VSYNC, since you don't have the same amount of low-level timing info and control that you do on consoles. I'm not sure if any PC games have actually shipped with it, but if they did they were surely using DX10 or DX11. (The one notable exception is RAGE, which had Nvidia add a soft VSYNC option into the driver for them.)
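For reference, this is roughly what the per-frame interval looks like in DX10/DX11 (a minimal sketch, assuming a hypothetical swap chain pointer created elsewhere; not code from this thread):
[code]
#include <dxgi.h>

// Unlike D3D9's fixed D3DPRESENT_PARAMETERS::PresentationInterval,
// DXGI takes the sync interval as an argument to Present, so VSync
// can be toggled between any two frames without a device reset.
HRESULT PresentFrame(IDXGISwapChain* swapChain, bool vsync)
{
    // 1 = wait for one vertical blank, 0 = present immediately
    return swapChain->Present(vsync ? 1 : 0, 0);
}
[/code]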

Erik Rufelt
On Vista and Win7 D3D9Ex has a WaitForVBlank method, which I guess does the same as the DXGI version, though I haven't tried it.
You could also create a DirectDraw device that you use only for VSync. Check out WaitForVerticalBlank ([url="http://msdn.microsoft.com/en-us/library/aa911354.aspx"]http://msdn.microsoft.com/en-us/library/aa911354.aspx[/url]) or GetVerticalBlankStatus.
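If anyone wants to try the D3D9Ex route, it might look something like this (a rough, untested sketch, as noted above; assumes the device was created through Direct3DCreate9Ex with D3DPRESENT_INTERVAL_IMMEDIATE, so the vblank wait is done manually):
[code]
#include <d3d9.h>

// Present immediately, then optionally block until the next vertical
// blank. Only IDirect3DDevice9Ex has WaitForVBlank; the plain
// IDirect3DDevice9 interface does not.
void PresentSoftVSync(IDirect3DDevice9Ex* device, bool waitForVBlank)
{
    device->PresentEx(NULL, NULL, NULL, NULL, 0);
    if (waitForVBlank)
        device->WaitForVBlank(0); // 0 = swap chain index
}
[/code]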

Oogst
Okay, sounds like my best option then is to just not support dynamic vsync switching in my engine and move the vsync option from the in-game settings menu to the pre-game launcher.

Kind of funny how some features are impossible in one API, and no problem in another. Same OS, same hardware, but DirectX 9 can't do it and OpenGL can. Guess DX10/11 do have some useful improvements after all. ;)

Thanks for the help, folks!

Hodgman
[quote name='MJP' timestamp='1345312105' post='4970875']
However even with that it's still difficult do a soft VSYNC, since you don't have the same amount of low-level timing info and control that you do on consoles.
[/quote]Does DX11 have a GPU timestamp read-back API, and a requirement for GPUs to support it? DX9 is lacking this, and GL has it via an extension (not sure which GPUs do and don't support the extension though).
Being able to stamp your frames to get a value on GPU processing time would be a great base-level API requirement ([i]like on consoles[/i]). Even if read-back is delayed, you can use a rolling average to get yourself out of trouble a few frames after vsync starts being consistently harmful.
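(To illustrate the idea: a minimal sketch of the rolling-average decision, assuming you can read back some GPU frame time each frame; the window size and safety margin are invented for illustration:)
[code]
// Rolling average of GPU frame time driving a soft-VSync decision.
// WINDOW and the 0.95 margin are made-up values.
const int WINDOW = 16;
static double samples[WINDOW] = { 0 };
static int sampleIndex = 0;

bool ShouldEnableVSync(double gpuFrameTime, double refreshPeriod)
{
    samples[sampleIndex] = gpuFrameTime;
    sampleIndex = (sampleIndex + 1) % WINDOW;

    double sum = 0.0;
    for (int i = 0; i < WINDOW; ++i)
        sum += samples[i];

    // Keep VSync on only while the GPU comfortably beats the refresh rate.
    return (sum / WINDOW) < refreshPeriod * 0.95;
}
[/code]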

Oogst
[quote name='Hodgman' timestamp='1345447620' post='4971365']Does DX11 have a GPU timestamp read-back API, and a requirement for GPUs to support it? ...
[/quote]
Doesn't DirectX have some equivalent of OpenGL's fences? Fences are not exactly what you describe, but they do give some nice information on where the video card is at the moment.
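(For reference, the closest D3D9 analogue to a GL fence is an event query; a minimal sketch, with all names invented for illustration:)
[code]
#include <d3d9.h>

// D3D9 event queries mark a point in the command stream, much like a
// GL fence: GetData returns S_OK once the GPU has passed that point.

IDirect3DQuery9* CreateFence(IDirect3DDevice9* device)
{
    IDirect3DQuery9* query = NULL;
    device->CreateQuery(D3DQUERYTYPE_EVENT, &query);
    return query;
}

void InsertFence(IDirect3DQuery9* query)
{
    query->Issue(D3DISSUE_END); // mark this point in the command stream
}

bool FenceSignaled(IDirect3DQuery9* query)
{
    // D3DGETDATA_FLUSH makes sure the commands actually get submitted.
    return query->GetData(NULL, 0, D3DGETDATA_FLUSH) == S_OK;
}
[/code]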

Hodgman
[quote name='MJP' timestamp='1345449778' post='4971374']
There are timestamp queries, which are actually available in DX9 as well.
[/quote]You've just blown my mind!
I swear that last time I looked at the local version of [url="http://msdn.microsoft.com/en-us/library/windows/desktop/bb147308(v=vs.85).aspx"]this page[/url] (inside the DirectX SDK's installed documentation), there was no timestamp query.
I was still under the impression that the only method of timing GPU usage under DX9 was the non-real-time, CPU-blocking, flush & finish method [url="http://msdn.microsoft.com/en-us/library/windows/desktop/bb172234(v=vs.85).aspx"]described here[/url].

On consoles, I basically use a ring buffer of timestamp queries to detect bad performance; the major check uses the deltas to calculate a rolling average of GPU frame time, to see if GPU performance is consistently bad. It seems I can implement this on DX9 as well?
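(A minimal sketch of what that might look like in D3D9, using the TIMESTAMP, TIMESTAMPFREQ and TIMESTAMPDISJOINT query types; all names and the buffer depth are invented for illustration, and error handling is omitted:)
[code]
#include <d3d9.h>

// Ring buffer of D3D9 timestamp queries, giving a GPU frame time with a
// few frames of latency, suitable for feeding a rolling average.
const int NUM_FRAMES = 4; // enough latency that GetData never stalls

struct GpuTimer
{
    IDirect3DQuery9* disjoint[NUM_FRAMES]; // D3DQUERYTYPE_TIMESTAMPDISJOINT
    IDirect3DQuery9* freq[NUM_FRAMES];     // D3DQUERYTYPE_TIMESTAMPFREQ
    IDirect3DQuery9* begin[NUM_FRAMES];    // D3DQUERYTYPE_TIMESTAMP
    IDirect3DQuery9* end[NUM_FRAMES];
    int frame;
};

void GpuTimerInit(GpuTimer* t, IDirect3DDevice9* device)
{
    for (int i = 0; i < NUM_FRAMES; ++i)
    {
        device->CreateQuery(D3DQUERYTYPE_TIMESTAMPDISJOINT, &t->disjoint[i]);
        device->CreateQuery(D3DQUERYTYPE_TIMESTAMPFREQ, &t->freq[i]);
        device->CreateQuery(D3DQUERYTYPE_TIMESTAMP, &t->begin[i]);
        device->CreateQuery(D3DQUERYTYPE_TIMESTAMP, &t->end[i]);
    }
    t->frame = 0;
}

void GpuTimerBeginFrame(GpuTimer* t)
{
    int i = t->frame % NUM_FRAMES;
    t->disjoint[i]->Issue(D3DISSUE_BEGIN);
    t->begin[i]->Issue(D3DISSUE_END); // plain timestamps only take D3DISSUE_END
}

// Call after Present. Returns the GPU time (seconds) of the frame issued
// NUM_FRAMES ago, or a negative value if no result is available yet.
double GpuTimerEndFrame(GpuTimer* t)
{
    int i = t->frame % NUM_FRAMES;
    t->end[i]->Issue(D3DISSUE_END);
    t->freq[i]->Issue(D3DISSUE_END);
    t->disjoint[i]->Issue(D3DISSUE_END);
    ++t->frame;

    if (t->frame < NUM_FRAMES)
        return -1.0; // ring buffer not full yet

    int j = t->frame % NUM_FRAMES; // oldest in-flight slot
    BOOL wasDisjoint = TRUE;
    UINT64 frequency = 0, t0 = 0, t1 = 0;
    if (t->disjoint[j]->GetData(&wasDisjoint, sizeof(wasDisjoint), 0) != S_OK ||
        t->freq[j]->GetData(&frequency, sizeof(frequency), 0) != S_OK ||
        t->begin[j]->GetData(&t0, sizeof(t0), 0) != S_OK ||
        t->end[j]->GetData(&t1, sizeof(t1), 0) != S_OK ||
        wasDisjoint || frequency == 0)
    {
        return -1.0; // not ready yet, or the interval was disjoint
    }
    return (double)(t1 - t0) / (double)frequency;
}
[/code]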

MJP
[quote name='Hodgman' timestamp='1345451449' post='4971381']On consoles, I basically use a ring buffer of timestamp queries to detect bad performance ... It seems I can implement this on DX9 as well?
[/quote]

Yeah, I had thought they were new for DX10, but someone else pointed out to me that DX9 has them as well. In my experience the query works pretty much the way you'd expect, which of course means it has all of the usual latency problems with queries, as well as the "just what exactly am I measuring?" problem you have with reading GPU timestamps.

