Archived

This topic is now archived and is closed to further replies.

AHa

OpenGL: How to detach frame rate from refresh rate

Recommended Posts

zedzeek    529
This is OS specific.
In Windows you can disable it in the display properties tab; there's also an extension, WGL_EXT_swap_control (the wglSwapIntervalEXT call - see my site for a demo).
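
For reference, a minimal sketch of using that extension (WGL_EXT_swap_control) on Windows; it assumes an OpenGL rendering context has already been created and made current, and only shows the disable case:

#include <windows.h>
#include <GL/gl.h>

// Function pointer type defined by the WGL_EXT_swap_control extension.
typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

// Call after a GL context is current. Returns false if the driver
// does not expose the extension.
bool DisableVSync()
{
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
    if (!wglSwapIntervalEXT)
        return false;                        // extension not supported
    return wglSwapIntervalEXT(0) != FALSE;   // 0 = do not wait for the vertical retrace
}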

http://members.xoom.com/myBollux

PowerBoyAlfa    122
If you are referring to the fact that in fullscreen exclusive mode your rendering speed matches the refresh rate, the reason is simple: it is the surface option to wait for the vertical retrace before flipping the back buffer to the primary buffer. If you ignore it, your program runs through your code at full speed without stopping, but you get nasty flashing glitches in your graphics (fine for testing). Waiting for the vertical retrace brings your main rendering loop to a screeching halt until the refresh has finished and the surface has flipped. That is okay, however, since once the flip is done your code blasts through the loop until it gets back to the flip call again. The only way to have code run constantly, without stopping for the flip, is to run it in a separate thread.
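
A rough way to see that stall, sketched for a Win32/OpenGL double-buffered window (hDC is a placeholder for the window's device context):

#include <windows.h>
#include <cstdio>

// Time the buffer swap itself. With vsync on, most of the frame time is
// spent inside SwapBuffers waiting for the vertical retrace.
void TimedSwap(HDC hDC)
{
    LARGE_INTEGER freq, before, after;
    QueryPerformanceFrequency(&freq);

    QueryPerformanceCounter(&before);
    SwapBuffers(hDC);                  // blocks here until the retrace when vsync is on
    QueryPerformanceCounter(&after);

    double ms = 1000.0 * (after.QuadPart - before.QuadPart) / freq.QuadPart;
    printf("swap took %.2f ms\n", ms); // close to 1000/refresh rate if rendering is fast
}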

Xtreme    122
Actually, I've noticed a number of things when running Win2K vs. WinMe.

In Win2K, when I run my demo in fullscreen mode, the FPS value cannot pass the refresh rate. In windowed mode, however, it can.

This does not happen under WinMe.
Also, I noticed the FPS is lower in fullscreen than in windowed mode under Win2K. Again, this does not happen in WinMe.

-Xtreme.

Poya    123
Sorry, but I have had this problem for a long time and I think it is related to the same issue. I notice in my games there are only certain frame rates I can get: normally I get ~72 FPS, but there is a point beyond which adding one more poly makes the frame rate drop to 35, then 24, and so on. I heard this has to do with the refresh rate. Could anyone explain what this is and why it happens? Is there a way to force a more gradual drop in frame rate? Oh, and I have noticed that this doesn't really happen in Quake 3.

Thanks

With VSync on you will only get the refresh rate divided by a whole number. With a 60Hz refresh rate you will only get 60 FPS, 30 FPS, 20 FPS, 15 FPS and so on. You can't get a more gradual drop unless you turn VSync off, which is undesirable.
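
To make the arithmetic concrete, a small sketch (assuming a 72 Hz refresh to match the numbers in the post above): with vsync and double buffering, a frame that takes n retrace periods to draw is displayed at refresh/n FPS.

#include <cstdio>

int main()
{
    const double refresh = 72.0;  // Hz; matches the ~72 FPS mentioned above
    // If drawing a frame takes n retrace periods, you see refresh / n FPS.
    for (int n = 1; n <= 4; ++n)
        printf("%d retrace(s) per frame -> %.1f FPS\n", n, refresh / n);
    // Prints 72.0, 36.0, 24.0, 18.0, which is why one extra polygon can knock
    // the rate from 72 straight down to ~36 and then ~24.
    return 0;
}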

Maximus    124
quote:
Original post by Xtreme
In Win2K, when I run my demo in fullscreen mode, the FPS value cannot pass the refresh rate. In windowed mode, however, it can.

Strange - I can run games in fullscreen with vsync disabled and get a higher FPS than my refresh rate. Running Win2K Pro with a 32 MB GeForce SDR and leaked 10.80 drivers.

Null and Void    1088
As was said in the previous post, I get higher numbers than my refresh rate in fullscreen in Win2K as well. Oh, and while I'm on this topic, I read a review of WinME vs. Win2K game performance, and Win2K was faster for almost every game except UT.

"Finger to spiritual emptiness underlying everything." -- How a C manual referred to a "pointer to void." --Things People Said
Resist Windows XP's Invasive Product Activation Technology!
http://www.gdarchive.net/druidgames/

Maximus    124
Put simply, the Unreal engine has problems. The software renderer affects the hardware renderer, which causes some of the slowdowns in D3D and OpenGL. 3D Realms removed the software renderer from Duke Nukem Forever and sped up the hardware renderer by doing so. I sure hope DNF does get released eventually, I soooo want that game! Even more so than the new Doom at the moment.

AHa    122
My reply to all these questions:

First, I have worked with DirectDraw. There it's very simple to detach the frame rate from the refresh rate: you just pass DDFLIP_NOVSYNC to the flip call, and it works fine. I get frame rates above 1000 FPS :) but you get some tearing effects.
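
For context, the DirectDraw call being described looks roughly like this (a sketch assuming a DirectX 7 style flipping chain; lpPrimary is a placeholder name):

#include <windows.h>
#include <ddraw.h>

// lpPrimary: the primary surface of a flipping chain with an attached back buffer.
// By default Flip synchronizes with the vertical retrace; DDFLIP_NOVSYNC asks it
// not to wait (requires driver support).
HRESULT FlipWithoutVSync(IDirectDrawSurface7 *lpPrimary)
{
    return lpPrimary->Flip(NULL, DDFLIP_NOVSYNC);
}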

So I think there is an option to do this in OpenGL as well. I sometimes play Quake, which is OpenGL, and it has a setting to do this.

So I want to know that code :)

Guest Anonymous Poster
I'm not really saying anything new here...

wglSwapIntervalEXT (from the WGL_EXT_swap_control extension) is your answer. wglSwapIntervalEXT(0) disables retrace synchronization and wglSwapIntervalEXT(1) enables it. Whatever you decide to use, wglSwapIntervalEXT _should_ override any driver settings, but don't necessarily expect it to.
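
As an illustration of that caveat, a hedged sketch: request the interval, then read it back with wglGetSwapIntervalEXT (also part of WGL_EXT_swap_control). A setting forced in the driver control panel can still win at swap time regardless of what is returned.

#include <windows.h>

typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);
typedef int  (WINAPI *PFNWGLGETSWAPINTERVALEXTPROC)(void);

void RequestNoVSync()
{
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
    PFNWGLGETSWAPINTERVALEXTPROC wglGetSwapIntervalEXT =
        (PFNWGLGETSWAPINTERVALEXTPROC)wglGetProcAddress("wglGetSwapIntervalEXT");
    if (!wglSwapIntervalEXT || !wglGetSwapIntervalEXT)
        return;                 // extension not available on this driver

    wglSwapIntervalEXT(0);      // ask for no retrace synchronization
    int interval = wglGetSwapIntervalEXT();
    (void)interval;             // 0 here means the driver accepted the request
}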

OpenGL was intended to be hardware-independent, so things like retrace synchronization are handled by the OS; don't expect OpenGL itself to do it. OpenGL is just for drawing, not for managing frame buffers or anything like that.

Personally, I'd rather use a hardware-independent API like OpenGL and not have to worry about setting up structures and querying things to the point that I have 20 pages of code before I can even draw something, as with Direct3D. I may be wrong on this, but if anyone says that DirectX 8 eliminates that, I should point out that Microsoft just wrapped the old stuff, in the same way that the OpenGL utility library does, and that's something that has existed in OpenGL since long before Direct3D.

zedzeek    529
MSDN is not very up to date with OpenGL. Have a look at www.opengl.org or my site under GL extensions.

http://members.xoom.com/myBollux

PowerBoyAlfa    122
Yes, the proper term is "vertical retrace", but there are numerous other related terms, e.g. "refresh rate". It seems like we have been repeating ourselves a lot, so here is how it works. The cathode ray tube (monitor) draws one line at a time, usually in a top-to-bottom motion. It goes fast enough that the human eye sees one steady image, but you will notice that the lower the refresh rate, the more flicker and blur you get (tough on the eyes). It is recommended to run at refresh rates of 75 Hz or higher.

The reason you get "tearing" is that, if you decide NOT to wait for the retrace to finish and return to the top, the image you place in your primary buffer goes to the screen instantly, even if the monitor isn't done drawing the previous frame, so you get part of one image at the top of the screen and part of another at the bottom (ugly).

When you tell your code to wait for "vertical sync", you are waiting for the frame in the primary buffer to be drawn completely and for the CRT to move back up to the top of the screen to start the next frame. At that point the back buffer is flipped to the primary buffer, the CRT begins drawing, and your code is free to put more data in the back buffer. Your CPU is much faster than your monitor, so it finishes everything long before the monitor has drawn the last frame you flipped to the primary buffer. So when your code gets back to the flip function, it waits again until it has permission to flip the buffers. Repeat a million times or so.
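
Put as a sketch (hDC and DrawScene are placeholder names, as before):

#include <windows.h>

void DrawScene();   // placeholder for your rendering code

// A minimal double-buffered loop. With vsync enabled, SwapBuffers is where the
// code sits waiting for the CRT to finish the current frame and retrace.
void RenderLoop(HDC hDC, volatile bool &running)
{
    while (running)
    {
        DrawScene();       // fill the back buffer; the CPU finishes this quickly
        SwapBuffers(hDC);  // waits for the retrace, then the buffers are flipped
    }
}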

Refresh rate is NOT the same as vertical retrace. The refresh rate is basically what it says: the rate at which the vertical retrace occurs (assuming you are talking about the vertical refresh rate). The vertical retrace is the actual event where the electron gun repositions itself back at the top of the frame.
Vertical sync is a term used to denote that you are synchronized with the vertical retrace, that is, you wait for the vertical retrace before doing the flip.

