OpenGL Hybrid vsync

Recommended Posts

Quote:
 Just for the record - for my personal, immediate purposes, the following setup seems to work fine :)

    if (dt > 24) {
        if (vsync)  { wglSwapIntervalEXT(0); vsync = false; }
    } else if (dt < 12) {
        if (!vsync) { wglSwapIntervalEXT(1); vsync = true; }
    }
    SwapBuffers();

 dt being the time between the start of the last render and the start of the current - i.e. one frame of lag.
Is this still helpful if triple-buffering is enabled?

anyone?

Share on other sites
Quote:
 Original post by Prune: Is this still helpful if triple-buffering is enabled?

Define "helpful". In your application, when you try it, do you get better visual results? If so, yes; if not, no.

-me

Share on other sites
What is that code supposed to do? (Fix the frame rate?) And why do you think it works? Depending on the driver settings, wglSwapIntervalEXT may or may not affect v-sync.
Also, as an end user I would not be happy to see an application messing with v-sync like that, since I often want it enabled to avoid screen tearing.

Share on other sites
I don't think that snippet is helpful at all, and it never has been. At least, not without a lot of "ifs" and "whens".

The problem with VSYNC is that it will only allow refresh/N frames per second (so, in the case of a 60Hz refresh, it will allow for 60, 30, 20, 15, 12, 8, ... fps). On the positive side, it avoids tearing artefacts, and for many games 1/2 the refresh rate is still quite acceptable.

Now, it may happen that your rendering is a bit too slow, running at maybe 59 fps, which would be acceptable, but due to VSYNC it will drop to 30 fps, which is not acceptable (say, in a fast-paced first-person shooter).

This code snippet tries to deal with the problem by collecting some more or less reliable data (the last frame time), comparing it to a hardcoded value, and turning VSYNC on/off on the fly.

Here is the first flaw already. Not only is 24 ms a really weird frame time (it corresponds to something like 42 fps, and few if any devices have a refresh rate like that), but you also don't really know the real refresh rate, so hardcoding it is a kind of stupid idea. Typical monitors have refresh rates ranging from 50 to 120 Hz, with maybe a dozen steps in between. So... 24 may be good for one particular monitor, but that's about it.
Also, different games have different fps needs, so regardless of the actual hardware, this number may not be the one you want. For a top-down strategy game, 15 fps may be perfectly acceptable. For a fast first-person shooter, 60 fps may not be enough.
Plus, measuring the frame time in a reliable and precise way is not quite trivial either. And whatever you measure is (obviously) after the fact. It may be better to avoid the problem in the first place.

I think the right solution here is to let the user choose. Usually I hate it when people just play the "let the user choose" card, but in this case, I think it is really justified.
Not only is this much less work and trouble, but it also is a lot more reliable. And, it accounts for the fact that different people are... well... different. Some people maybe don't mind a bit of tearing but really hate low frame rates. And then, tearing might annoy the hell out of others. You don't know.
Lastly, you don't even know if you can change VSYNC at all. Most drivers allow the user to turn VSYNC on/off globally, so whatever you do might not be good for anything after all.

Share on other sites
Quote:
 Just for the record - for my personal, immediate purposes, the following setup seems to work fine :)

    if (dt > 24) {
        if (vsync)  { wglSwapIntervalEXT(0); vsync = false; }
    } else if (dt < 12) {
        if (!vsync) { wglSwapIntervalEXT(1); vsync = true; }
    }
    SwapBuffers();

 dt being the time between the start of the last render and the start of the current - i.e. one frame of lag.
That is very interesting. I asked the same question a few months ago with no answers, and I came up with almost the same idea, but for some reason I could never get it to work. V-sync never got switched on, or never got switched off, no matter what values I used in the else-if conditions (I also used two different values to introduce some kind of hysteresis); it just didn't work.
So I decided to go the user-selection way instead: the user can switch to a lower resolution, decide how many particles are drawn, and so on.

Share on other sites
Quote:
 Original post by samoth: This code snippet tries to deal with the problem by collecting some more or less reliable data (the last frame time), comparing it to a hardcoded value, and turning VSYNC on/off on the fly.

Of course, it is trivial to extend this to some statistical measure relying on a few past frames rather than a single frame.

Quote:
 Here is the first flaw already. Not only is 24 ms a really weird frame time (it corresponds to something like 42 fps, and few if any devices have a refresh rate like that), but you also don't really know the real refresh rate, so hardcoding it is a kind of stupid idea.

I am myself very curious about this choice. As I wrote in the first post, I copied this snippet from elsewhere. The following comment is one of the replies on that forum:
Quote:
 Mikkel Gjoel: you are right, your snippet works very well! This is really nice to the eye, even with black+white vertical stripes in variable high-speed horizontal scroll, my worst case as far as display refresh is concerned. I don't get why your first threshold value works so well, but I could not get better results with a smaller value. However, an 85 Hz monitor would mean around 11-12 milliseconds, not 24, right?

There was no explanation in the reply, though; I posted in that thread myself but got no answer of any sort.
I cannot begin to imagine the reason for the choice, whether latencies in wglSwapIntervalEXT taking effect come into play, or what... I'd love to hear some suggestions. Perhaps the choice is simply empirical, but given that a different random user also found this to be the smallest value that made a difference, it's unlikely that the writer of the snippet had some unusual refresh rate.

Quote:
 And, whatever you measure is (obviously) after the fact.

That's hardly specific to this issue. Consider a physics simulation where you're determining the delta-t: there is at least a one-frame lag, and some smoothness of the dt variation over most frames is always assumed, since you cannot predict the discontinuous events that would cause a sudden change (they usually result from user action or arise from system complexity). This is a general aspect of any feedback-based controller, where the measured variable is not necessarily time but could be temperature or anything else.

Quote:
 I think the right solution here is to let the user choose. Usually I hate it when people just play the "let the user choose" card, but in this case, I think it is really justified. Not only is this much less work and trouble, but it also is a lot more reliable. And, it accounts for the fact that different people are... well... different. Some people maybe don't mind a bit of tearing but really hate low frame rates. And then, tearing might annoy the hell out of others. You don't know. Lastly, you don't even know if you can change VSYNC at all. Most drivers allow the user to turn VSYNC on/off globally, so whatever you do might not be good for anything after all.

I'm lucky to be writing for a hardware and software platform that I specify (touchscreen kiosks), which removes these concerns. But even someone without that privilege can provide the user with such an option in addition to always-on or always-off vsync, maybe even with a tunable threshold dt. (By the way, the NVIDIA driver lets one force vsync on or off, but it also has a setting that lets the application select it.) So I would really like to get a bit deeper into this issue in order to get the best results.

This algorithm is, as you explained, geared towards preventing the halving of the framerate in cases where some portion of frames might slightly exceed one refresh interval. The problem, of course, is that those frames will appear with tearing, so it's a specific compromise between the lag until the next displayed frame and visual artifacts.
This got me thinking about the other solution, triple buffering, which presents a different compromise: adding up to an extra frame of lag to act as a safety margin for the cases where some frames take longer than a vertical retrace. But wouldn't it make sense, in cases where the draw takes less than half of a vertical refresh, to dynamically switch to double buffering and thus improve latency? This would be somewhat analogous to dynamically toggling vsync as in the snippet above, and the two could be combined. But how can one turn triple buffering on/off from the application, given that there seems to be no WGL/GLX extension for it as there is for vsync? Does NVIDIA have any way to ask the driver for it programmatically? Worst case, I'd imagine one could manually manage two back buffers in the application and turn triple buffering off in the driver...

Then there is the question of how NVIDIA's driver setting of "Maximum pre-rendered frames" affects the above considerations...

Share on other sites
I'd love some comments on the triple-buffering stuff :)
