draconar

Why is vsync so costly?



Got myself wondering. I have a fair video card - a Radeon 4670, which is not top of the line, but I can play the games I want - and it seems that, together with anti-aliasing, vsync is the second most costly operation for this video card (at usual settings, that is). Why is that? I can understand why anti-aliasing is costly to render... but what is up with vsync? Math-inspired answers are welcome. :D

It's costly in time because it waits until your monitor can display the next image. Most monitors don't refresh faster than 60 times per second (that's the standard for LCDs, IIRC), so your game will be capped at 60fps.

This, by the way, is almost always desired, because if you present frames faster than the monitor can handle you run into a phenomenon called "tearing". Basically, the monitor starts drawing a frame, but before it can finish you've changed the framebuffer, so as the monitor continues it's now using newer data for the rest of the frame. The result is that parts of several different frames end up on screen at once.

If you're running above 60fps, there's really no point in trying to profile and optimize your code anyway. It's great that you're already looking at understanding a profiler, but wait to use it until you actually have a need.

If you're just curious where your app is spending its time, disable vsync and run the profiler. Again, it's not really useful data unless your game is running slow; if it's already fast, there's nothing to do [smile]
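If you want to see that in numbers, here's a small runnable sketch (my illustration, assuming a 60Hz display: a 1ms sleep stands in for actual rendering, and sleeping until the next 16.7ms boundary stands in for a blocking buffer swap). A profiler pointed at this would blame the "vsync wait" line, exactly the way it blames Present() in a real app:

[code]
#include <chrono>
#include <cstdio>
#include <thread>

int main() {
    using namespace std::chrono;
    const auto frameBudget = microseconds(16667);      // 1/60th of a second
    auto nextVsync = steady_clock::now() + frameBudget;

    for (int frame = 0; frame < 10; ++frame) {
        auto t0 = steady_clock::now();
        std::this_thread::sleep_for(milliseconds(1));  // stand-in: "rendering" takes 1ms
        auto t1 = steady_clock::now();
        std::this_thread::sleep_until(nextVsync);      // stand-in: the vsync wait
        auto t2 = steady_clock::now();
        nextVsync += frameBudget;

        std::printf("render %5.2fms, vsync wait %5.2fms\n",
                    duration<double, std::milli>(t1 - t0).count(),
                    duration<double, std::milli>(t2 - t1).count());
    }
}
[/code]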

-me

Because it makes your application wait.

60fps = 16.6ms per frame.

Rendering = 10ms
Wait for vsync = 6.6ms

Rendering = 1ms
Wait for vsync = 15.6ms // ~15 times "costlier" than the rendering itself

Quote:
Original post by draconar
Got myself wondering. I have a fair video card - a Radeon 4670, which is not top of the line, but I can play the games I want - and it seems that, together with anti-aliasing, vsync is the second most costly operation for this video card (at usual settings, that is).

Why is that? I can understand why anti-aliasing is costly to render... but what is up with vsync?

Math-inspired answers are welcome. :D


V-sync itself is fairly cheap; you have to consider what it does, though. It synchronizes the graphics card's updates with those of the monitor: if your monitor updates 60 times per second (most do by default), you will never get more than 60fps with v-sync active. (It can cause you to get less, though.)
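The "less" case is worth spelling out: with plain double buffering, a finished frame can only be flipped to the screen on a refresh boundary, so if rendering takes even slightly longer than 16.7ms you have to wait for the *next* boundary and drop straight to 30fps. A runnable sketch of that arithmetic (my illustration, assuming a 60Hz display and no triple buffering):

[code]
#include <cmath>
#include <cstdio>
#include <initializer_list>

int main() {
    const double refreshMs = 1000.0 / 60.0;                  // 16.7ms per refresh
    for (double renderMs : {5.0, 15.0, 17.0, 25.0, 40.0}) {
        // A frame can only be shown on a refresh boundary, so count how
        // many boundaries pass before the frame is ready to flip.
        double refreshes = std::ceil(renderMs / refreshMs);
        std::printf("render %4.0fms -> %4.1f fps with vsync\n",
                    renderMs, 60.0 / refreshes);
    }
}
[/code]

So a 15ms frame still gives 60fps, but a 17ms frame gives 30fps, and a 40ms frame gives 20fps.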

VSync itself isn't costly; it's just doing what it's meant to do - pausing your application when required so that frames are displayed at the monitor's refresh frequency (usually 60Hz).

I don't recommend using VSync. You might not notice a visual difference, but a game will definitely feel a lot smoother at an FPS higher than 60, because it will be able to handle input and logic updates a lot more frequently.

Quote:
Original post by nullsquared
VSync itself isn't costly; it's just doing what it's meant to do - pausing your application when required so that frames are displayed at the monitor's refresh frequency (usually 60Hz).

I don't recommend using VSync. You might not notice a visual difference, but a game will definitely feel a lot smoother at an FPS higher than 60, because it will be able to handle input and logic updates a lot more frequently.


You can run input and logic at a frequency higher than 60Hz regardless of whether vsync is on or off. However, if you don't flip at vsync points you will get tearing artifacts, which (IMO) look terrible.
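Even single-threaded, the usual trick is a fixed-timestep loop: accumulate real elapsed time and run as many logic steps as fit, then render once. Here's a runnable sketch (my illustration, not anyone's actual engine code: logic steps at a hypothetical 120Hz, and a 16.7ms sleep stands in for a Present() blocked on 60Hz vsync):

[code]
#include <chrono>
#include <cstdio>
#include <thread>

int main() {
    using clock = std::chrono::steady_clock;
    const double logicDt = 1.0 / 120.0;   // logic stepped at 120Hz
    double accumulator = 0.0;
    auto previous = clock::now();

    for (int frame = 0; frame < 60; ++frame) {
        auto now = clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        int steps = 0;
        while (accumulator >= logicDt) {  // catch up: ~2 steps per 60Hz frame
            ++steps;                      // (real input/logic updates go here)
            accumulator -= logicDt;
        }

        // Stand-in for draw + Present() blocking on a 60Hz vsync.
        std::this_thread::sleep_for(std::chrono::microseconds(16667));
        std::printf("frame %2d: %d logic steps\n", frame, steps);
    }
}
[/code]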

Why do intersections make it take so much longer to get to work? What's so hard about driving through the intersection?

Answer: nothing. It's quite a short distance. But there's this thing called a traffic light that makes you wait for a green signal - and for good reason; otherwise, things could get quite messy.

Vsync means "wait for the monitor to finish refreshing all the pixels before sending it the next image".

If you turn vsync off, parts of different frames can get drawn during the same refresh. That's usually not a major issue. But on the other hand, there's no reason you need to draw any faster than the monitor can display. It won't help; the monitor is going as fast as it can.

And make no mistake: in computer terms, a monitor is a slow device. In the time that my screen updates a single pixel, my CPU performs more than 20 cycles, on each of 4 cores. (To put numbers on it: a 1920x1080 screen at 60Hz pushes about 124 million pixels per second, roughly 8ns per pixel, which is around 24 cycles of a 3GHz core.)

Quote:
Original post by RDragon1
You can run input and logic at a frequency higher than 60Hz regardless of whether vsync is on or off. However, if you don't flip at vsync points you will get tearing artifacts, which (IMO) look terrible.


How so? Do you mean if you use a separate thread for logic/input? Indeed, but if your application is single-threaded, then that thread will have to wait for VSync.

Also, tearing is solved by double-buffering. I don't think anybody draws directly to the screen anymore these days (which is when tearing would occur if the FPS is over the monitor's refresh rate).

Quote:
Original post by nullsquared
Also, tearing is solved by double-buffering. I don't think anybody draws directly to the screen anymore these days (which is when tearing would occur if the FPS is over the monitor's refresh rate).
No. As an example: write a D3D app that does nothing more than call Present() with v-sync disabled (with no Clear()), and use the debug runtimes. You'll see a hell of a lot of tearing; each magenta or green bar is a new frame.
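For anyone who wants to try it, here's a minimal sketch of that kind of app (my reconstruction, not Evil Steve's code - it assumes the legacy DirectX SDK with the debug runtime enabled in the DirectX control panel, links against d3d9.lib, and omits all error checking):

[code]
#include <windows.h>
#include <d3d9.h>

LRESULT CALLBACK WndProc(HWND h, UINT m, WPARAM w, LPARAM l) {
    if (m == WM_DESTROY) { PostQuitMessage(0); return 0; }
    return DefWindowProc(h, m, w, l);
}

int WINAPI WinMain(HINSTANCE inst, HINSTANCE, LPSTR, int show) {
    WNDCLASS wc = {0};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = inst;
    wc.lpszClassName = TEXT("TearDemo");
    RegisterClass(&wc);
    HWND hwnd = CreateWindow(TEXT("TearDemo"), TEXT("Tearing demo"),
                             WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                             640, 480, NULL, NULL, inst, NULL);
    ShowWindow(hwnd, show);

    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    D3DPRESENT_PARAMETERS pp = {0};
    pp.Windowed             = TRUE;
    pp.SwapEffect           = D3DSWAPEFFECT_DISCARD;
    pp.hDeviceWindow        = hwnd;
    pp.PresentationInterval = D3DPRESENT_INTERVAL_IMMEDIATE; // v-sync off

    IDirect3DDevice9* dev = NULL;
    d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
                      D3DCREATE_SOFTWARE_VERTEXPROCESSING, &pp, &dev);

    MSG msg = {0};
    while (msg.message != WM_QUIT) {
        if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        } else {
            // No Clear(): the debug runtime fills the discarded back buffer
            // with alternating magenta/green, so each tear line is obvious.
            dev->Present(NULL, NULL, NULL, NULL);
        }
    }
    dev->Release();
    d3d->Release();
    return 0;
}
[/code]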

Quote:
Original post by Evil Steve
No. As an example: write a D3D app that does nothing more than call Present() with v-sync disabled (with no Clear()), and use the debug runtimes. You'll see a hell of a lot of tearing; each magenta or green bar is a new frame.


"In computer graphics, double buffering is a technique for drawing graphics that shows no (or less) flicker, tearing, and other artifacts."
