Four

Is it pointless to draw 512 times/sec ...


Four    138
Hi, is it pointless to draw 512 times per second when a monitor is set to, let's say, 120 Hz? If so, does anyone do that? I see games where the framerate is shown and it's higher than the refresh rate set on my monitor. I'm new to graphics programming. Thanks in advance!

bytecoder    100
To answer your question: most of the time it isn't necessary. Those games are probably showing how many times they update per second, not how many times they draw per second.

Since most people call the update and render code at the same time, updating is slowed down to however fast you can render. The best solution to get a higher update rate is to separate the updating and rendering code.

EDIT:
By separating the updating and rendering code, I mean that you should call the updating code as often as possible, and only call the rendering code, say, 60 times per second.
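The loop bytecoder describes can be sketched with a simulated clock so it runs deterministically; all names (`run`, `RENDER_HZ`, `UPDATE_COST`) and the 1/512 s update cost are made up for illustration, not from any real engine:

```python
# Decoupled update/render loop, driven by a fake clock.

RENDER_HZ = 60          # render at most 60 times per second
UPDATE_COST = 1 / 512   # pretend each update iteration takes 1/512 s

def run(sim_seconds=1.0):
    """Count updates vs renders over sim_seconds of simulated time."""
    now = 0.0
    next_render = 0.0
    updates = renders = 0
    while now < sim_seconds:
        updates += 1                 # game logic runs every iteration
        if now >= next_render:       # rendering is throttled to RENDER_HZ
            renders += 1
            next_render += 1 / RENDER_HZ
        now += UPDATE_COST           # advance the simulated clock
    return updates, renders

updates, renders = run()
print(updates, renders)  # 512 updates, but only 60 renders
```

The game logic runs 512 times in the simulated second while only 60 frames are rendered, which is what a framerate counter that measures updates (rather than draws) would report.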

Trap    684
You first need to know how displaying a rendered frame works.

There is RAM on the graphics card whose contents get transmitted to the monitor row by row. The refresh rate of the monitor is the number of times per second the complete RAM is sent to the monitor.

Now you have two options: either you only change the contents of the RAM at the exact moment its transmission is finished (VSYNC on), or you just overwrite it any time you want (VSYNC off).

If you overwrite it during transmission, the monitor displays a part of the old content together with a part of the new content.
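That row-by-row behaviour can be shown with a toy model; the buffer of 10 rows, the `scanout` function, and the row-4 swap point are all illustrative:

```python
# Toy model of scanout: the monitor reads the framebuffer row by row;
# overwriting it mid-scan (vsync off) mixes old and new rows (tearing).

ROWS = 10

def scanout(framebuffer, overwrite_at=None, new_frame=None):
    """Read rows top to bottom; optionally overwrite the buffer mid-scan."""
    displayed = []
    for row in range(ROWS):
        if overwrite_at is not None and row == overwrite_at:
            framebuffer = new_frame   # buffer swapped during transmission
        displayed.append(framebuffer[row])
    return displayed

old = ["old"] * ROWS
new = ["new"] * ROWS

# VSYNC on: the buffer only changes between scanouts -> a clean frame.
clean = scanout(old)

# VSYNC off: swapping at row 4 shows a "tear" between old and new content.
torn = scanout(old, overwrite_at=4, new_frame=new)
print(torn)  # ['old', 'old', 'old', 'old', 'new', 'new', ...]
```

The torn frame is exactly the "part of the old content together with part of the new content" described above, with the tear line at the row where the swap happened.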

mattnewport    1038
Short answer: yes, it's pointless.

Longer answer: it's not uncommon for games to run at greater than the refresh rate, mainly as a benchmarking aid. If you want to know how powerful a particular graphics card is, or whether the latest optimization you've made has actually sped rendering up, you want to render frames as fast as possible. If you always locked the frame update to the monitor's refresh rate, you wouldn't be able to tell whether you were rendering everything with cycles to spare or barely keeping up with the refresh rate. For actual gameplay you'll typically want to sync to the refresh rate: it makes for smoother visuals with no tearing, and there's no benefit to the player in rendering faster than their monitor refreshes.

Soiled    286
I've read that some players of FPS games don't sync and turn the graphics detail down a lot to get a high frame rate, presumably because they play with high-sample-rate input devices (input devices typically get sampled once per frame), though I don't know of any mice that sample higher than 120 Hz. I think professional players want to guarantee that the minimum frame rate (during periods of intense action) stays at or above the input device's sample rate, so that they get the best input response possible.

Syncing does stop tearing, but it can also mean abrupt changes in frame rate from monitor_refresh to monitor_refresh/2 to monitor_refresh/3, etc. (e.g. 90, 45, 30 Hz) depending on the current load, which can be annoying.

It should always be a user option.
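The quantization mentioned above follows from every frame having to wait for the next refresh; a small sketch (the function name and the 90 Hz example are illustrative) makes the 90/45/30 steps explicit:

```python
import math

# With vsync on, a frame that misses a refresh waits for the next one,
# so the effective rate snaps to refresh/1, refresh/2, refresh/3, ...

def vsynced_fps(frame_ms, refresh_hz=90):
    """Effective frame rate when every frame waits for the next refresh."""
    refresh_ms = 1000 / refresh_hz          # one refresh interval in ms
    refreshes_per_frame = math.ceil(frame_ms / refresh_ms)
    return refresh_hz / refreshes_per_frame

print(vsynced_fps(10))  # 90.0  (frame fits in one refresh)
print(vsynced_fps(12))  # 45.0  (just missed -> two refreshes per frame)
print(vsynced_fps(23))  # 30.0  (three refreshes per frame)
```

Note how a frame time of 12 ms, barely over the ~11.1 ms refresh interval, halves the displayed rate from 90 to 45 FPS rather than degrading smoothly.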

Aph3x    288
I would have stopped playing UT years ago if it insisted on limiting the framerate to the monitor refresh.
There *is* a difference, despite a lot of people quoting human persistence of vision and perception limits.

DudeMiester    156
Umm... images that don't get drawn don't make a difference. If you're rendering more frames than your monitor can present per second, then all those extra frames are either never seen or mixed with another image (tearing). So things you don't see, because they aren't drawn on the screen at all, don't make a difference.

Shannon Barber    1681
Quote:
Original post by Aph3x
I would have stopped playing UT years ago if it insisted on limiting the framerate to the monitor refresh.
There *is* a difference, despite a lot of people quoting human persistence of vision and perception limits.


Any links? I play and I can absolutely tell the difference between 60/70/90, especially in an alpha state.

Monder    993
Well, if the monitor refresh rate is, say, 60 Hz, there could be a noticeable difference between 60 and 70 FPS, because when you're updating the screen faster than your refresh rate you'll get various artifacts, but those are hardly desirable.

WarAmp    750
Rendering faster than the refresh rate is pointless. However, doing physics/input updates faster than the refresh rate is very much a good thing. The problem with VSync is that it blocks until the refresh happens, which means the whole application stalls until it can swap the buffers.

The 'correct' way to do it would be to keep track of time internally in your application and only call Render() 60 times a second, but let AI/physics and especially input happen as fast as possible (or maybe limit those to 120 times a second).

mattnewport    1038
Quote:
Original post by WarAmp
Rendering faster than the refresh rate is pointless. However, doing physics/input updates faster than the refresh rate is very much a good thing. The problem with VSync is that it blocks until the refresh happens, which means the whole application stalls until it can swap the buffers.

The 'correct' way to do it would be to keep track of time internally in your application and only call Render() 60 times a second, but let AI/physics and especially input happen as fast as possible (or maybe limit those to 120 times a second).

There can be a benefit to doing Physics / Input updates faster than refresh and a loop that only calls Render() 60 times a second is one way of achieving it. You can also use separate threads for rendering and simulation or in D3D use the DONOTWAIT flag on Present() and do some other processing until the refresh is complete.

Extrarius    1412
There isn't a point in drawing more frames than VSYNCs, but you _DON'T_ want to lock to vsync, because it makes slowdowns far more noticeable.
Let me explain why:

Your machine is running the game really fast: you're getting 120 FPS with time to spare, and the vsync is 120 Hz, a perfect match, so your game runs smoothly. Now you get to a graphics-intensive area, and frames start taking 1/119 of a second to draw. Your FPS is now 60, because you draw the frame (during which time you miss the vsync, and say the top 20 pixel rows of the screen are scanned out as the old frame), then wait for the next vsync, so every frame really takes two vsyncs (1/60th of a second = 60 FPS) when it only needs 1/119th of a second (119 FPS).

If you just displayed the frame whenever you had it and kept going, you'd have a traveling tear at a different place each screen update, but you'd also have 119 completely different images displayed each second instead of 60.

When you lock to vsync, you should make sure frames take less time than a single vsync interval, because otherwise you're cutting the framerate by an integer divisor, the lowest of which (2) means you're losing a lot of potential frames. If you do lock to vsync, time each frame, and when things start slowing down, stop locking to vsync.
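That last heuristic — time frames and drop vsync when they stop fitting in one refresh interval — can be sketched as follows; the threshold, headroom factor, and names are all hypothetical:

```python
# Adaptive vsync heuristic: keep vsync while frames comfortably fit in
# one refresh interval; drop it when they don't, to avoid halving FPS.

REFRESH_HZ = 120
BUDGET = 1 / REFRESH_HZ      # one refresh interval, in seconds
HEADROOM = 0.9               # require 10% slack before trusting vsync

def choose_vsync(recent_frame_times):
    """Lock to vsync only if the slowest recent frame fits the budget."""
    return max(recent_frame_times) <= BUDGET * HEADROOM

# Fast frames (~4 ms) fit easily in 1/120 s -> keep vsync on.
print(choose_vsync([0.004, 0.0045, 0.004]))   # True

# One slow frame (1/119 s ~= 8.4 ms) blows the 90% budget (~7.5 ms)
# -> turn vsync off rather than fall from ~119 FPS to 60 FPS.
print(choose_vsync([0.004, 1 / 119, 0.004]))  # False
```

Using the worst recent frame, rather than the average, is what keeps the scheme from flickering vsync on just before a slowdown would cut the rate in half.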
