Hodgman

Posted 06 April 2013 - 06:26 AM

> If you write a decent game engine there is no reason why you should EVER have a frame that takes more than twice as long as your vsync interval. If that is the case, you need to either lower your worst-case scenario (reduce geometry, particle counts, or whatever is killing it) or smooth out the computation so it's distributed over many frames (async collision tests, staggered AI).
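(As an aside, the "staggered AI" idea in the quote might look something like this minimal sketch — the Agent type, kSlices count, and StaggeredAiUpdate name are all hypothetical, assuming a flat agent list updated round-robin:)

```cpp
#include <vector>
#include <cstddef>

struct Agent { void Think() { /* expensive per-agent AI work */ } };  // hypothetical

// Update only 1/kSlices of the agents each frame (round-robin), so the
// total AI cost is spread over kSlices frames instead of spiking in one.
void StaggeredAiUpdate(std::vector<Agent>& agents, unsigned frameIndex)
{
    const unsigned kSlices = 4;  // tune: more slices = cheaper frames, staler AI
    for (std::size_t i = frameIndex % kSlices; i < agents.size(); i += kSlices)
        agents[i].Think();
}
```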

If you're saying that every game should be able to run at 60Hz, that's not really true - for example, the vast majority of PS3/360 games run at 30Hz, because it basically allows there to be twice as much 'stuff'/detail on screen.
For many games, there is definitely a point where you just decide that 60Hz on your target hardware is not feasible / worth the sacrifices, and set your target as 30Hz.
Surely you've been in one of these situations in your career?

To use a "pick 2 out of 3" analogy, you've got quality, quantity and frame-time. Sometimes the content creators (not the game engine team) will want quality and quantity at 30Hz, rather than halving one of them to get 60Hz. In the real world there's also dev-time: maybe you want higher quality, but want lower dev-time more...
At a 60Hz refresh, the key frame-rates are 60, 30 and 20Hz, matching 1, 2 and 3 vblanks' worth of time.
If we say it's normal to take between 1 and 2 vblanks to render a scene (i.e. a target of 30Hz), then it's possible for a very small addition by the content team to push the frame time out to just over 2 vblanks, into the 2-3 window, which results in a 20Hz display. It's very easy to imagine a situation where this happens but no compromises to quality/quantity are allowed by management, which only leaves an increase in development time for the engine team to find some magic optimization that makes the new content possible at 30Hz. It's also easy to imagine a situation where there's simply no time/money spare for that engine task, so the game ships with some scenes that drop to 20Hz... That's an understandable thing that happens, and sometimes it might be the right choice.
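To make that arithmetic concrete, here's a minimal sketch of how vsync rounds a frame's cost up to a whole number of vblank intervals (assuming a 60Hz refresh; the sample frame times are made-up examples). Note how 33.4ms, barely over 2 vblanks, already lands at 20Hz:

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    const double vblankMs = 1000.0 / 60.0;               // ~16.67ms per vblank at 60Hz
    const double frameMs[] = { 16.0, 33.0, 33.4, 50.0 }; // hypothetical frame costs
    for (double t : frameMs)
    {
        // With vsync on, the swap waits for the next vblank, so the displayed
        // interval is the frame time rounded UP to a multiple of vblankMs.
        double displayedMs = std::ceil(t / vblankMs) * vblankMs;
        std::printf("%5.1fms of work -> %5.1fms displayed (%.0fHz)\n",
                    t, displayedMs, 1000.0 / displayedMs);
    }
}
```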

On PC, we've also got to deal with variable hardware. Imagine:
• on our lowest settings on a target PC, our frame times are 10-15ms depending on the scene, resulting in a smooth 60Hz.
• on a low-end PC, frame times are 20-30ms, resulting in a smooth 30Hz.
• on a slightly better than low-end PC, frame times are 15-25ms, resulting in either 30 or 60Hz, depending on the scene.
There's a whole, near-continuous spectrum of hardware out there with different performance characteristics (e.g. on one GPU your code might be bottlenecked by ALU, but on another it's bottlenecked by bandwidth - often you can't optimize fully for both), and different refresh rates, so there'll be someone whose frame times sit dangerously close to a vblank interval, oscillating across that boundary as the scene changes. The only way to avoid that is to force a fixed frame time ("30Hz for everybody!!!") or to not use vsync. To be polite, your game should give the user the option of whether to use vsync or not...
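As a sketch of exposing that option with SDL2 (assuming a window with an OpenGL context already exists; userWantsVsync stands in for whatever your settings screen provides):

```cpp
#include <SDL.h>  // assumes an SDL2 window with an OpenGL context is already created

// Apply the user's vsync preference. 1 = wait for vblank (frame rate snaps
// to 60/30/20Hz buckets), 0 = present immediately (tearing, but no snapping).
void ApplyVsyncOption(bool userWantsVsync)
{
    if (!userWantsVsync)
    {
        SDL_GL_SetSwapInterval(0);
        return;
    }
    // Prefer adaptive vsync (-1): syncs normally, but tears instead of
    // halving the rate when a frame finishes just after a vblank.
    if (SDL_GL_SetSwapInterval(-1) != 0)
        SDL_GL_SetSwapInterval(1);  // driver doesn't support it: plain vsync
}
```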
