
Expert question: How to get a better time delta?



#21 Olof Hedman   Crossbones+   -  Reputation: 2216


Posted 03 April 2013 - 04:23 AM

human brains appear to be quite good at picking up on subtle things like this, especially in graphics, since we're visually oriented creatures.

 

And with practice, you get even better at it.

I think it's also important to remember that we who stare at pixels all day get pretty darn good at it, a lot more so than the average person.

Also, it bugs us a lot more, because we know how smooth it _can_ look.

 

Same thing with color depths: that whole "True color, the human eye can not see more than 24-bit color" line is just marketing. The RGB space covers only a part of our color vision, and someone trained in looking at computer graphics and finding faults easily notices all the banding and dithering going on to try to compensate.

 

Not saying you shouldn't try to make it as smooth as possible, of course that is a priority (and usually your job as a low-level graphics guy).

 

Really interesting thread!

Getting your delta time right is more work than one would think.


Edited by Olof Hedman, 03 April 2013 - 04:24 AM.



#22 Icebone1000   Members   -  Reputation: 792


Posted 03 April 2013 - 09:09 AM

Can someone give some input on the difference in handling the game loop on a PC versus on a console?

 

People always say that the PC is too generic and can't guarantee anything, so how does that work on a console?



#23 Hodgman   Moderators   -  Reputation: 24022


Posted 03 April 2013 - 03:34 PM

Can someone give some input on the difference in handling the game loop on a PC versus on a console?

People always say that the PC is too generic and can't guarantee anything, so how does that work on a console?

There's no real difference in your game loop these days, but the level of abstraction between you and the hardware is thinner -- there's a fixed hardware spec (with some exceptions, like storage size) and the OS is simpler.
On older consoles, there isn't even really an OS at all, just the game and the hardware, so all the "device driver" code is in the game. On a console, you can do dangerous things that you really don't want to be available to general-purpose PC applications, like taking mutually exclusive control over a device, generating hardware interrupts that call your own kernel-mode functions, communicating directly with devices via MMIO without going through a driver, or implementing CPU/GPU/refresh-rate synchronization yourself from scratch...

With such a simple machine you can be sure that no background process is going to steal any of your CPU time.
On Windows (with default settings), if a thread sleeps (which might happen inside Present), then the thread might not wake for 15ms+, and in the worst case, Windows can leave a thread sleeping for over 5 seconds if you've got enough processes running ;/
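
Since the thread is about getting a better time delta, here's a minimal sketch (assuming C++ on Windows; the 100ms clamp is an arbitrary safety net, not a rule) of measuring the frame delta with QueryPerformanceCounter:

#include <windows.h>

// Minimal high-resolution frame timer using QueryPerformanceCounter.
// (If sub-15ms sleeps matter, timeBeginPeriod(1) from winmm can shrink the
// scheduler granularity, at a power cost; not shown here.)
class FrameTimer
{
public:
    FrameTimer()
    {
        QueryPerformanceFrequency(&m_frequency);
        QueryPerformanceCounter(&m_previous);
    }

    // Returns seconds elapsed since the previous Tick() (or construction).
    double Tick()
    {
        LARGE_INTEGER now;
        QueryPerformanceCounter(&now);
        double delta = double(now.QuadPart - m_previous.QuadPart)
                     / double(m_frequency.QuadPart);
        m_previous = now;
        // Clamp huge spikes (debugger pause, multi-second scheduler stall).
        if (delta > 0.1)
            delta = 0.1;
        return delta;
    }

private:
    LARGE_INTEGER m_frequency;
    LARGE_INTEGER m_previous;
};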

#24 phantom   Moderators   -  Reputation: 5715


Posted 04 April 2013 - 04:38 AM

But even at these extremes, it was still hard to tell what was wrong -- it just didn't feel like a 60Hz game (although Fraps will tell you it's rendering at 60Hz). It's only when I record a video of the game at 60Hz and then step through the video frame by frame that it's obvious that every 3rd frame is a duplicate of the one before it.

For various reasons I wouldn't really trust Fraps to be telling you the whole truth of what is going on anyway; http://anandtech.com/show/6857/amd-stuttering-issues-driver-roadmap-fraps

#25 Frank Force   Members   -  Reputation: 198


Posted 04 April 2013 - 11:23 AM

Interesting article, though some of the conclusions they make are completely wrong...

 

" If the GPU takes longer to render a frame than expected – keeping in mind it’s impossible to accurately predict rendering times ahead of time – then that would result in stuttering."

 

First of all, that's what triple buffering is for. You can have a long frame and even miss the vsync, but it will be OK as long as the buffer is full. Secondly, you absolutely can predict the rendering times ahead of time with almost perfect accuracy. True, it is impossible to know the exact phase shift of the vsync, but that doesn't matter. Also, obviously, if you fall too far behind or the OS decides to give something else priority there can be stutter, but in my experience with my game engine it's not unrealistic to expect your game to be perfectly smooth without any dropped frames or stutter on a decent PC.

 

"Variance will always exist and so some degree of stuttering will always be present. The only point we can really make is the same point AMD made to us, which is that stuttering is only going to matter when it impacts the user."

 

Actually no, we can completely eliminate the variance and it's not even hard to do.  It is very surprising that they don't mention anything like my idea because it seems like it would fix most of the issues they talk about in the article.



#26 Hodgman   Moderators   -  Reputation: 24022


Posted 04 April 2013 - 08:51 PM

That Fraps article was interesting reading, but using Fraps to capture all the images that come out of the app is still valid, which lets you spot some stuttering issues (like my duplicate frames problem).
At my last job, we used external HDMI capture and high-speed video of a CRT to help diagnose other timing issues as well.

With regards to predicting future timing, you can't *in general*.
If you're vsync'ing, and you're always under your frame-time budget, then you can... But if you start missing vsync intervals, your timing won't be perfect around those points in time. E.g. you might have advanced the simulation 16.6ms, but displayed that image 33.3ms later, so the animation will be 16.6ms back in time from where it should be.
Also, whether your CPU or GPU (or both) is over its frame-time budget makes a big difference as to whether stuttering can be absorbed by buffering.

On the last console game that I shipped, we sync'ed to a 30Hz refresh, but in some scenes it was possible for us to miss that window. If a frame was late, we'd temporarily disable vsync (and get tearing at ~30-20Hz) while also reducing the internal rendering resolution in the hopes of getting GPU frame time back down to 33ms... So sometimes it would be possible for us to get perfect timing, but other times we had all the issues of a variable frame rate game :(

#27 Frank Force   Members   -  Reputation: 198


Posted 04 April 2013 - 11:47 PM

You don't have to always be under your frame-time budget to avoid stutter. If you are triple buffering, you can have a long frame that is over your frame-time budget, as long as your next couple of frames are fast enough to catch up and eventually get far enough ahead to buffer out the next skip. In my experience the OS will not normally eat up that much time, especially when running in full-screen mode. The sacrifice is latency; that's what the triple buffering smooths out. It is running 1 frame ahead (hopefully), so that when you lose a frame it won't stutter, it just gets less latent, and when it drops that frame it will need to work faster to catch up.

 

Ok, I thought of a different way to explain the issue...

 

Let's say I'm writing a story at the same time you are reading it. You read at exactly 60 pages per hour; in fact you read so precisely that every minute on the dot you finish the old page and start reading a new one. I can write much faster than 60 pages per hour, maybe even twice as fast, but sometimes I need to slow down to think and write much slower. The problem is we only have 3 pieces of paper to share between us, and I can't write on the same page you are reading. To make matters worse we are in different rooms and can't see each other. We have a helper who brings my new papers to you and your old papers back to me, but there is a bit of variance in when we actually get that paper from the other person. I can magically erase papers instantly when I get them. Also, we don't have synchronized clocks or any other way to communicate.

 

The question is: can I keep feeding you papers without you ever stalling out with nothing new to read? The answer is pretty obviously yes, as long as I don't fall too far behind, because we have an extra paper to use as a buffer. As long as I keep that extra paper/buffer full, I can take almost twice as long to write one page and we won't even miss a beat, because you will just read the buffer while waiting. After that happens I need to work hard to get that buffer full again; it might take a few pages before I am back on track, but the important thing is you didn't have to stop reading.

 

We can also work the time delta thing into this. Let's say we want every paper to have a time stamp on it. We want those time stamps to be exactly 1 minute apart from each other, because maybe in the story I'm writing, each page accounts for exactly 1 minute of time in the fictional world. Well, I don't write pages at exactly 1 minute per page, but I can just start with time index 0 and increment it by 1 minute for each page, even if it takes me more or less than 1 minute to write it. Basically, I know that I need to write each page to represent 1 minute of fictional time, regardless of how long it takes to write. As far as you know, the time stamps are correct and exactly 1 minute apart, regardless of our relative times. Since you are reading at exactly the same rate as the time stamps, to you it will appear that each time stamp matches up with your own time. The story you are reading represents 1 minute per page in the fictional world, and it takes you exactly 1 minute to read each page. I'm writing a story for you in real time and passing it back and forth without any stutter and only 3 pages to share. It seems so simple when you think about it like that. This is really no different from simulating physics on a computer and stepping it with that specific interval rather than with whatever time was measured between updates.
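
Here's a minimal sketch of that idea in C++. UpdateSimulation and RenderFrame are hypothetical stubs, and the 1/60s step and 0.25s cap are arbitrary illustration values; the point is that the simulation always advances by the same fixed interval, so wall-clock jitter never reaches the time stamps it sees.

#include <chrono>

// Hypothetical placeholders for the real game code.
void UpdateSimulation(double fixedDt) { /* advance physics/game state by fixedDt */ }
void RenderFrame()                    { /* draw the latest simulation state */ }

void RunGameLoop(bool& running)  // 'running' is flipped elsewhere (input, quit message, ...)
{
    using Clock = std::chrono::steady_clock;

    const double fixedDt = 1.0 / 60.0;   // every "page" represents exactly 1/60 s
    double accumulator = 0.0;
    auto previous = Clock::now();

    while (running)
    {
        auto now = Clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        // Don't spiral if we fall hopelessly behind (debugger pause, OS stall).
        if (accumulator > 0.25)
            accumulator = 0.25;

        // Advance the simulation in fixed steps; the measured wall-clock delta
        // only decides how many steps to take, never the step size itself.
        while (accumulator >= fixedDt)
        {
            UpdateSimulation(fixedDt);
            accumulator -= fixedDt;
        }

        RenderFrame();
    }
}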

 

With double buffering the situation gets a little worse. We would only have 2 pieces of paper, which means I always need to write faster than you can read, plus enough to cover any variance we have when passing papers back and forth. With a single buffer I'm basically writing to the same page you are reading, erasing the lines as I go.

 

Anyway I hope that helps someone understand a bit better what is going on. Or if I'm not making any sense, please let me know!


Edited by Frank Force, 05 April 2013 - 01:49 AM.


#28 Hodgman   Moderators   -  Reputation: 24022


Posted 05 April 2013 - 05:47 AM

Yeah, but just change 'missing a vblank' to missing 2 in a row, or 3 in a row, etc. and the same issue presents itself. At the transition from one frame rate to another, the correct timing to advance the simulation by is unpredictable.

Situations where a single frame is over budget can usually be fixed, as they tend to be caused by some single expensive operation that's obvious to spot. In other situations where you miss your frame-time budget, it's usually because there's simply 'too much stuff' in the sim / visible scene at the moment, and you'll keep missing the budget until the scene changes.

#29 Frank Force   Members   -  Reputation: 198


Posted 05 April 2013 - 01:05 PM

If you write a decent game engine there is no reason why you should EVER have a frame that takes more than twice as long as your vsync interval. If that is the case, you need to either lower your worst-case scenario (reduce geometry, particle counts, or whatever is killing it) or smooth out the computation so it's distributed over many frames (async collision tests, staggered AI). Also, you can always add more buffers, which will increase latency but give you another frame of buffer time. Obviously that is undesirable, but if we think about trying to run at 120 Hz it might make sense to have 4 or more buffers, since each frame is only half as long.
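
As a rough illustration of the "staggered AI" idea (Agent, UpdateMovement, Think and the 4-frame slice are all made-up names/values, not anyone's actual engine code), only a fraction of the agents run their expensive update on any given frame:

#include <cstddef>
#include <vector>

// Hypothetical agent with a cheap per-frame update and an expensive "think"
// step (pathfinding, line-of-sight checks, etc.).
struct Agent
{
    void UpdateMovement(double /*dt*/) { /* cheap motion integration every frame */ }
    void Think()                       { /* expensive work, staggered across frames */ }
};

// Spread the expensive Think() calls over 'sliceCount' frames so that no
// single frame pays for all of them at once.
void UpdateAgents(std::vector<Agent>& agents, double dt,
                  unsigned frameIndex, unsigned sliceCount = 4)
{
    for (std::size_t i = 0; i < agents.size(); ++i)
    {
        agents[i].UpdateMovement(dt);
        if (i % sliceCount == frameIndex % sliceCount)
            agents[i].Think();
    }
}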

 

Stutter really has nothing to do with the OS or variance in timing or any of that.  Developers blame it on that because they don't understand how things actually work.  As a developer you have it within your power to completely eliminate the stutter.  If your game is running slow then it needs to be optimized.  If your game runs fast but still stutters it's your own fault.


Edited by Frank Force, 05 April 2013 - 01:06 PM.


#30 Hodgman   Moderators   -  Reputation: 24022


Posted 06 April 2013 - 06:11 AM

If you write a decent game engine there is no reason why you should EVER have a frame that takes more than twice as long as your vsync interval. If that is the case, you need to either lower your worst-case scenario (reduce geometry, particle counts, or whatever is killing it) or smooth out the computation so it's distributed over many frames (async collision tests, staggered AI).

If you're saying that every game should be able to run at 60Hz, that's not really true - for example, the vast majority of PS3/360 games run at 30Hz, because it basically allows there to be twice as much 'stuff'/detail on screen.
For many games, there is definitely a point where you just decide that 60Hz on your target hardware is not feasible / worth the sacrifices, and set your target as 30Hz.
Surely you've been in one of these situations in your career?

To use a "pick 2 out of 3" analogy, you've got quality, quantity and frame-time. Sometimes the content creators (not the game engine team) will want quality and quantity at 30Hz, rather than halving one to get 60Hz. In the real world there's also dev-time: maybe you want higher quality, but want lower dev time more...
At a 60Hz refresh, the key frame-rates are 60, 30 and 20Hz, matching 1, 2 and 3 vblanks' worth of time.
If we say it's normal to take between 1 and 2 vblanks to render a scene (i.e. a target of 30Hz), then it's possible for a very small addition by the content team to push the frame time out to being just over 2 vblanks, into the 2-3 window, which results in a 20Hz display. It's very easy to imagine a situation where this happens but no compromises to quality/quantity are allowed by management, which only leaves an increase in development time for the engine team to find some magic optimization to make this new content possible at 30Hz. It's also easy to imagine a situation where there is simply no time/money spare for this engine task, so the game ships with some scenes that drop to 20Hz... That's an understandable thing that happens, and sometimes it might be the right choice.
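
To make the 60/30/20 buckets concrete, here's a small sketch of the arithmetic (assuming a 60Hz refresh): with vsync on, a finished frame waits for the next vblank, so the displayed interval rounds up to a whole number of vblanks.

#include <cmath>
#include <cstdio>
#include <initializer_list>

// With vsync on, a frame is displayed on the next vblank after it finishes,
// so the effective interval is rounded UP to a whole number of vblanks.
double DisplayedIntervalMs(double frameTimeMs, double refreshHz = 60.0)
{
    const double vblankMs = 1000.0 / refreshHz;
    return std::ceil(frameTimeMs / vblankMs) * vblankMs;
}

int main()
{
    for (double frameMs : { 15.0, 16.8, 30.0, 33.5, 45.0 })
    {
        double shownMs = DisplayedIntervalMs(frameMs);
        std::printf("render %5.1f ms -> shown every %5.1f ms (%.0f Hz)\n",
                    frameMs, shownMs, 1000.0 / shownMs);
    }
    // e.g. 16.8 ms of work already lands in the 2-vblank bucket (33.3 ms, 30 Hz),
    // and 33.5 ms lands in the 3-vblank bucket (50 ms, 20 Hz).
}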

On PC, we've also got to deal with variable hardware. Imagine:
• on our lowest settings on a target PC, our frame times are 10-15ms depending on the scene, resulting in a smooth 60Hz.
• on a low-end PC, frame times are 20-30ms, resulting in a smooth 30Hz.
• on a slightly better than low-end PC, frame times are 15-25ms, resulting in either 30 or 60Hz, depending on the scene.
There's a whole, near-continuous spectrum of hardware out there with different performance characteristics (e.g. on one GPU, your code might be bottlenecked by ALU, but on another it's bottlenecked by bandwidth - often you can't optimize fully for both), and different refresh rates, so there'll be someone whose frame times are sitting close enough to a vblank interval to oscillate across that boundary as the scene changes. The only way to avoid that is to force a fixed frame time ("30Hz for everybody!!!") or not use vsync. To be polite, your game should give the user the option of whether to use vsync or not...

#31 Frank Force   Members   -  Reputation: 198


Posted 06 April 2013 - 08:19 AM

I am not saying that all code should be perfect and every game should run perfectly at 60 fps in every situation! What I'm trying to say is that, as a developer, if you want to make a game that runs at 60 fps without stuttering, it is possible. The thing is that people say that stutter is unavoidable regardless of how good your system and graphics card are. I say it is very avoidable, and actually fairly easy to eliminate on high-end systems. If I'm running your game on an awesome computer that is well above spec, it shouldn't stutter. Yet most games do. That is all I am saying.

 

Most console games run at 30 even though HDTV is 60. That is fine; basically they have decided to ignore every other vsync and pretend their vsync is 30, which changes nothing. The point is they chose to run at 30, and if they stay at a solid 30 it should look pretty smooth; any stutter is only introduced by their poor code/planning. If it has situations where it drops to 20 fps and there's no time to fix it, that does suck, and maybe it is the right decision to just move on and fix higher-priority issues. But the fact is, the reason it dropped to 20 is all the bad decisions that led up to that, whether they were programming, design or art decisions. It's certainly not the console's fault if you get stuttering or fps drops when you are in such a predictable environment.

 

When you are making a PC game, of course people can run it on low-end PCs and it might run really badly. That is not what I am talking about at all.








