Frank Force

  1. I am not saying that all code should be perfect and every game should run at 60 fps in every situation! What I'm trying to say is that, as a developer, if you want to make a game that runs at 60 fps without stuttering, it is possible. People say that stutter is unavoidable regardless of how good your system and graphics card are. I say it is very avoidable, and actually fairly easy to eliminate on high end systems. If I'm running your game on an awesome computer that is well above spec, it shouldn't stutter. Yet most games do. That is all I am saying.

Most console games run at 30 even though HDTV is 60. That is fine; basically they have decided to ignore every other vsync and pretend their vsync is 30, which changes nothing. The point is they chose to run at 30, and if they stay at a solid 30 it should look pretty smooth; any stutter is only introduced by their poor code/planning. If there are situations where it drops to 20 fps and there's no time to fix it, that does suck, and maybe the right decision is to just move on and fix higher priority issues. But the fact is, the reason it dropped to 20 is all the bad decisions that led up to it, whether they were programming, design, or art decisions. It's certainly not the console's fault if you get stuttering or fps drops in such a predictable environment.

When you are making a PC game, of course people can run it on low end PCs and it might run really poorly. That is not what I am talking about at all.
  2. If you write a decent game engine there is no reason why you should EVER have a frame that takes more than twice as long as your vsync interval. If that is the case you need to either lower your worst case scenario (reduce geometry, particle counts, or whatever is killing it) or smooth out the computation so it's distributed over many frames (async collision tests, staggered AI). You can also always add more buffers, which will increase latency but give you another frame of buffer time. Obviously that is undesirable, but if we think about trying to run at 120 Hz it might make sense to have 4 or more buffers, since each frame is only half as long.

Stutter really has nothing to do with the OS or variance in timing or any of that. Developers blame it on those things because they don't understand how things actually work. As a developer you have it within your power to completely eliminate the stutter. If your game is running slow then it needs to be optimized. If your game runs fast but still stutters, it's your own fault.
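[Editor's sketch] The "staggered AI" idea mentioned above can be illustrated with a few lines. This is a hypothetical sketch, not code from the post: instead of updating every agent each frame, update a fixed-size slice per frame so the worst-case frame cost stays bounded.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical agent with a trivially observable update.
struct Agent {
    int thinkCount = 0;
    void Think() { ++thinkCount; }
};

// Update at most `slice` agents per frame, resuming where we left off,
// so the per-frame AI cost is capped regardless of total agent count.
class StaggeredAI {
public:
    explicit StaggeredAI(std::size_t agentsPerFrame) : slice(agentsPerFrame) {}
    void Update(std::vector<Agent>& agents) {
        for (std::size_t i = 0; i < slice && !agents.empty(); ++i) {
            agents[cursor % agents.size()].Think();
            ++cursor; // persists across frames, wrapping over the agent list
        }
    }
private:
    std::size_t slice;
    std::size_t cursor = 0;
};
```

With 10 agents and a slice of 3, each agent still thinks on average every 3.3 frames, but no single frame ever pays for more than 3 updates.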
  3. You don't have to always be under your frame-time budget to avoid stutter. If you are triple buffering you can have a long frame that goes over budget, as long as the next couple of frames are fast enough to catch up and eventually refill the buffer before the next skip. In my experience the OS will not normally eat up that much time, especially when running in full screen mode.

The sacrifice is latency; that's what triple buffering smooths out. The game is running 1 frame ahead (hopefully) so that when it loses a frame it won't stutter; it just becomes less latent, and after dropping that frame it will need to work faster to catch up.

Ok, I thought of a different way to explain the issue...

Let's say I'm writing a story at the same time you are reading it. You read at exactly 60 pages per hour; in fact you read so precisely that every minute on the dot you finish the old page and start reading a new one. I can write much faster than 60 pages per hour, maybe even twice as fast, but sometimes I need to slow down to think and write much slower.

Problem is, we only have 3 pieces of paper to share between us, and I can't write on the same page you are reading. To make matters worse, we are in different rooms and can't see each other. We have a helper who brings my new papers to you and your old papers back to me, but there is a bit of variance in when we actually get a paper from the other person.

I can magically erase papers instantly when I get them. Also, we don't have synchronized clocks or any other way to communicate.

The question is: can I keep feeding you papers without you ever stalling out with nothing new to read? The answer is pretty obviously yes, as long as I don't fall too far behind, because we have an extra paper to use as a buffer. As long as I keep that extra paper/buffer full, I can take almost twice as long to write one page and we won't even miss a beat, because you will just read the buffer while waiting.
After that happens I need to work hard to fill that buffer again; it might take a few pages before I am back on track, but the important thing is you didn't have to stop reading.

We can also work the time delta thing into this. Let's say we want every paper to have a time stamp on it. We want those time stamps to be exactly 1 minute apart from each other, because maybe in the story I'm writing, each page accounts for exactly 1 minute of time in the fictional world. Well, I don't write pages at exactly 1 minute per page, but I can just start with time index 0 and increment it by 1 minute for each page, even if it takes me more or less than 1 minute to write it. Basically, I know each page needs to represent 1 minute of fictional time regardless of how long it takes to write. As far as you know, the time stamps are correct and exactly 1 minute apart, regardless of our relative times. Since you are reading at exactly the same rate as the time stamps, it will appear to you that each time stamp matches up with your own clock. The story represents 1 minute per page in the fictional world, and it takes you exactly 1 minute to read each page.

I'm writing a story for you in real time and passing it back and forth without any stutter, with only 3 pages to share. It seems so simple when you think about it like that. This is really no different than simulating physics on a computer and rendering with that specific fixed interval rather than the time measured between updates.

With double buffering the situation gets a little worse. We would only have 2 pieces of paper, which means I always need to write faster than you can read, plus enough to cover any variance in passing papers back and forth. With a single buffer I'm basically writing to the same page you are reading, erasing the lines as I go.

Anyway, I hope that helps someone understand a bit better what is going on.
Or if I'm not making any sense, please let me know!
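[Editor's sketch] The time stamp trick in the story analogy maps directly onto a fixed-step simulation clock. A minimal hypothetical sketch (not code from the post): simulation time advances by exactly one fixed step per update, no matter how long the update actually took to compute.

```cpp
// Fixed-step simulation clock: fictional time advances by exactly one
// step per update; the real (measured) elapsed time is deliberately
// ignored, just like the 1-minute time stamps in the story analogy.
struct SimClock {
    double step;        // e.g. 1.0 / 60.0 for a 60 Hz vsync
    double time = 0.0;  // accumulated simulation time
    explicit SimClock(double s) : step(s) {}
    void Tick() { time += step; }
};
```

After 60 ticks the clock reads exactly one second of fictional time, whether the updates took 0.9 or 1.1 real seconds to produce.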
  4. Interesting article, though some of the conclusions they make are completely wrong...

"If the GPU takes longer to render a frame than expected – keeping in mind it's impossible to accurately predict rendering times ahead of time – then that would result in stuttering."

First of all, that's what triple buffering is for. You can have a long frame and even miss the vsync, and it will be ok as long as the buffer is full. Secondly, you absolutely can predict rendering times ahead of time with almost perfect accuracy. True, it is impossible to know the exact phase shift of the vsync, but that doesn't matter. Obviously, if you fall too far behind, or the OS decides to give something else priority, there can be stutter, but in my experience with my game engine it's not unrealistic to expect your game to be perfectly smooth without any dropped frames or stutter on a decent PC.

"Variance will always exist and so some degree of stuttering will always be present. The only point we can really make is the same point AMD made to us, which is that stuttering is only going to matter when it impacts the user."

Actually no, we can completely eliminate the variance, and it's not even hard to do. It is very surprising that they don't mention anything like my idea, because it seems like it would fix most of the issues they talk about in the article.
  5. Hodgeman - I'm not surprised your game already felt smooth, because using a fixed time step equal to the vsync interval is the ideal case for not having jitter. It's interesting that you tried that method of forcing it to produce duplicate frames and still couldn't really detect it visually. It does depend on the type of game; for some games it is less obvious. The most obvious case is a 2d scrolling game, where pops like this are very apparent. In 3d fps type games it seems less obvious. I also think that many of even the very best games have this jittery issue, so we are trained not to notice it.

I think the whole point of triple buffering is to allow the CPU to fall behind a bit, smoothing out pops by sacrificing latency. So even if you have just 1 new backbuffer to display when the vsync happens, it will flip it; why wouldn't it? On the next vsync you will just need to do 2 updates to catch up. This is how pops are smoothed out even when you have a very long frame. Triple buffering + vsync is a way of actually decoupling the vsync from the game update. Even though you get 1 update per vsync on average, it's really more about keeping the back buffers updated than waiting for the vsync.

Icebone - Wow, that's a really cool idea! Keep in mind though that due to rounding, gaps in the pattern may actually be correct. For example, if something is moving at 1.1 pixels per vsync, you would expect it to jump an extra pixel every 10 vsyncs. The math to make things move an integer number of pixels with a fixed time step and interpolation is a bit complicated. What I do to visualize jitter is have a wrapping image scroll continuously across the screen; when running at 60 fps it should move extremely smoothly. I will need to think more about your idea though, I like how it is visualized over time.
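[Editor's sketch] The 1.1 pixels-per-vsync rounding argument above can be verified in a few lines. This is a hypothetical helper, not code from the post: rounding the true position to integer pixels produces a 2-pixel step exactly once every 10 vsyncs, which is a correct gap, not jitter.

```cpp
#include <cmath>
#include <vector>

// Round an object's true position (speed * frame) to integer pixels and
// record the per-frame pixel step. At 1.1 px/vsync the step is usually 1,
// with a single 2-pixel jump every 10 frames.
std::vector<int> PixelSteps(double speed, int frames) {
    std::vector<int> steps;
    int prev = 0;
    for (int f = 1; f <= frames; ++f) {
        int cur = static_cast<int>(std::lround(speed * f));
        steps.push_back(cur - prev);
        prev = cur;
    }
    return steps;
}
```

Over 10 frames at 1.1 px/vsync the object covers 11 pixels: nine 1-pixel steps and one 2-pixel step.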
  6. Hodgeman - Very interesting results! Did you notice an actual visual pop when it did the 2-0-2 thing? If you are triple buffering (and possibly double buffering, I think) it is normal to get 2 updates or 0 updates in one vsync interval.
  7. Hodgeman - Mine seems to fluctuate a bit more, but that seems about right. I think I know why it averages higher than 1/60: any frames that get skipped will raise the average above the refresh interval. Are you doing any kind of interpolation? The correction buffer thing will ensure the delta is at least 1 / refreshRate. You don't want to clamp the minimum delta directly, because even though the measured delta can be less than 1/60, that time still needs to be accounted for.

Cornstalks - With a fixed time step you need interpolation to correct for the difference between your monitor's refresh rate and the fixed time step; that is a different issue. This method should work fine regardless of the refresh rate of the monitor.
  8. Hodgman -

My deltas don't vary much, generally between 16 and 17 ms or so, but they can be much less or greater. Also, when triple buffering, multiple updates must sometimes happen during the same vsync interval, otherwise you aren't really buffering anything, right? My engine uses a double for the timer, but that shouldn't affect this issue; changing to a fixed point solution or increasing the timer accuracy won't get rid of the fluctuation. I am curious about why your measured deltas don't vary, have you tried logging them to a file?

ApochPiQ -

Thanks, that helps a little. I like to think of it as buffering the time delta. I figured this was a common thing that I must have just never come across or seen mentioned anywhere, because it seems pretty damn important. That's why I'm asking if I am overthinking things or maybe just using the wrong terminology.
  9. You say it's not possible to get perfect timing, but you also say that a game doesn't really need that to work correctly. I agree that a game engine will still work mostly correctly without the stuff I'm talking about; every game engine I've ever seen has certainly worked fine without it. But I don't understand why people are willing to settle for mostly correct; if we could get perfect timing, that would be better, right? I mean, it's not theoretical or magical, monitors have a very specific vsync interval that images are displayed at.

The whole point of what I'm trying to talk about here is that you can get perfect timing that is 100% precise, and this is how you do it. The only sacrifice is a small amount of unavoidable latency, which is a small price to pay for perfect timing and smoothness of motion. With triple buffering it doesn't really matter what the OS does or how much fluctuation there is between deltas. What about my plan makes you think it won't yield perfect timing?
  10. How do you measure the actual wall clock time that elapses? I think that's what I was already doing, and it's what I found to be wrong. So, there are two different deltas here: the wall clock delta and the fixed time step delta. The problem I'm talking about is that the measured wall clock delta is never an exact multiple of the vsync interval, even though it should be in order for the rendered frames to be interpolated properly. Can you see how that would be an issue?
  11. Hi,

I'm working on a game engine, and something occurred to me about how accurately we are measuring the delta between frames. I normally use a timer to check the delta, but even when running vsynced, the measured delta is never exactly equal to the vsync interval. On average it's equal to the vsync interval, but the measured delta time can fluctuate quite a bit. This means that even if you have a fancy fixed time step with interpolation, it's still going to be wrong, because the delta being represented between frames will never be equal to the vsync interval, even though the frames themselves are always shown at exactly the refresh rate. This causes a tiny bit of jitter that maybe most people don't notice, but I do, and it was really starting to bug me. Clearly something must be done to correct the discrepancy, right?

Please understand that using a fixed time step with interpolation is not going to fix this issue! What interpolation fixes is temporal aliasing; this is more like temporal fluctuation. The solution I worked out corrects the time delta in advance so it will always be in phase with the vsync interval. My game engine runs incredibly smoothly with this enabled, so I already know that it works.

My question is: am I crazy, or is this kind of important for every game engine to have? Is there some other simpler method I am unaware of that people use to deal with this? I tried posting in a few other places but no one seems interested, or maybe they just don't understand what I'm talking about. One guy was very insulting and basically called me a noob... I've been working in the game industry at major studios for over a decade. If just one person can understand what I'm talking about and/or explain why I'm wrong, that would be totally awesome. Here's a link to my blog post with more info and code...

http://frankforce.com/?p=2636
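[Editor's sketch] This is not the code from the linked blog post, but a minimal hypothetical sketch of the kind of correction the post describes: snap each measured wall-clock delta to a whole number of vsync intervals and bank the leftover measurement error, so every delta handed to the simulation is in phase with the vsync while total time stays accurate on average.

```cpp
#include <cmath>

// Hypothetical vsync-phase delta correction: each measured delta is
// snapped to the nearest whole number of vsync intervals, and the
// leftover error is carried into the next frame so no time is lost.
class DeltaCorrector {
public:
    explicit DeltaCorrector(double vsyncInterval) : interval(vsyncInterval) {}

    double Correct(double measuredDelta) {
        error += measuredDelta;
        // Round accumulated time to a whole number of intervals, but
        // always hand out at least one interval per frame.
        double frames = std::floor(error / interval + 0.5);
        if (frames < 1.0) frames = 1.0;
        double corrected = frames * interval;
        error -= corrected; // bank the remainder for future frames
        return corrected;
    }

private:
    double interval;
    double error = 0.0;
};
```

Every value returned is an exact multiple of the vsync interval, and the banked error keeps the corrected total tracking real elapsed time.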
  12. Frank Force

    3D Bonsai Tree Game

    Thanks. Part of my motivation was to make a learning tool that can supplement traditional bonsai studies, or something a teacher can bring out when kids are studying tree growth in science class. But my core demographic is the same kind of people that would buy Nintendogs or Sim City. I hope that eventually it will be more appealing to the mass market when there is more of a game built around it, with tutorials and such. - Frank Force
  13. Frank Force

    3D Bonsai Tree Game

    GoBonsai is a prototype of interactive bonsai tree software. Eventually I would like to build a full game around it, but for now I'm releasing what I have, which is a pretty nice demonstration of the engine with one tree type and a simple interface. Many features are planned for the future, like the ability to save and load trees, more tools, wiring, root trimming, pot selection, scenery, higher quality rendering, etc. Beta testers are needed. The website is www.bonsaigame.com - Frank Force
  14. Frank Force

    Best way to load a webpage from C++

    Was using Firefox as the default browser. Just tested with Chrome. With Chrome I don't get any firewall popup with my code. I think it is Firefox that has a feature of kicking it over to ZoneAlarm when another process tries to open a webpage. So basically I think it's just Firefox that is being glitchy.
  15. I am working on a Windows screensaver in a multi-threaded environment, trying to load a webpage when someone clicks a button. I was using this code...

        ShellExecute(NULL, L"open", L"http://www.frankforce.com", NULL, NULL, SW_SHOWNORMAL);

It worked ok except there was a bug. I use ZoneAlarm, so when the firewall popped up about this program trying to access the internet, I decided to try blocking its access. This caused the thread to never exit and the program to stay running in the background, even though you could still set the settings and click the ok button. Eventually multiple copies would be running in the background until manually shut down. I checked in the debugger and saw that the ShellExecute call never returns in this case. After reading up on it, I am now using a new, improved call to open the webpage:

        SHELLEXECUTEINFOW shellInfo;
        ZeroMemory(&shellInfo, sizeof(shellInfo));
        shellInfo.cbSize = sizeof(shellInfo);
        shellInfo.fMask = SEE_MASK_ASYNCOK;
        shellInfo.lpVerb = L"open";
        shellInfo.lpFile = L"http://www.frankforce.com";
        shellInfo.nShow = SW_SHOWNORMAL;
        ShellExecuteExW(&shellInfo);

The key difference is SEE_MASK_ASYNCOK, which hands control right back after the call. It seems to work well now. But I noticed a glitch, maybe a security hole, in ZoneAlarm: if I disallow my program's access and then just rapidly click the button that tries to load the webpage, eventually it will load the webpage even though ZoneAlarm should prevent it. What's going on here? - Frank Force
Important Information

By using GameDev.net, you agree to our community Guidelines, Terms of Use, and Privacy Policy.
