TheStudent111

How to control FPS


I'm confused about how developers have any control over the FPS of a particular game. What does it mean when a developer aims for their game to run at 60 FPS (as far as their code is concerned)? Isn't the FPS dependent on the hardware, especially the GPU? Since there are different PC configurations out there with different GPU speeds, how does a developer get their game to a specific number of frames per second?



It depends on how much work you are doing, so you can increase the FPS either by speeding up the hardware or by doing less work per unit of time. When games are initially written they usually perform awfully, because it is most efficient to go back and rewrite key parts of the code once you can profile it and find the most inefficient spots.


A target framerate of 60 frames per second is common in console games, where the TV has a refresh rate of about 60 Hz. (Actually 59.94 Hz.)

60 frames per second, or about 16.6 milliseconds per frame, is a good goal but mostly arbitrary on modern computers.

Many PC monitors and gaming monitors have refresh rates of 75, 80, 120, even 150 frames per second. Some newer display technologies use variable refresh rates with a minimum number of milliseconds: if a frame takes 19 ms or 23 ms or some other time above the minimum, the display simply waits and updates at that time. So the 60 FPS rate is historical and doesn't precisely fit modern hardware, even though many inexpensive monitors still refresh at 60 Hz.
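The arithmetic behind these budgets is simply 1000 / refresh rate. A quick sketch of the budgets for a few common rates:

```python
# Per-frame time budget for some common display refresh rates.
for hz in (30, 60, 75, 120, 144):
    budget_ms = 1000.0 / hz  # milliseconds available per frame
    print(f"{hz:>3} Hz -> {budget_ms:5.1f} ms per frame")
```

So a 120 Hz display leaves you roughly half the per-frame budget of a 60 Hz one.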


Now, how can games control it? They cannot control it exactly, but we can do a lot to get it working well. First, there is the quality of the machine. PCs are not running a realtime operating system (meaning the OS does not guarantee precise scheduling) and are at the mercy of whatever other programs are running. If the person is running a CPU-intensive and memory-intensive task at the same time as the game, there isn't anything the game can do about it. So game developers provide a "recommended" configuration that should work.
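One common way a game "aims" for a rate at all is a frame limiter: measure how long the frame's work took, then sleep off whatever is left of the budget. A minimal sketch in Python (the `update` and `render` callbacks are placeholders for real game code):

```python
import time

TARGET_FPS = 60
FRAME_BUDGET = 1.0 / TARGET_FPS  # ~16.6 ms per frame

def run_frames(n_frames, update, render):
    """Run a capped game loop: if a frame finishes early, sleep off
    the remainder of the budget so we never exceed TARGET_FPS."""
    for _ in range(n_frames):
        start = time.perf_counter()
        update()
        render()
        elapsed = time.perf_counter() - start
        if elapsed < FRAME_BUDGET:
            time.sleep(FRAME_BUDGET - elapsed)
```

Note the asymmetry: sleeping can only cap the maximum rate. It cannot speed up a frame that is already too slow, which is why the rest of this thread is about making frames cheaper.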

The developers can monitor how much work the game is doing at any time, using tools called profilers. They can identify what is running slowly and find ways to address it. If a search algorithm is taking too long, its work can be spread out over time. If an AI routine is too complex, it can be simplified or cut off when the results take too long. If processing a rendering batch is taking too long, the batch sizes can be reduced. The profilers identify what is going on and provide accurate timings, and the team makes decisions based on that.
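You don't always need a heavyweight profiler to get those numbers; even a hand-rolled scoped timer gives per-system timings of the kind described above. A sketch (the section labels "ai" and "render" are made up for illustration):

```python
import time
from collections import defaultdict
from contextlib import contextmanager

timings = defaultdict(float)  # accumulated seconds per labeled section

@contextmanager
def profile(label):
    """Accumulate wall-clock time spent inside a labeled section."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[label] += time.perf_counter() - start

# Usage inside a frame:
with profile("ai"):
    pass  # run AI here
with profile("render"):
    pass  # submit rendering here
```

After a few seconds of gameplay, sorting `timings` by value points straight at the most expensive systems.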

When the issues are resolved the game should run at a very consistent rate on the recommended hardware; maybe a consistent 12 milliseconds per frame, which comfortably fits the 16.6 ms needed for 60 frames per second. Games are monitored for performance as they are built, often starting out at under 1 millisecond per frame. Some beginners freak out when they see their framerate of 4000 frames per second suddenly drop to 2000 frames per second, but once they understand they went from 0.25 ms up to 0.5 ms it feels less frightening.
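The conversion that defuses that panic is just milliseconds = 1000 / FPS, which is why experienced developers think in frame times rather than framerates:

```python
def fps_to_ms(fps):
    """Convert a framerate into milliseconds spent per frame."""
    return 1000.0 / fps

# A "huge" drop from 4000 to 2000 fps is only a quarter of a
# millisecond of extra work per frame.
print(fps_to_ms(4000))  # -> 0.25
print(fps_to_ms(2000))  # -> 0.5
```

The same quarter-millisecond of added work would be invisible at 60 FPS, which is why FPS deltas are misleading near the top of the scale.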

Personally I aim for about 8-10 milliseconds per frame, but every game and every team is different. Some games I've worked on had a target of 20 ms per frame so we could reliably hit 30 frames per second, and some were slow turn-based games with no soft targets at all: just render it once and be done.


So in terms of hitting a specific FPS target, it all boils down to optimizing your code in terms of AI, collision detection, and/or render code?

 

So in terms of PC games, do developers pick their target hardware specification based on the hardware they developed the game on?

Yes, to both.

The developers look at the code, all the code, and look at what is slowing it down. If AI is slowing it down, that is adjusted. If rendering is slowing it down, that is adjusted. If something else is slowing it down, that is adjusted. Exactly which changes are made, whether they would be called optimization or not, depends on the system. Sometimes it means swapping out algorithms, sometimes changing data structures, sometimes taking steps to better use the cache, or to use SIMD instructions, or something else entirely. But to hit the target, whatever systems are slow get addressed.

The target is often set by a mix of marketing, research, and whatever the developers have on hand. Sometimes the QA group is asked which old machines still run the game at a good level and which run it at a bad-but-playable level, and those are set as the recommended and minimum machines. Market research early on can tell whether the target demographic is likely to have such machines. If you're building a casual game for grandma you cannot expect a high-end machine. If you're building an ultra-modern high-end game you can require a substantial machine.


Why do some developers aim for 30 frames per second? What's the advantage of that? I would think a developer would want to aim for as high a framerate as possible, since if the target is 30 frames per second then the frame rate could drop below that (to the point where the perception of smooth motion is gone). Is it that the complexity of the game and the algorithms used will never allow the game to reach a frame rate over 30? Or is there something else at play?




The simple reason is that people buying games (console games in particular) are swayed more by the gameplay and the quality of the visuals than by the frame rate. At worst they complain about the FPS dropping a lot, but they already bought the game and are likely to buy a sequel if they liked it. It's like arguing over whether to make a car drive a little faster or put more lights on it: if the lights sell more copies, you'll always end up with the slower car.

 

Basically some marketing guy somewhere figured out they make more money aiming for 30 fps than 60 in many cases. Much to the dismay of PC gamers.

Why do some developers aim for 30 frames per second? What's the advantage of that?

 

 

Somewhat as Satharis mentioned.

 

The specific game I was talking about was for the Nintendo DS. That's a 66 MHz handheld. We were building a mixed 3D/2D game, which required a considerable amount of processing. We realized early on that keeping the framerate constantly at 60 Hz, the refresh rate of the screen, was going to be nearly impossible. Dropping frames appears as a stutter, so since we knew 16.6 ms was out as a performance target, we dropped to 30 ms as a worst-case target. Since some frames take longer than others as they do more processing, as we fine-tuned it most frames came in around 20-25 milliseconds, and we never dropped frames.

 

That is commonly why developers target a lower framerate. It is generally better to have a consistent framerate than to drop frames. Since 75 Hz is about the maximum on common displays, if you are well below 13 ms you should never miss a frame at 60 FPS or at 75 FPS. But if you would occasionally jump above that line, especially if the person has other programs running, you can cut the framerate in half, drawing every other frame. If you stay below 27 ms per frame you can display every other frame even on a 75 Hz monitor, and still have some wiggle room on a 60 Hz monitor rendering at 30 FPS.
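That "cut the framerate in half" decision can be expressed as a simple rule: given a measured worst-case frame time and the display's refresh rate, pick the highest divided-down rate whose budget the worst case still fits inside. A sketch of that logic (the thresholds are the same ones discussed above, not an official algorithm):

```python
def pick_target_fps(worst_frame_ms, refresh_hz=60):
    """Return the highest half/quarter/... rate of the display
    whose per-frame budget the measured worst case fits inside."""
    divisor = 1
    while 1000.0 / (refresh_hz / divisor) < worst_frame_ms:
        divisor *= 2  # draw every other frame, then every fourth...
    return refresh_hz // divisor

print(pick_target_fps(12))  # fits 16.6 ms -> 60
print(pick_target_fps(25))  # misses 16.6 ms, fits 33.3 ms -> 30
```

Locking to a divisor of the refresh rate keeps frame delivery even; a 25 ms frame shown every other refresh looks smoother than frames arriving irregularly at 40-ish FPS.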

 

As hardware gets faster and more powerful, it generally becomes easier to reach higher framerates. It still requires making smart choices, but it is easier to stay below 10 milliseconds when you have eight 4 GHz processors, 25 MB of cache, and a GPU speed measured in teraflops; far easier than to maintain it on a 66 MHz processor with 4 MB total memory and a maximum of 2048 triangles every 60 Hz frame.


Basically what you need to do:

 

1. Check your graphics. It is performance hog #1!

 

First thing to do is to minimize your draw calls. Draw calls are basically instructions sent to the GPU to render a new object on screen. They are a very big bottleneck for multiple reasons. First, they have CPU overhead, and once your draw calls get into the multiple thousands, that CPU-side work starts to fill the CPU cores, which are also needed for the scripts, AI and physics logic running in your game; ESPECIALLY in engines that are not very multithreaded, where a main thread needs to do most of the work AND also prepares the draw calls for the GPU.

The second reason draw calls are such a bottleneck is that they are often "context switches" for the GPU. Everything gets "reset", as a new draw call might mean a new shader, new settings, and whatnot. This takes time.
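A common way to cut those context switches is to sort the visible objects by expensive state (shader, material) before submitting, so consecutive draws share state and only actual changes cost anything. A simplified sketch; the `Draw` record and the commented-out `set_shader`/`submit` calls are hypothetical stand-ins for a real graphics API:

```python
from dataclasses import dataclass

@dataclass
class Draw:
    shader: str  # state that is expensive to change on the GPU
    mesh: str

def submit_sorted(draws):
    """Sort by shader so identical state is set once per run of
    objects, not once per object; returns the state-change count."""
    state_changes = 0
    current = None
    for d in sorted(draws, key=lambda d: d.shader):
        if d.shader != current:
            current = d.shader  # set_shader(d.shader) on a real API
            state_changes += 1
        # submit(d.mesh) would go here
    return state_changes

draws = [Draw("lit", "a"), Draw("unlit", "b"), Draw("lit", "c")]
print(submit_sorted(draws))  # -> 2 state changes instead of 3
```

Real engines sort on a packed key (shader, texture, depth), but the principle is the same: make expensive state changes as rare as the scene allows.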

 

Now, there are many things you can do to minimize draw calls, and with newer APIs (DX12 and Vulkan), the API itself tries to batch draw calls so that there is less CPU overhead.

But most important is to keep your scene under control: combine static objects, render only what is visible, and check your shaders (because complex shaders can require multiple draw calls per object).
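"Only render what is visible" usually means frustum culling. A crude stand-in for it is to cull anything whose bounding sphere lies entirely beyond the camera's far plane; real engines do the same sphere-vs-plane test against all six frustum planes:

```python
import math

def visible(obj_pos, obj_radius, cam_pos, far_plane):
    """Crude culling sketch: keep objects whose bounding sphere
    is within the camera's far-plane distance."""
    dx = obj_pos[0] - cam_pos[0]
    dy = obj_pos[1] - cam_pos[1]
    dz = obj_pos[2] - cam_pos[2]
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    return dist - obj_radius <= far_plane

print(visible((0, 0, 50), 1.0, (0, 0, 0), 100.0))   # True: in range
print(visible((0, 0, 500), 1.0, (0, 0, 0), 100.0))  # False: too far
```

Every object culled here is a draw call (and all its GPU work) that never happens, which is why culling is usually the first visibility optimization added.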

 

 

Then you need to check your postprocessing effects. Some of them are REALLY expensive while not adding that much to the scene, or being no more effective than a cheaper method. This is why most PC games give you access to the settings, letting you turn postprocessing effects on or off, and why postprocessing for console games is CAREFULLY selected and tuned by the game devs. On the last console generation, the reason AA was completely missing was simply the weak performance of both the Xbox 360 and PS3, which were quite underpowered compared to PCs after just 2-3 years of their long cycle. Antialiasing tends to be REALLY expensive.

 

 

Another thing to keep in check is lighting. Realtime shadows especially can be extremely expensive... so if a game can get away with baked shadows for static objects and just a few realtime shadows for characters close to the camera, you save a TON of graphics OOMPH the GPU can spend on other things. Start adding realtime shadows to multiple lights and you start to fry even the more powerful GPUs.

Then there are different types of renderers for different lighting scenarios. Forward renderers are usually faster, with less overhead than deferred renderers. But try to light a nighttime city scene with a forward renderer and you either need a TON of clever light faking, or your renderer will slow down quickly.

 

If you can get away WITHOUT lighting, do it! The cheapest lighting is no lights at all. This is why matcap shaders that fake lighting, or vertex colors, or textures with baked lighting are pretty common in mobile games. Depending on the game and the visual style, nobody might notice the missing lighting.

 

 

2. Physics. Can be expensive as hell:

 

The first question is: do you really NEED physics for that specific object or task, or can you fake it without the player noticing? There are many things a physics engine can do to enhance a game, but overusing it means wasting CPU cycles (and physics runs 100% on the CPU, apart from the eye-candy PhysX features exclusive to Nvidia cards, which most developers simply ignore).

For example, you see many "developers" (I will not use less nice names) flogging their half-baked games on Steam using ragdoll physics for enemies being shot. It looks like crap (because that is not how people being shot react), and everyone quickly sees it's just being used to save them the need to create animations for those events... switching to ragdoll mode and imparting a force on the character is much simpler to do.

But it wastes precious CPU cycles on an effect that looks like crap. Playing animations is also not exactly free, especially for skinned meshes, but ragdoll physics adds physics calculations ON TOP of the skinning cost, so it's still a bad idea.

 

 

3. AI... it's also expensive.

 

Which is why most newer AI implementations use a fixed "budget" for their AI calculations. If the budget is used up before the AI calculation is finished, the AI goes with a simpler result. The exact algorithms vary, and I don't know too much about it. I guess you reduce how often AI is calculated per second, which might still be enough for the AI to look convincing. Or maybe you have an iterative solver, just as with physics, that gets more accurate with every iteration... then you just take an earlier, less accurate result that might be "good enough".
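The budgeted, take-the-best-result-so-far idea described above is often called an "anytime" algorithm: iterate while time remains, keep the best answer found, and return whatever you have when the budget runs out. A sketch, with the `improve` callback as a placeholder for a real solver step (here, Newton's method refining an estimate of sqrt(2) stands in for an AI iteration):

```python
import time

def solve_with_budget(initial, improve, budget_ms):
    """Anytime solver: refine the result until the time budget is
    spent, then return the best (possibly rough) answer so far."""
    deadline = time.perf_counter() + budget_ms / 1000.0
    best = improve(initial)  # always do at least one refinement step
    while time.perf_counter() < deadline:
        best = improve(best)
    return best

# Stand-in workload: one Newton step toward sqrt(2) per iteration.
result = solve_with_budget(1.0, lambda x: 0.5 * (x + 2.0 / x), budget_ms=5.0)
```

The point is that the caller controls the cost, not the solver: a 5 ms budget always costs about 5 ms per frame, no matter how hard the problem is.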

 

Different AI algorithms have very different runtime costs, sometimes for not-so-different results. A good AI programmer knows about these different ways to achieve the same result, and might pick a faster algorithm because the game he is implementing the AI for works just as well with the cheaper algorithm.

 

 

4. Lastly, game logic.

 

For most games, AFAIK, not so much of a problem, as game logic is often very simple. Some games, like simulators, still tend to have rather heavy game logic, which might be tightly coupled to physics and whatnot. In some engines the game logic HAS to run on the main thread (Unity, for example) if it interacts with the game objects. There, better programming might save a ton of CPU time for other tasks... some game logic can be bundled and pushed to another thread to make better use of multiple cores.

 

 

5. With all the CPU and GPU bottlenecks, you shouldn't forget memory.

 

There are still a ton of platforms with rather limited memory resources. And depending on the platform, if you run out of memory you get a crash... or the system slows down to a crawl because of swapping.

 

Thus keeping the memory usage in check is also a way to ensure your game runs fast and without crashes.


So in terms of hitting a specific fps target, it all boils down to optimizing your code in terms of  AI, Collision Detection, and or render code?

 

a profiler or timers are used to determine what code is too slow. then that code is optimized to run faster. it may be the type of code you mention (AI etc.) or it may be some entirely different section of code. never make assumptions about what's fast and what's not: test and time everything to be sure.

 

 

 

So in terms of PC games, do developers pick their target hardware specification based on the hardware they developed the game on?

 

you start by determining what the "average" user's PC will be when the game is done. if it takes 2 years to make the game, you have to guess what the average gaming PC will be in two years and target that hardware. this way you don't have minimum system requirements that are too high, yet you have as much hardware power as possible to work with. console hardware is not an ever-moving target, and thus it's much easier to predict what hardware the users will have when the game is released.

 

 

 

Why do some developers aim for 30 frames per second? What's the advantage of that? I would think a developer would want to aim for as high a framerate as possible, since if the target is 30 frames per second then the frame rate could drop below that (to the point where the perception of smooth motion is gone). Is it that the complexity of the game and the algorithms used will never allow the game to reach a frame rate over 30? Or is there something else at play?

 

lower frame rates = more time per frame to do more stuff.

 

higher framerates mean doing less, or optimizing code. nobody wants to cut features out of a game, and optimizing code is more work. higher framerates only increase the smoothness of animation. framerates don't need to be high to be playable, they need to be consistent, i.e. not drop frames. the fact is that you only need to run at 15 fps to be playable; anything faster is just smoother animation, nothing more. you can notice a bit more responsiveness at 30 vs 15 fps, but not at 60 vs 30 fps.

 

many folks render as fast as possible, for smoother animation. this is the primary driver behind higher framerates.

 

some games are more complex than others, or target slower hardware, and thus have lower target frame rates.   



Why do some developers aim for 30 frames per second? What's the advantage of that? I would think a developer would want to aim for as high a framerate as possible, since if the target is 30 frames per second then the frame rate could drop below that (to the point where the perception of smooth motion is gone). Is it that the complexity of the game and the algorithms used will never allow the game to reach a frame rate over 30? Or is there something else at play?

I've worked on a bunch of 30fps games and recently some 60fps ones.

Typically a developer re-uses the same technology and workflows from one game to the next -- nothing is from scratch, but built on the shoulders of your previous work.

If the previous game ran at 30fps on an Xbox, that's your starting point for your next game... and if you want the next game to run at 60fps, that means optimizing your code to run twice as fast! That's a huge demand! Especially as sequels usually have higher demands -- people expect them to look better, so it's actually more than 2x performance to take a sequel from 30fps to 60fps :o

 

To get that done quickly, you can reduce the numbers of things -- fewer characters alive at a time, less animation data, fewer pixels (lower resolution), etc... but then you're reducing quality...

To get it done properly, you need extremely experienced programmers to rewrite the parts of the game that take up the most time... If a senior engine programmer costs $150k a year, and requires 6 months to re-architect one engine system, and there's a dozen systems that need to be rewritten... that's almost a million dollars worth of code!!! Not to mention the extra time that you need to push back your release date depending on how many staff you have on hand...

 

There's also the issue that the CPU and the GPU run in parallel to each other.

(Generally) All the game code runs on the CPU, and graphics mostly runs on the GPU with a decent amount of CPU work too...

If the GPU workload is 33ms per frame, then it doesn't matter how fast your CPU code runs, you can't possibly hit 60fps -- so you may as well settle for 30fps (solid 30fps feels better than an fps that varies e.g. from 40 to 50).

Likewise, if the GPU workload is 10ms, but the CPU workload is 33ms, then again you're stuck at 30fps... so you may as well triple your GPU workload and bring it up to 33ms per frame as well.
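The arithmetic behind both examples above: because the CPU and GPU run in parallel, the frame time is governed by the slower of the two, then snapped up to the next vsync interval. A sketch:

```python
import math

def effective_frame_ms(cpu_ms, gpu_ms, refresh_hz=60):
    """Frame time is the max of the two parallel workloads,
    rounded up to the next whole refresh interval (vsync)."""
    vsync = 1000.0 / refresh_hz
    bottleneck = max(cpu_ms, gpu_ms)  # CPU and GPU overlap in time
    return math.ceil(bottleneck / vsync) * vsync

print(effective_frame_ms(10, 33))  # GPU-bound: ~33.3 ms -> 30 fps
print(effective_frame_ms(33, 10))  # CPU-bound: same ~33.3 ms -> 30 fps
```

This is why speeding up only the cheaper side of the pipeline buys nothing, and why a 30fps target frees budget on whichever side was previously idle.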

 

For console games with fixed hardware, it theoretically is a tradeoff between frame-rate and fidelity / amount of stuff... but in practice, it also largely depends on how much time/money you're able to spend on re-writing your inherited code...

