
Relation between memory used and frames per second


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

14 replies to this topic

#1 buumchakalaka   Members   -  Reputation: 268


Posted 22 December 2012 - 02:12 PM

Hi, I have a question about optimization.
Until now I haven't cared about loading an image more than once. So, for example, if two classes need the same image, I load it twice instead of loading it once and sharing two pointers to it.
But now I need to increase the speed of my games, so I have a question: what is the relation between memory and FPS? Can I get more FPS by using less memory, or is that only useful when memory is limited, as on a mobile phone?

 

Thanks in advance for the help! 

 




#2 rip-off   Moderators   -  Reputation: 8164


Posted 22 December 2012 - 02:44 PM

In general, the less memory your program touches, the faster it will be. And in general, the faster your program completes its game loop, the more frames it can paint to the screen in a given time period.

However, there is no guarantee that reducing memory will increase frames per second. The relationship between the two is complex: framing it as memory versus FPS ignores a lot of other things your program is doing, such as computation and other I/O. In addition, if you are loading the image on demand multiple times, that disk access is also going to have an impact on how fast your game loop runs.

To systematically optimise, and get the best results for the time spent, you need to identify your bottlenecks. Using a tool such as a profiler, or cruder techniques like inserting code to time certain blocks, find where most of the time is being spent. Then, think of an alternative implementation that you believe will reduce the amount of time spent in these bottlenecks. Finally, measure that the changes have actually had the desired effect.

Your alternative implementation should focus on high level gains, such as improved algorithms and better memory layout before resorting to lower level changes.

In the absence of such analysis, I would expect that eliminating unnecessary disk access and removing redundant data will probably help, at least to some degree.

Edited by rip-off, 22 December 2012 - 02:45 PM.


#3 Matias Goldberg   Crossbones+   -  Reputation: 3205


Posted 22 December 2012 - 03:34 PM

To what rip-off said, it also depends on whether you allocate & deallocate memory too often per frame (even regardless of the amount) or allocate once and then reuse & share.

And if you're working with managed code that has garbage collection, that's a whole different world on how memory usage patterns may impact framerate.



#4 ic0de   Members   -  Reputation: 839


Posted 22 December 2012 - 03:39 PM

Hi, I have a question about optimization.
Until now I haven't cared about loading an image more than once. So, for example, if two classes need the same image, I load it twice instead of loading it once and sharing two pointers to it.
But now I need to increase the speed of my games, so I have a question: what is the relation between memory and FPS? Can I get more FPS by using less memory, or is that only useful when memory is limited, as on a mobile phone?

Thanks in advance for the help!

Often it works in reverse: you can sometimes drastically increase your FPS by increasing your memory usage. For example, it is faster to store vertex normals in memory rather than regenerating them each frame. One way to optimize is to find a computation that is repeated continuously and store the result so you only have to do it once. Sometimes a data structure carries extra data that increases memory usage but makes it faster to traverse, thereby increasing FPS. In the example you gave, the performance should be almost identical except at load time, where loading one image is obviously faster.

you know you program too much when you start ending sentences with semicolons;


#5 snowmanZOMG   Members   -  Reputation: 848


Posted 22 December 2012 - 07:54 PM

In my experience, how much memory you use matters very little as long as you're not paging out to disk.  What matters more is how your memory is laid out and the subsequent access patterns to those layouts.  If you don't access memory in a cache friendly way, your performance will die.



#6 buumchakalaka   Members   -  Reputation: 268


Posted 23 December 2012 - 06:25 AM

So, the best I can do is identify where I'm losing speed and try to optimize those parts of the code?

I'm reading about profilers right now, but it seems more complex than I expected.

Do you know any good, simple profiler to start learning with?

 

Also, I'm using SDL; maybe it's time to start with OpenGL?

 

Thank you!



#7 snowmanZOMG   Members   -  Reputation: 848


Posted 23 December 2012 - 07:40 AM

You should definitely profile.  Optimizing without a profiler is just hopeless and foolish for all but the simplest of programs.  What profiler to use depends on your system.  I haven't used Visual Studio in quite some time, but if you're using that then you may be able to use the Performance Analyzer.

 

Another thing you could do is to add timers to your code; enclose pieces of code you want to time so you can figure out how long that portion took.  Be sure to also take note of the total frame time as well so you can see how much time that portion takes up in relation to the entire frame.

 

If you want a poor man's sampling profiler:

 

http://stackoverflow.com/questions/375913/what-can-i-use-to-profile-c-code-in-linux/378024#378024

 

The profiler will tell you what portion of the code to focus your attention on, but it doesn't really tell you anything about why something is slow.  You need to at least have an understanding of algorithms and computer architecture (probably also a little dash of operating systems) to be able to know what that "why" is.

Typically, people inspect the algorithm first, since that usually yields the largest gains for relatively minimal effort.  A terrible algorithm replaced with a good one can yield huge gains, especially as problem sizes increase.  But once you're at a fast algorithm, you may be stuck at a wall that's limited by your particular implementation.  This is where computer architecture knowledge often becomes useful.  People then proceed to replace slow instruction sequences with fast ones and also rearrange data to allow for faster access.  Sometimes, people flat out "cheat" because they know something specific about the problem and can precompute things and start some computation further along because of those precomputed results.

 

The top answer to this question gives a pretty good account of how optimizations usually go: http://stackoverflow.com/questions/926266/performance-optimization-strategies-of-last-resort


Edited by snowmanZOMG, 29 December 2012 - 07:23 PM.


#8 Steve_Segreto   Crossbones+   -  Reputation: 1517


Posted 23 December 2012 - 02:21 PM

One thing to keep at the top of your head is that bad cache behavior in your program can result in an orders-of-magnitude slowdown compared to what the same bad cache behavior would have cost ten years ago!

 

This is because the disparity between processor speed and memory speed has increased exponentially since then. So today more than ever it is important to access memory in cache friendly ways.

 

Of course, you should use a profiler to help you identify these spots. But sometimes you can anticipate them; for instance, if you access data in a "striped" fashion instead of a "linear" fashion, you will likely cause cache misses.



#9 rip-off   Moderators   -  Reputation: 8164


Posted 24 December 2012 - 08:38 AM

The purpose of my post was to demonstrate how one optimises like an engineer, rather than a hacker. Yes, it is complex.
 
However, for this case the advantages are obvious. It is wasteful to load the files multiple times, and it is wasteful to have multiple copies of the same image in memory. You should fix this anyway.


#10 buumchakalaka   Members   -  Reputation: 268


Posted 29 December 2012 - 03:37 PM

One thing to keep at the top of your head is that bad cache behavior in your program can result in an orders-of-magnitude slowdown compared to what the same bad cache behavior would have cost ten years ago!

 

This is because the disparity between processor speed and memory speed has increased exponentially since then. So today more than ever it is important to access memory in cache friendly ways.

 

Of course, you should use a profiler to help you identify these spots. But sometimes you can anticipate them; for instance, if you access data in a "striped" fashion instead of a "linear" fashion, you will likely cause cache misses.

 

In my case I think the speed problem is exactly that, bad memory access, because the problem is very random. Often the game runs perfectly and then suddenly it becomes very slow.

I have many questions about this:

What do you mean by "access memory in a cache-friendly way"? Can you explain with a simple example?

 

About profilers: if I write my own little profiler, like a class that measures the time and the number of calls in each frame, will that be enough for simple optimizations like this?



#11 Matias Goldberg   Crossbones+   -  Reputation: 3205


Posted 29 December 2012 - 05:07 PM

These papers explain all that you need to know about accessing memory in cache friendly ways, and why it's so important. I've sorted them by difficulty of understanding (IMHO).

 

Multi-core is Here! But How Do You Resolve Data Bottlenecks in PC Games?, Michael Wall (AMD), GDC 2008

http://developer.amd.com/wordpress/media/2012/10/AMD_GDC_2008_MW.pdf

 

Pitfalls of Object Oriented Programming, Tony Albrecht (SCEE), GCAP 09
http://research.scee.net/files/presentations/gcapaustralia09/Pitfalls_of_Object_Oriented_Programming_GCAP_09.pdf

 

CPU Caches and Why You Care, Scott Meyers, Ph.D.; ACCU 2011 Conference.
http://www.aristeia.com/TalkNotes/ACCU2011_CPUCaches.pdf



#12 buumchakalaka   Members   -  Reputation: 268


Posted 30 December 2012 - 01:42 PM

Thank you! With this information I can start researching memory optimization.

Edit: Just one last question:
how can memory leaks affect the performance of the application?

Edited by buumchakalaka, 30 December 2012 - 04:40 PM.


#13 snowmanZOMG   Members   -  Reputation: 848


Posted 31 December 2012 - 07:45 AM

It depends on the platform.  On console platforms, it's bad.  You almost cannot leak any memory at all, because consoles don't have the sophisticated virtual memory system that modern desktop PCs do.  When you run out of memory on a console, you simply crash and burn.  But it's unlikely you're working on a console.

 

On desktop PCs, because of their fancy virtual memory systems, memory leaks are still bad, but they're not nearly as catastrophic as on consoles.  The amount of "fast" working memory the operating system can hand out to running processes will slowly decrease, and it will start to page processes out to disk, which is extremely slow.  I like to think of memory use on desktop platforms as mostly about being what I call "a good software citizen": use your share of memory, don't be greedy, and give back what you don't need.  If you do end up using too much, it's usually not the end of the world, but everyone is going to hate you, especially the user of the computer.



#14 buumchakalaka   Members   -  Reputation: 268


Posted 31 December 2012 - 11:12 AM

Thanks, I'm not working on a console, but it's an option for the future, so it's good to know this.



#15 snowmanZOMG   Members   -  Reputation: 848


Posted 31 December 2012 - 09:53 PM

You really should strive to have no memory leaks though.  Even though desktop platforms can be forgiving, it can be a huge nuisance to end users if you leak memory since it will cause the whole system to drag even though it's just your application that's leaking.  Mozilla has been fighting this exact fight for years now, and only recently have they made significant improvements to the way Firefox reclaims memory, specifically from addons, which are a prime source of memory leaks.

 

There are some good points made in this GDC 2008 presentation by Elan Ruskin about a bunch of things related to game development: http://www.valvesoftware.com/publications/2008/GDC2008_CrossPlatformDevelopment.pdf.  I would highly recommend taking a look at slide 25 and onwards for some treatment of memory in games.  A lot of those things are really only necessary for large studios or games that really push limits of systems, but there are a lot of useful tips in there, such as knowing where all the memory is going.








