Should I prioritize CPU or memory usage?



I know this is a rather ambiguous question, and the answer will depend heavily on the software at hand, but generally speaking: if I have a situation where I could prioritize either the CPU or memory, which one should I choose, or which one do you generally choose?

 

For example: if I write code that loads certain areas of the map around the player, I have two options:

a) Each area can be bigger, therefore storing more data in memory, but I wouldn't have to load areas as often.

b) Each area can be smaller, therefore storing a lot less data in memory, but I would have to reload different areas much more often.

 

What do you generally do in these instances?
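To make the two options concrete, here is a minimal sketch of a chunk streamer parameterized by chunk size (all names are hypothetical, and it assumes square chunks and a square load radius around the player):

```cpp
#include <cmath>
#include <set>
#include <utility>

// Hypothetical sketch: keeps the set of map chunks resident around the
// player. A larger chunkSize is option (a), a smaller one is option (b).
struct ChunkStreamer {
    int chunkSize;    // world units per chunk side
    int loadRadius;   // world units to keep loaded around the player
    std::set<std::pair<int, int>> resident;
    int loadCount = 0;

    // Chunk coordinate containing a world position (floor division).
    std::pair<int, int> chunkOf(int x, int y) const {
        return { (int)std::floor((double)x / chunkSize),
                 (int)std::floor((double)y / chunkSize) };
    }

    // Ensure every chunk overlapping the radius around (px, py) is loaded.
    void update(int px, int py) {
        auto lo = chunkOf(px - loadRadius, py - loadRadius);
        auto hi = chunkOf(px + loadRadius, py + loadRadius);
        std::set<std::pair<int, int>> wanted;
        for (int cx = lo.first; cx <= hi.first; ++cx)
            for (int cy = lo.second; cy <= hi.second; ++cy)
                wanted.insert({cx, cy});
        for (const auto& c : wanted)
            if (!resident.count(c)) ++loadCount;   // would hit the disk here
        resident = std::move(wanted);              // unwanted chunks unload
    }
};
```

Walking a player across the map with a large `chunkSize` versus a small one shows the trade directly: the small chunks trigger many more loads, while the large chunks keep more world data resident in memory.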


The thing is, this completely depends on the situation.

 

To give an example: in my game, sound effects are kept uncompressed in RAM, because they're small and several of them can be playing simultaneously, so I trade RAM usage in favor of CPU usage. On the other hand, background music would be quite big if uncompressed, so instead I keep it compressed and decompress it on the fly as it plays (which isn't a big problem because only one track can play at any given time). In this case I traded CPU usage in favor of RAM usage. Ultimately, in both cases it was a matter of finding the better balance of the two resources.
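That per-asset decision could be sketched as a simple policy function (the thresholds and names here are illustrative, not from the post):

```cpp
#include <cstddef>

// Hypothetical policy sketch: decide per audio asset whether to keep it
// decoded in RAM or stream-decode it, based on size and polyphony.
enum class AudioStorage { DecodedInRam, StreamDecoded };

AudioStorage chooseStorage(std::size_t decodedBytes, int maxSimultaneousVoices) {
    // Small clips, or clips played several at once: pay RAM, save CPU.
    if (decodedBytes < 1 * 1024 * 1024 || maxSimultaneousVoices > 1)
        return AudioStorage::DecodedInRam;
    // Long music tracks with one voice at a time: pay CPU, save RAM.
    return AudioStorage::StreamDecoded;
}
```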

 

As for your case: first of all, if you stream, your real bottleneck will be the drive and the filesystem driver, so there's that =P You could load the data compressed and then decompress it only when needed. But first ask yourself whether the format of the data is optimal: if you're storing lots of small values, it'd make more sense to store them as bytes rather than ints (as a bonus, you reduce the chance of cache misses, so it ends up being a CPU optimization as well!). Maybe that change alone would reduce RAM usage enough to make compression or streaming unnecessary, and it'd be a lot easier to implement.
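The bytes-versus-ints point is easy to make concrete (helper names here are illustrative): for a 256x256 tile map whose tile IDs fit in 0..255, bytes quarter the footprint and pack four times as many tiles per cache line.

```cpp
#include <cstddef>
#include <cstdint>

// Footprint of a w*h tile map stored as 32-bit ints (4 bytes per tile).
std::size_t mapFootprintInts(std::size_t w, std::size_t h) {
    return w * h * sizeof(std::int32_t);
}

// Footprint of the same map stored as bytes (1 byte per tile),
// assuming every tile ID fits in 0..255.
std::size_t mapFootprintBytes(std::size_t w, std::size_t h) {
    return w * h * sizeof(std::uint8_t);
}
```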



You almost always have plenty of memory, but accessing a lot of memory will make your program slow. So it's not really a trade-off between memory and CPU speed; it's a question of which option makes for a faster program.

 

Your example isn't a trade-off between memory and CPU at all. It's a trade-off between up-front loading and delayed loading. That is, with smaller map chunks, you can load the map faster initially, but you have to load data more often. You wouldn't want to offload data too often, even for small maps, because that's wasted effort; having extra memory allocated but not accessed won't slow anything down. You may need to offload map data to disk so you don't run out of memory, but generally you can just do that during a save. There's also the question of how big a map chunk should be to offset the fixed cost of loading anything at all.

 

For actual cases of trading CPU processing against using more memory, doing more calculations often wins out. Main (physical) memory is about 100 times slower than an arithmetic operation or a first-level cache access. Furthermore, in a multithreaded program on a multi-core machine, accesses to main memory aren't done in parallel.
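Here are the two styles side by side, as an illustrative sketch: a precomputed lookup table versus redoing the arithmetic. Which one is actually faster depends on whether the table stays cache-resident; a table larger than L1/L2 can easily lose to recomputation.

```cpp
#include <cstdint>

// Memory-bound style: precompute answers into a table and look them up.
// A 256-entry table like this fits in cache; a huge one would not.
struct Squares {
    std::uint32_t table[256];
    Squares() {
        for (std::uint32_t i = 0; i < 256; ++i) table[i] = i * i;
    }
    std::uint32_t lookup(std::uint8_t x) const { return table[x]; }
};

// ALU-bound style: just redo the (cheap) arithmetic every time.
std::uint32_t computeSquare(std::uint8_t x) {
    return (std::uint32_t)x * x;
}
```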

Edited by King Mir


What you want is for your program to work as intended and not have any negative effects on the system.

 

Too high CPU/GPU use means it doesn't work. Too high memory use means it doesn't work.

 

But that's not all. You also need to take the context into account. If your program uses all of the RAM, it might work, but what if the user wants to run another program too? What if one user has a different amount of RAM than another? What if the user doesn't want your program constantly hitting the HDD because it wears the drive out or prevents them from doing video capture or whatever? What if the user has a laptop, and too much processing drains the battery fast or generates too much heat?

 

So for your map-loading example, you want to make it adjustable. You're probably not on a console, so every user and their machine is different. That's why we have graphics options in practically all games (and a lot of them).
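"Make it adjustable" could look something like this (a minimal sketch with hypothetical option names, in the spirit of a graphics-options menu):

```cpp
// Hypothetical sketch: expose the memory/CPU balance as user settings
// instead of hard-coding it, since every machine is different.
struct StreamingOptions {
    int viewDistanceChunks = 4;        // more = more RAM, fewer loads
    bool compressChunksInRam = false;  // true = less RAM, more CPU
};

// How many chunks stay resident for a square region around the player.
int residentChunkBudget(const StreamingOptions& o) {
    int side = 2 * o.viewDistanceChunks + 1;
    return side * side;
}
```

A user on a low-RAM laptop could drop `viewDistanceChunks` and enable `compressChunksInRam`; a desktop user would do the opposite.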

 

For the stuff that doesn't make sense to be adjustable, you need to predict what kinds of resources the application will use and how much, and try to balance it all out so that nothing ends up being a huge issue.

 

Generally this is easy, since one of the approaches is usually easier than the others, so you can just use that one and come back to it later if required. If the approaches are vastly different in every aspect, it's more difficult, since choosing the wrong one could be a complete waste of time (you might need to reimplement the whole thing, not just make a small addition or tweak). For those situations you just need to have experience, know your project's requirements, and make an educated guess.


If you saw what the OS (and the allocator) does when you release memory, you wouldn't free it so casually anymore: it can keep the chunk as unavailable memory for around 10 seconds, and a majority of the time it ends up reallocating the same type of data onto it anyway.

 

Unless there is a huge amount of data and only some of it (still large) is needed at a time over longer periods, in which case freeing is a good decision (see Sik_the_hedgehog's post for a great example).
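A common way to avoid that free-then-reallocate churn is to pool buffers and hand them back out instead of releasing them to the allocator. A minimal sketch (names are illustrative):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical buffer pool: instead of freeing a chunk buffer only to
// allocate one of the same size moments later, park it on a free list.
class BufferPool {
    std::vector<std::vector<std::byte>> free_;
public:
    // Hand out a buffer of the requested size, reusing an idle one if
    // available (its old allocation is reused when the capacity fits).
    std::vector<std::byte> acquire(std::size_t size) {
        if (!free_.empty()) {
            auto buf = std::move(free_.back());
            free_.pop_back();
            buf.resize(size);
            return buf;
        }
        return std::vector<std::byte>(size);
    }

    // Return a buffer to the pool instead of destroying it.
    void release(std::vector<std::byte> buf) {
        free_.push_back(std::move(buf));
    }

    std::size_t idle() const { return free_.size(); }
};
```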
