
emblemstuart

Memory Management?


Recommended Posts

Many game developers allocate a large chunk of heap memory during initialization to store game data in, and perform their own memory management within that chunk. I'm not sure why this is a good idea. It seems to me the operating system is better equipped to deal with fragmentation than an application could be. If the interest is in maintaining an upper bound on memory footprint, that's easy to do with a lightweight system that still lets the OS do the allocation for each resource. Can someone explain to me why it's a good idea to allocate a contiguous chunk of memory for my application? Thanks!

Stuart O. Anderson
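
To be concrete, here is a rough sketch of the kind of scheme I mean (the Arena name and interface are just made up for illustration, not taken from any particular engine):

#include <cstddef>
#include <cstdlib>

// Grab one block from the OS up front, hand out pieces of it by bumping an
// offset, and "free" everything at once by resetting that offset.
class Arena
{
public:
    explicit Arena(std::size_t bytes)
        : m_base(static_cast<char*>(std::malloc(bytes))), m_size(bytes), m_used(0) {}
    ~Arena() { std::free(m_base); }

    void* Alloc(std::size_t bytes, std::size_t align = 8)
    {
        std::size_t offset = (m_used + align - 1) & ~(align - 1); // align must be a power of two
        if (offset + bytes > m_size) return 0;                    // arena exhausted
        m_used = offset + bytes;
        return m_base + offset;
    }

    void Reset() { m_used = 0; }  // release everything in O(1), e.g. at level unload

private:
    char*       m_base;
    std::size_t m_size;
    std::size_t m_used;
};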

It probably has to do with the slowness of allocating memory.
Saves time to just hand out a chunk of already allocated memory rather than to allocate new memory from the system!

Correct me if I am wrong please dudes!

My game: Swift blocks

I should do some speed tests I guess. Is it common to need to allocate new resources during gameplay? How do you deal with fragmentation efficiently?

No, it depends on the amount of dynamic memory allocation per frame for small structures. I have replied in the other thread about memory allocation. Malloc is very slow.

Use several chunks of memory. Probably allocated dynamically, to let the OS use its ability to defragment (through the CPU's logical addressing, i.e. remapping of physical memory segments), but only at a coarse scale, not for small blocks of memory like structure elements.

Google search: pool memory allocation.
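
The basic idea for small fixed-size structures looks something like this (a rough sketch only, not production code):

#include <cstddef>
#include <vector>

// Fixed-size pool: slots are created up front in one contiguous block, and
// allocation/free is just popping/pushing a free list, with no trip to the
// system allocator per object.
template <typename T>
class Pool
{
public:
    explicit Pool(std::size_t count) : m_slots(count), m_free(count)
    {
        for (std::size_t i = 0; i < count; ++i)
            m_free[i] = &m_slots[i];
    }

    T* Alloc()
    {
        if (m_free.empty()) return 0;  // pool exhausted
        T* p = m_free.back();
        m_free.pop_back();
        return p;
    }

    void Free(T* p) { m_free.push_back(p); }

private:
    std::vector<T>  m_slots;  // slots are default-constructed up front
    std::vector<T*> m_free;   // stack of unused slots
};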

quote:
Original post by emblemstuart
Many game developers allocate a large chunk of heap memory during initialization to store game data in and perform their own memory management within that chunk. I'm not sure why this is a good idea. It seems to me the operating system is better equipped to deal with fragmentation than an application could be.

Yes, but the application programmer knows more about what kind of memory is going to be allocated than the operating system does. The OS needs a general-purpose solution which is great for 90% of circumstances but easy to beat if you have enough information. It's not just about fragmentation either; it's about getting all the hard work done at the start of the program so that the allocations during the game don't take too long.
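
For example, even something this simple counts as "doing the hard work at the start" (just a sketch; the names are made up):

#include <vector>

struct Particle { float x, y, z, life; };

std::vector<Particle> g_particles;

void LoadLevel()
{
    // One allocation up front, sized for the worst case.
    g_particles.reserve(4096);
}

void SpawnParticle(const Particle& p)
{
    // During gameplay this never touches the allocator as long as we stay
    // under the reserved capacity.
    if (g_particles.size() < g_particles.capacity())
        g_particles.push_back(p);
}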



Yup.

Also the allocator in the OS (and in the C runtime) can't make any assumptions about usage patterns - it's very general-purpose because it has to work equally well with a business application as it does with your game. It doesn't always do the most ideal thing for a game, particularly when memory is released.

It's rare for allocations to be made during gameplay in a game; usually most are made in a big chunk when the level is loaded and released when the level is unloaded.

Games are often also concerned with locality of reference for the most efficient use of their pages and the data cache. A pool-based allocator that allocates scene graph nodes near to each other in memory, for example, is better than a general-purpose allocator that scatters that data around memory. Good locality of reference means higher in-game frame rates!
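
For example (SceneNode here is just a made-up illustration of the point, not from any real engine):

#include <cstddef>
#include <vector>

// All nodes live in one contiguous array, so a full update walks memory
// sequentially and makes good use of the data cache. The same nodes new'd
// one at a time could end up scattered all over the heap.
struct SceneNode { float transform[16]; int parent; };

std::vector<SceneNode> g_nodes;

void UpdateScene()
{
    for (std::size_t i = 0; i < g_nodes.size(); ++i)
    {
        g_nodes[i].transform[12] += 1.0f;  // stand-in for real per-node work
    }
}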

Games often need to allocate temporary buffers in between more permanent allocations - with a traditional allocator this introduces potential fragmentation OR longer memory release times.
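
One common way to handle those temporaries (sketch only, names invented) is a stack-style scratch allocator: mark at the start of the frame, carve temporary buffers off a separate block, then rewind the mark, so the temporaries never fragment the main heap:

#include <cstddef>
#include <cstdlib>

class ScratchHeap
{
public:
    explicit ScratchHeap(std::size_t bytes)
        : m_base(static_cast<char*>(std::malloc(bytes))), m_size(bytes), m_top(0) {}
    ~ScratchHeap() { std::free(m_base); }

    std::size_t Mark() const           { return m_top; }
    void        Release(std::size_t m) { m_top = m; }  // LIFO release, O(1)

    void* Alloc(std::size_t bytes)
    {
        if (m_top + bytes > m_size) return 0;  // scratch exhausted
        void* p = m_base + m_top;
        m_top += bytes;
        return p;
    }

private:
    char*       m_base;
    std::size_t m_size;
    std::size_t m_top;
};

void Frame(ScratchHeap& scratch)
{
    std::size_t mark = scratch.Mark();
    void* tempBuffer = scratch.Alloc(64 * 1024);  // e.g. sort or skinning scratch
    (void)tempBuffer;                             // ... use it during the frame ...
    scratch.Release(mark);                        // all temporaries gone at frame end
}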

Also, having your own allocator means you can attach debug information to it that's useful to your game.

Generally - you have more knowledge of your use of that memory than the basic allocators do.

--
Simon O'Connor
Creative Asylum Ltd
www.creative-asylum.com

Has anyone done a comparison for a full scale game? That is, tried running with stdlib allocators and then recompiling with custom memory pool allocators? I wonder how big a difference this makes with a modern OS running a typical game?

One reason could be memory leaks. If you use your own memory pools, then at the end of the level (or whatever) you just blast the whole thing. Wee! No worries about missing that one rogue free().

Another reason is knowing exactly how much memory various systems have to work within and enforcing it. Just saying "Your game objects should only be X MB large" to an artist doesn''t work if the game will just ignore that and malloc whatever they require.

Another is fragmentation issues. The PlayStation 2 has no built-in memory management, so calling malloc/free or new/delete willy-nilly will cause fragmentation. Things must (generally) be freed in the reverse order they were malloc'd. That's a pain sometimes.

There are probably more (and better) reasons, but those are a few I can think of right off the top of my head.

Guest Anonymous Poster
emblemstuart here -

(in reply to lorddeath)

Pooled memory is _not_ a good way to deal with unintentional memory leaks. If you want to do that, it would be simpler to put a wrapper around the system malloc.

Same goes for limiting resource sizes - that's just a couple of lines of code to do in combination with a system malloc to get the exact same behavior.
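
Something along these lines is all I mean - a sketch of a wrapper over the system malloc that catches leaks and enforces a budget (all the names and the budget value are made up, and I'm ignoring alignment of the header for brevity):

#include <cstddef>
#include <cstdio>
#include <cstdlib>

static std::size_t g_bytesInUse  = 0;
static std::size_t g_budgetBytes = 32 * 1024 * 1024;  // arbitrary cap for illustration
static long        g_liveAllocs  = 0;

// Thin wrapper over malloc: a leak shows up as g_liveAllocs != 0 at shutdown,
// and any allocation that would blow the budget is refused.
void* GameAlloc(std::size_t bytes)
{
    if (g_bytesInUse + bytes > g_budgetBytes)
    {
        std::fprintf(stderr, "allocation of %lu bytes would exceed the budget\n",
                     (unsigned long)bytes);
        return 0;
    }
    std::size_t* p = static_cast<std::size_t*>(std::malloc(bytes + sizeof(std::size_t)));
    if (!p) return 0;
    *p = bytes;               // stash the size in a small header
    g_bytesInUse += bytes;
    ++g_liveAllocs;
    return p + 1;
}

void GameFree(void* mem)
{
    if (!mem) return;
    std::size_t* p = static_cast<std::size_t*>(mem) - 1;
    g_bytesInUse -= *p;
    --g_liveAllocs;
    std::free(p);
}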

You're right about console systems of course (I'm only familiar with the GBA, though).

I can see three reasons for pooled memory at this point:
1. A general speedup if system mallocs are avoided - a quick Google search indicates that this is generally only true when you're allocating a lot of small structures frequently. Games seem to allocate a bunch of stuff once when loading and then keep it in memory during gameplay. I've read articles about console games needing to stream data from CD quickly, but on consoles this doesn't seem to be an issue anywhere but the video cards (and less so there now than a couple of years ago).

2. Cache coherency - this is definitely a benefit, but I'd need to see some hard data before I believed the speedup justified the effort.

3. Avoiding blocking calls to malloc. This is, in my mind, the most convincing reason not to use the system malloc during gameplay. The OS could take a long time to process a particular request if it tries to do a coalesce or some such as part of the call. Again, I'll run some tests when I have time to see if this could cause a noticeable hiccup in frame rate.
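
When I get around to it, the test would be something like this (very rough; results will obviously depend on the platform and the allocator):

#include <cstdio>
#include <cstdlib>
#include <ctime>

// Crude timing of many small mallocs versus handing out slots from one
// preallocated block. Not rigorous - just enough to see whether there is a
// hiccup worth worrying about.
int main()
{
    const int kCount = 100000;
    const int kSize  = 64;

    std::clock_t t0 = std::clock();
    for (int i = 0; i < kCount; ++i)
        std::free(std::malloc(kSize));
    std::clock_t t1 = std::clock();

    char* block = static_cast<char*>(std::malloc(kCount * kSize));
    std::clock_t t2 = std::clock();
    for (int i = 0; i < kCount; ++i)
        block[i * kSize] = 0;  // the "allocation" is just pointer arithmetic
    std::clock_t t3 = std::clock();
    std::free(block);

    std::printf("malloc/free: %ld ticks, preallocated block: %ld ticks\n",
                (long)(t1 - t0), (long)(t3 - t2));
    return 0;
}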

Stuart

My boss was talking about doing this for the PS2. I don't know how to do what he was saying. Our editor needs to run on PCs.

Allocate a large block of memory.

Make all allocations in that block of memory.

Save that block of memory (at any time).


You just saved your gamestate, ready to load again.

He mentioned fixing the RVAs. I'm not sure how he would do that.

On a console, these gamestates would go on the CD. The level would start up already in progress.
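
One way to make the pointer problem go away entirely (a sketch of the general idea, not necessarily what his engine does) is to store offsets from the block's base instead of raw pointers, so nothing needs patching when the block is reloaded at a different address:

#include <cstddef>

// Objects inside the saved block refer to each other by offset from the
// block's base address. The block can then be written to disk byte-for-byte
// and loaded back anywhere; resolving a reference is one addition.
struct BlockRef
{
    std::size_t offset;  // relative address within the block

    template <typename T>
    T* Resolve(char* blockBase) const
    {
        return reinterpret_cast<T*>(blockBase + offset);
    }
};

struct Enemy;             // hypothetical game object living inside the block

struct GameState
{
    BlockRef firstEnemy;  // stored instead of an Enemy*
    int      enemyCount;
};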
