Memory allocation in practice

4 replies to this topic

#1 Gumgo   Members   


Posted 27 April 2013 - 04:00 PM

So from everything I've read, game engines are one type of software where fine control over how memory is allocated is often very important. I often read that during gameplay you ideally shouldn't allocate any memory at all - it should all be preallocated when the level loads. This seems like a good idea, but in practice there are many cases where it's very difficult to judge what the allocation limits should be. I've got some specific examples in mind, listed below. Could someone give examples of how these decisions are made in practice?


(One thing to keep in mind is that in the particular project I'm working on, the levels are procedurally generated.)


- Enemy limits. Normally the player might be fighting maybe 5-6 enemies at once. But what if the player runs around the level and gathers a huge crowd? Would you allocate for the worst possible case?


- Similarly, with items. Say five players fire bows as rapidly as possible, and there are a ton of arrows stuck in walls. Should I calculate a "max fire rate" and determine the maximum possible number of arrows that could be fired and stuck in walls before they despawn? It seems like it might be really hard to determine these limits for certain gameplay elements. And networked objects complicate things further, since their ordering isn't guaranteed.


- Network buffers. Messages that are guaranteed are queued up until the ACK has been received. But if there's a network blip, the queue might briefly have to get big.


- Objects that have variable sizes. For example, suppose one enemy has a skeletal mesh with 3 bones, and another has 20 bones. Each requires a "state" buffer. But if I allocate space for at most, say, 100 "enemies", I'd have to assume the worst case (they all have 20 bones) unless I did some sort of on-demand allocation.


- Strings. Perhaps they should be avoided altogether. Right now for example, I have a file which describes the "sword" item, and an enemy might spawn "sword" when killed. Should these all be preprocessed into indices?


- Components in an entity component system. If it's possible to dynamically add components (e.g. an enemy is OnFire, Frozen, and Poisoned), should you assume the worst and allocate in case all enemies obtain the largest possible combination of dynamic components?


It seems like a lot of these "worst cases" could potentially be pretty hard to spot. Especially if there's any sort of user-provided scripting system. What is usually done to deal with these types of issues?
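To make the trade-off concrete, the kind of hard-capped, preallocated pool I have in mind might look roughly like this (just a sketch - `FixedPool` and its capacity are made-up names, and spawn simply fails when the budget runs out):

```cpp
#include <cstddef>

// Hypothetical fixed-capacity pool: all storage is reserved up front,
// and spawn() returns nullptr rather than allocating once the cap is hit.
template <typename T, std::size_t Capacity>
class FixedPool {
public:
    FixedPool() : liveCount(0) {
        // Thread every slot onto the free list.
        for (std::size_t i = 0; i < Capacity; ++i)
            freeList[i] = &slots[i];
    }

    T* spawn() {
        if (liveCount == Capacity)
            return nullptr;  // over budget: caller must cope
        return freeList[Capacity - 1 - liveCount++];
    }

    void despawn(T* object) {
        // Returned slot becomes the next one handed out (LIFO).
        freeList[Capacity - 1 - --liveCount] = object;
    }

    std::size_t live() const { return liveCount; }

private:
    T slots[Capacity];        // preallocated object storage
    T* freeList[Capacity];    // slots not currently in use
    std::size_t liveCount;
};
```

The open question is still what `Capacity` should be for each object type.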

#2 Aressera   Members   


Posted 27 April 2013 - 05:44 PM

I don't generally worry too much about it. The main thing is to make sure you're not doing a LOT of heap allocations at runtime, so reuse buffers/vectors, or use a pre-allocated memory pool with your own allocator. The more allocations you do, the more memory fragmentation there will be, and you can run into problems on embedded systems/consoles where virtual memory is not available.
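As a trivial example of buffer reuse: keep one scratch vector alive across frames. clear() resets the size but keeps the capacity, so in the steady state no frame performs a heap allocation (a sketch - `CollisionSystem` and `CollisionPair` are made up):

```cpp
#include <vector>

struct CollisionPair { int a, b; };  // hypothetical payload

// Reuse one scratch buffer across frames instead of building a fresh
// vector each frame: clear() keeps the capacity from the last frame,
// so after warm-up push_back never reallocates.
class CollisionSystem {
public:
    void update() {
        pairs.clear();  // size -> 0, capacity retained
        // ... broad phase fills `pairs` via pairs.push_back(...) ...
    }
private:
    std::vector<CollisionPair> pairs;  // lives for the whole level
};
```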


You're not going to have many problems if you do a few dynamic allocations, the main issue is if you are doing many hundreds or thousands of allocations every frame.


One particular pattern that I've found helpful in reducing dynamic allocations is this:

template < typename T, size_t localCapacity >
class ShortList
{
	public:
		ShortList()
			:	storage( (T*)localStorage ),
				size( 0 ),
				capacity( localCapacity )
		{}
		~ShortList()
		{
			if ( storage != (T*)localStorage )
				free( storage );
		}
		void add( const T& newElement )
		{
			if ( size == capacity )
				reallocate( capacity*2 );
			new (storage + size) T( newElement );
			size++;
		}
	private:
		void reallocate( size_t newCapacity )
		{
			// replace current storage with dynamically allocated storage.
		}
		T* storage;
		size_t size;
		size_t capacity;
		UByte localStorage[localCapacity*sizeof(T)];
};


Basically you have a local storage buffer for a small fixed number of elements which can grow to be any size if necessary. This way, you get the best of both worlds: a statically allocated array, plus the ability to store any number of elements.

#3 frob   Moderators   


Posted 27 April 2013 - 06:44 PM


This seems like a good idea, but in practice there are many cases where it seems very difficult to judge what the allocation limits should be.


This is a solution to a problem.


Do you actually have the problem?  If you don't have the problem then don't use the solution.



On memory-constrained systems, notably handheld devices and game consoles without virtual memory, you need to manage memory extremely carefully. Your game can also be memory-constrained on PC if you come close to the 2GB virtual address space of a 32-bit process.


Are you on such a system?  For example, are you on a device with 4MB of space? Or on older versions of Android with 16MB of maximum heap space?  Or a console like PS3 or X360?


If you are on a PC, are you anywhere close to the two gigabyte heap limit?


If no, then you probably don't need the solution.


Assuming you actually do need that type of solution, part of the design process should be to figure out exactly what the pools should be, and for each pool figure out the approximate space budget.
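Once you've settled on pools and budgets, each pool can be as simple as a linear (bump) arena over a fixed slab: allocation is a pointer bump, and blowing the budget fails immediately and visibly instead of silently growing. A rough sketch (assumes power-of-two alignment; the slab size is whatever the budget says):

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical bump arena: one fixed slab per subsystem, handed out
// front to back. Exceeding the budget returns nullptr, which makes
// overruns easy to catch during development.
class Arena {
public:
    Arena(void* slab, std::size_t bytes)
        : base(static_cast<std::uint8_t*>(slab)), capacity(bytes), used(0) {}

    void* allocate(std::size_t bytes, std::size_t align = 8) {
        // Round the current offset up to the alignment (align must be
        // a power of two for this mask trick).
        std::size_t p = (used + align - 1) & ~(align - 1);
        if (p + bytes > capacity)
            return nullptr;  // over budget
        used = p + bytes;
        return base + p;
    }

    void reset() { used = 0; }  // e.g. at level unload
    std::size_t bytesUsed() const { return used; }

private:
    std::uint8_t* base;
    std::size_t capacity;
    std::size_t used;
};
```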

Check out my book, Game Development with Unity, aimed at beginners who want to build fun games fast.

Also check out my personal website at bryanwagstaff.com, where I occasionally write about assorted stuff.

#4 iMalc   Members   


Posted 28 April 2013 - 03:35 AM

There are two relevant problems here that one could run into:

1. Too many allocations happening per second: In this case you would probably first discover this was a problem through profiling, and you would target the low-hanging fruit, changing only those things to use a memory pool or fixed-size buffer.

2. Memory fragmentation: Harder to detect and harder to deal with, but the solution is more of the same.

And of course, avoid unnecessary allocation. E.g. don't do dynamic memory allocation for things that are small and only exist for the lifetime of a function call. That's what the stack is for.
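For instance, a workspace that only lives for one call belongs in a local array, not behind a new[]/delete[] pair (a sketch - the function and its 64-element cap are made up):

```cpp
#include <cstddef>

// A call-lifetime workspace: the local array costs nothing to "allocate",
// whereas a heap allocation here would hit the allocator on every call.
int sumOfSquares(const int* values, std::size_t count) {
    int scratch[64];  // stack workspace, no heap traffic
    std::size_t n = count < 64 ? count : 64;
    for (std::size_t i = 0; i < n; ++i)
        scratch[i] = values[i] * values[i];
    int total = 0;
    for (std::size_t i = 0; i < n; ++i)
        total += scratch[i];
    return total;
}
```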
"In order to understand recursion, you must first understand recursion."
My website dedicated to sorting algorithms

#5 Gumgo   Members   


Posted 28 April 2013 - 11:25 PM

Thanks for the replies. My target platforms are PC/Mac and iPad. I'm certainly glad to hear that these allocation issues are less of a concern on platforms with virtual memory. From what I've read, the iPad's virtual memory setup is fairly similar to a PC's, except that it doesn't swap memory to disk (which shouldn't happen in a game anyway). Can anyone comment on this?
