
What improves memory performance?

I took that sentence as a rhetorical question -- as in, these calls do god awful stuff behind the scenes!

And it may have been intended in that light, but it is still incorrect. There is nothing 'god awful' going on in a garbage collector - it's a deterministic process, much like everything else in computer science.

There are, of course, performance tradeoffs involved, and various pitfalls you need to be aware of, but there is nothing intrinsically evil about garbage collection.

There's a big difference between
List<Callback> toBeCalledAtLevelEndLeaveMeAloneUntilThen;
and
Graph<RootObject> traverseMeEveryFrameMarkingReachedNodes;
List<EveryObject> traverseMeEveryFrameLookingForNonMarkedNodes;

Yes, I've pitched the simplest of memory management schemes against the dumbest non-generational mark-and-sweep GC, but the point is that for a lot of problems, GCs or even smart-pointers are complete over-engineering. In my made-up straw-man, it's the difference between zero overhead per frame and several milliseconds of cache-misses per frame, for no benefit.

It's not evil, it may just be a lot more complex than is actually required.

Being a systems programmer, when I see anything with random-access memory patterns, such as a GC traversing a graph of all of your objects, it does conjure up the description of "god awful". On my last game we used Lua, which has a very simple GC. We had to customize it quite a bit, and then also re-write a lot of the Lua code to minimize garbage generation, in order to avoid random 8ms spikes in frame-times. We also ran it on a hyper-threaded core during rendering to try and soak up the GC cost for free, which helped, but it also trashed the cache-lines that the renderer was using, slowing it down too.

I'd still use them in the right circumstances, but with guidelines and care! Edited by Hodgman


Being a systems programmer

In my mind, this is really the crux of the matter. I don't disagree with any of your points, but I also am sure that you know enough to correctly implement an alternative to GC (i.e. careful manual resource disposal, or a robust scoped/smart_ptr solution).

A lot of programmers don't have that background, and a surprising amount of the time the garbage collector actually beats naive attempts to manually manage memory (i.e. for many small allocations, it performs much more like a pool allocator than does malloc/new).


A lot of programmers don't have that background, and a surprising amount of the time the garbage collector actually beats naive attempts to manually manage memory (i.e. for many small allocations, it performs much more like a pool allocator than does malloc/new).
Yep, and it also avoids all the other nasties like leaks, dangling pointers and random memory corruption.

In some circumstances, Keep It Simple Stupid might mean "just use the damn GC and don't try and reinvent the wheel", and in other circumstances, KISS might mean "oh god why are you using another complex tree structure when you don't even need to be managing resources to solve this problem".

As usual, generalisations turn out not to be useful.

See also:
Fixed-Size Block Allocator (FSBAllocator): http://warp.povusers...ator/
Boost.Pool: http://www.boost.org/libs/pool

According to the FSBAllocator benchmark, using either the Boost.Pool allocator or the FSBAllocator might yield significant speed-ups over the default allocator in select cases (e.g., the many small allocations swiftcoder mentioned): http://warp.povusers...ator/#benchmark

IMHO, the fact that you can do that (just plug-in a specialized allocator when you need it / when the defaults don't work for you) is a significant advantage over the one-GC-fits-all solutions (where, when the defaults don't work for you, the best you can do is to experiment with GC tuning). Edited by Matt-D


Well it would 'be nice' to be lazy and leave everything to a GC,


Sometimes the developer (or team) may not be quite skilled enough to have a choice and I hope to god that if working in a team, I wouldn't have to use some bodgy monstrosity.

In my spare time, I port software to FreeBSD and notice quite a few cases where developers have tried to hand-roll their own stuff; nothing flags up bugs in this type of thing better than porting to an entirely new platform (and an older version of GCC). So I suggest using a garbage collector unless you really know how to use the language properly.

While I haven't properly touched managed languages for well over 4 years, the Boehm GC works satisfactorily with C++.

For software which needs no clever memory management however, the only solution is tr1/shared_ptr! Edited by Karsten_


For software which needs no clever memory management however, the only solution is tr1/shared_ptr!.

Or if you're using C++11, any of C++'s smart pointers (not in tr1).


[quote name='lawnjelly' timestamp='1347018916' post='4977578']
Well it would 'be nice' to be lazy and leave everything to a GC,


Sometimes the developer (or team) may not be quite skilled enough to have a choice and I hope to god that if working in a team, I wouldn't have to use some bodgy monstrosity.

In my spare time, I port software to FreeBSD and notice quite a few cases where developers have tried to hand-roll their own stuff; nothing flags up bugs in this type of thing better than porting to an entirely new platform (and an older version of GCC). So I suggest using a garbage collector unless you really know how to use the language properly.

While I haven't properly touched managed languages for well over 4 years, the Boehm GC works satisfactorily with C++.

For software which needs no clever memory management however, the only solution is tr1/shared_ptr!
[/quote]

Yup, don't get me wrong, in the majority of apps I'd be all for using all the tricks in the book to make things simpler. Garbage collection, you name it.

Sorry if I come off as opinionated on the subject, I was a bit unfair on you Karsten .. I've had to deal with the mess caused in the past and it's not been pleasant. It's not very fair when people's jobs are on the line, and their families depending on them etc.

It's just in the specific case of (professional) games, particularly on fixed low memory devices (and some other software on embedded systems), my personal belief is that controlling the memory yourself can be the best option. That doesn't mean it's necessarily the best approach for people learning .. it's more an approach for making a solid professional product.

The two main reasons I would argue for this are:

Stability
Predictable timing

Stability - no worries about failed allocations .. your game will run each time, every time, no matter how many levels you load, what combinations of objects need to be loaded. There's no, ah but if character B walks round the back of building A, carrying object C and opens the door on level BLAH, then it crashes. Sometimes. Which is pretty much what you don't want to hear about when you are trying to ship something. Or what happens if someone is running such and such a program in the background in a multitasking environment.

Of course it's possible you could get round this to some extent with your garbage collection system - if it can allow you to pre-reserve your memory (depending on its implementation regarding fragmentation), and if you keep a tight handle on your numbers of various objects. But once you get to this extent you are almost doing the work yourself anyway.

The other is that there is no question over the time taken over a deallocation / allocation. It is determined by your code and can be tightly determined - usually a constant very short time. There's no worry about dropping frames etc. Using a third party allocation / deallocation system leaves you at the mercy of their implementation. That's not to say there aren't good implementations, but there are also bad ones, and worst cases. Windows for example is quite happy to grind to a halt and do some disk swapping when it thinks it's necessary during an allocation / deallocation.

I fully understand that it can be a bit of extra effort (sometimes quite a bit) to manage memory yourself, although it's usually mainly a one off cost setting up your project. But development isn't just the time putting the code together, it's also beta testing, trying lots of different scripts, game levels, combinations of factors. In this situation the more potential problems you can remove the better.

If you are working to a time schedule with milestones and a budget and staff costs to pay, the last thing you want is some vague uncertainty over 'yeah it may take 2 years to beta test this thing'. That's one of the (several) reasons why games get canned / companies go under.

But anyway, at the end of the day it's up to whoever is technical lead on a project to make these kinds of decisions. Right, I'm tired, that's enough essaying, it's bedtime!


The other is that there is no question over the time taken over a deallocation / allocation. It is determined by your code and can be tightly determined - usually a constant very short time.

So, I mostly agree with the rest of your post, but this point isn't quite as straightforward as you suggest.

Malloc/new are not deterministic. The cost of an individual allocation is generally much higher than that of a garbage collector, and it is not a fixed cost. But you do get the (to my mind, dubious) benefit that the performance cost is incurred at the call site (whereas garbage collection incurs a performance cost at an indeterminate later date).

If you actually need deterministic allocation cost, then you have to go with other solutions (probably ahead-of-time allocation: pool allocators, SLAB allocators, etc.)


Or if you're using C++11, any of C++'s smart pointers (not in tr1).


Agreed, although I did mention I was using an older version of GCC (due to the old BSD compatible license). When possible I will always take advantage of newer features of the C++ language!


Sorry if I come off as opinionated on the subject, I was a bit unfair on you Karsten .. I've had to deal with the mess caused in the past


Heh, no worries. I seem to be on the wrong side of this argument anyway because I am usually the first to advocate the use of manual memory management, RAII and simple clean solutions. ;)

I find deterministic destruction plays a much bigger part in my software than simply cleaning up memory, too. For example, if a unit of execution (i.e. a thread) is running within a class (or the class holds references to one), then that object will never be flagged for disposal by the GC. I find this quite a critical design flaw in most GC'd languages, since what I really want to happen is that once the object goes out of scope, the class joins the thread and deallocates in an elegant, exception-safe manner.
The only .NET language that seems to support this is C++/CLI since you can use auto_handle<T> as a means to implement the RAII pattern.

Slightly offtopic...
I don't know if anyone else noticed that Apple has recently deprecated garbage collection for Objective-C in 10.8. Quite an interesting decision, showing that perhaps they feel that manual memory management (or reference counting) isn't much harder than relying on a GC, or at least that the performance will be superior.
http://developer.app...troduction.html

"Garbage collection is deprecated in OS X Mountain Lion v10.8, and will be removed in a future version of OS X" Edited by Karsten_

"INTERIOR DEBATE ROOM - NIGHT TIME"

AS THE SMOKE CLEARS, A BIT OF HAZE STILL DRIFTS JUST LINGERING ON THE FLOOR.

Is everyone finished? Cool, Group Hug Everyone! Com'on!


"INTERIOR DEBATE ROOM - NIGHT TIME"

AS THE SMOKE CLEARS, A BIT OF HAZE STILL DRIFTS JUST LINGERING ON THE FLOOR.

Is everyone finished? Cool, Group Hug Everyone! Com'on!

You act like we're fighting... So far, I'd say this has been a very civil discussion :) People are just giving various opinions and technical facts, which will hopefully leave any future readers further enlightened, albeit perhaps no less undecided.


I don't know if anyone else noticed that Apple has recently deprecated garbage collection for Objective-C in 10.8. Quite an interesting decision, showing that perhaps they feel that manual memory management (or reference counting) isn't much harder than relying on a GC, or at least that the performance will be superior.
http://developer.app...troduction.html

"Garbage collection is deprecated in OS X Mountain Lion v10.8, and will be removed in a future version of OS X"

Not quite.

Yes, Objective-C garbage collection is being removed (and was never that widely used to begin with, partly because of the lack of iOS support).

However, as your link indicates, Apple is replacing it with a system called 'ARC' (Automatic Reference Counting). Effectively, they have modified their compiler to spit out all those retain/release calls for you, and it does a much more reliable job of it than a human could.

I wouldn't really call that 'manual memory management'. It's still a fully automated garbage collector, just one based on internal reference counting (similar to Python's old garbage collector).

And sadly, it suffers from the age-old deficiency of reference-counting systems: the need to explicitly annotate weak references. Edited by swiftcoder


[quote name='lawnjelly' timestamp='1347050343' post='4977804']
The other is that there is no question over the time taken over a deallocation / allocation. It is determined by your code and can be tightly determined - usually a constant very short time.

So, I mostly agree with the rest of your post, but this point isn't quite as straightforward as you suggest.

Malloc/new are not deterministic. The cost of an individual allocation is generally much higher than that of a garbage collector, and it is not a fixed cost. But you do get the (to my mind, dubious) benefit that the performance cost is incurred at the call site (whereas garbage collection incurs a performance cost at an indeterminate later date).

If you actually need deterministic allocation cost, then you have to go with other solutions (probably ahead-of-time allocation: pool allocators, SLAB allocators, etc.)
[/quote]

Ahha .. this may be where the confusion lies.

I didn't want to suggest 'using malloc / free at runtime is better than garbage collectors'. Far from it... they both have related downsides.

In C++, if you override new, you don't need to use OS calls for memory management. You can use whatever system you want for grabbing memory from wherever you want, and then you have the opportunity to call the constructor yourself with placement new.

In addition, there is a distinction between one-off allocations at startup (with their corresponding deletion at shutdown) and dynamic use (i.e. the kind of thing you might do lots of times in a frame). The second case is what we are interested in here. For actually reserving your memory at startup, you could use whatever you want... an OS heap, a garbage collected system. Ultimately your memory has got to come from somewhere.

(There is also the slightly less stringent case of level load / unload, where you *could* if necessary be a bit more lenient / take some shortcuts on some platforms).

What we are after in games, in an ideal world, for dynamic allocation (things that happen a lot rather than just at startup and shutdown) is stability (no failed calls) and constant-time (and fast) allocation and deallocation.

Sorry I should have been more clear on this. I would on the whole use things like fixed size memory allocators (and potentially other constant time allocators) for things that need to be created / destroyed dynamically (see my first post on page 1). You can use this for constant time incredibly fast allocations / deallocations, suitable for things like nodes in algorithms, even particle type systems.

For things that are truly variable size (levels etc) the tradeoff can be to prereserve space at startup for worst case, and work with that. Alright you lose a bit from the theoretical maximum, but you gain in simplicity and stability. On levels with not much geometry, you can e.g. add more sound, or more textures, and vice versa. For your level file you can prepack into the best format possible, with zero fragmentation, and make use of the whole of your budget in megs. If you need to use more than this, then you need to support streaming of level data on the fly (this is a whole other topic with similar concerns, guess what, you can use fixed size bank slots for this too!).

You can do this for GPU resources too .. reserve e.g. 5000 verts for a character and then stick to that budget or lower for your artwork, and you can guarantee they will always fit in that 'slot'.

You can also pre-designate blank 'slots' for various items in the level data RAM allotment to give more flexibility, if it seems a better idea than deciding ahead of time the maximum number of item 'blah'. If you do this you get the benefit of zero fragmentation, and best use of memory for that level.

In short, there are lots of handy 'helper' bits of functionality offered to programmers, like 'general purpose' heaps, variable-size strings etc. There are whole languages dedicated to making things 'easier' for the programmer, where these things are a given (BASIC, PHP, etc.). In most situations this is a real benefit because it makes you much more productive as a programmer - less code, simpler code, less potential for bugs - and the 'costs' are not going to be apparent to the user.

It's just that in some situations, particularly time critical applications, and those on limited memory devices, it can become worth it to not use some of the helper functionality. An extreme example would be missile control software. You might have limited memory. If your program crashes, people die. If your program takes too long to faff around restructuring the heap, people die. It's only if it works predictably and as per spec that the right people die.

Other examples where you have to be a bit more stringent include things like financial software, medical software, some engineering software.

Would you want the nuke heading towards your neighbour's house programmed in Java with garbage collection, or C++ with no external allocations? I know which one I'd rather have heading towards my neighbours.

(edit) Some good search terms to google in this area are : 'real time programming', and 'mission critical programming'. (/edit)


Would you want the nuke heading towards your neighbours house programmed in java with garbage collection, or c++ with no external allocations? I know know which one I'd rather have heading towards my neighbours.
C++? I'd hope they'd instead use a less error-prone language like Ada.
p.s. why are my neighbours being nuked? I'm pretty screwed if that happens.

If you need to use more than this, then you need to support streaming of level data on the fly (this is a whole other topic with similar concerns, guess what, you can use fixed size bank slots for this too!).

Yeah, on the last streaming platformer/adventure game I worked on, we allocated 3 big chunks of RAM that were given to physical areas of the game world. We'd always have two "chunks" of a level present, and a 3rd one being streamed in. Every level chunk therefore had the same maximum memory limit, and the level designers would have to cut up the chunks (and design their line-of-sight blockers / chunk transition areas) so that this limit was respected.
For things that are truly variable size (levels etc)
IMHO, the level compiler tool should be able to determine the maximum required runtime size for a level, so when loading it, you can just malloc that much memory and stream the level data into it. Ideally, the level data would also be "in-place" serialized, so there's no "parsing"/"OnLoad" processing that needs to be done to it.

To allow large complex files to be loaded as a single large allocation, I use a bunch of custom classes to reimplement the basic C concepts of the pointer, array and string.
e.g. If you had a group of widgets to load, along the lines of:

struct Widget
{
    char* name;
    Vec3 position;
    Widget* parent;
};
struct WidgetFile
{
    int numWidgets;
    Widget* widgets;
};

I'd instead use:

struct Widget
{
    Offset<String> name;
    Vec3 position;
    Offset<Widget> parent;
};
struct WidgetFile
{
    List<Widget> widgets;
};

And then the data compiler tool could spit out a file such as below, and I'd just be able to read the whole file in and cast it to a WidgetFile without parsing it or having to make a lot of small allocations:

 0 0x00000002 //WidgetFile::widgets::count
 4 0x00000028 //widgets[0].name: 40 byte offset to {5,"Frank"}
 8 0x00000000 //widgets[0].position.x
 C 0x00000000 //widgets[0].position.y
10 0x00000000 //widgets[0].position.z
14 0x00000000 //widgets[0].parent: NULL
18 0x0000001B //widgets[1].name: 27 byte offset to {3,"Bob"}
1C 0x00000000 //widgets[1].position.x
20 0x00000000 //widgets[1].position.y
24 0x00000000 //widgets[1].position.z
28 0xFFFFFFDC //widgets[1].parent: -36 byte offset to widgets[0]
2C \5Fra      //*widgets[0].name
30 nk\0\3     //*widgets[1].name
34 Bob\0
I would on the whole use things like fixed size memory allocators (and potentially other constant time allocators) for things that need to be created / destroyed dynamically (see my first post on page 1). You can use this for constant time incredibly fast allocations / deallocations, suitable for things like nodes in algorithms, even particle type systems.

Yeah, I agree. In my engine, if something needs to allocate memory, then I have to pass it an appropriate allocator -- new/delete/malloc/free are banned (globals are bad).
And I don't mean that I pass around some abstract "Allocator", or even a fixed-concept allocator (like the C++ containers use) -- different systems will require different concrete allocators (which might have different interfaces and semantics). An algorithm that needs to temporarily build a large list internally might need to be passed a stack of bytes to use as scratch memory, a system that spawns monsters might need to be passed a monster-pool, etc...
My bread-and-butter allocator (kind of equivalent to shared_ptr+new in general C++ code) is just called Scope (and is used with a custom 'new' keyword) - it uses a stack-allocator internally, but any 'newed' objects are bound to the lifetime of the Scope object (like the "automatic" / non-heap variables that we're used to). You don't have to delete them and can't leak them -- they're destructed when the Scope object is destructed. Scope objects are usually allocated inside other Scope objects, which we should all be used to. I find this a much simpler, more efficient and less error-prone way to manage heap allocations than the traditional C++ solutions. The start of my game might usually look something like:

MallocStack memory( eiMiB(256) );
Scope a( memory.stack, "main" );
eiNew(a, Game)(a, "foobar");
Edited by Hodgman


stuff


Yup, that's pretty much how we ended up doing it too! Snap, lol. I think it probably ended up as a malloc on a reserved heap on the PC build for the level file, and just loading into the prereserved block on consoles.

For the streaming I think I had more chunks (I called them banks), maybe 8 or 16 something like that, then parts had the option to use e.g. 2 banks worth.

For deciding which banks needed to be loaded I used a PVS calculated from the artists' levels and portals, and a potentially-loadable set derived programmatically from this. There were areas though, I'm sure, where the artists had overcooked it and they needed to put in visibility blocks of some kind. I think the tool chain alerted them to this. Worked a charm, especially with decent asynchronous streaming support.

Getting way off topic though there, hehe!
