How to minimize memory consumption

Started by BBB; 10 comments, last by Spoonbender 18 years, 8 months ago
Any hints/tricks/links? What should I keep in mind, and what shouldn't I worry about? I'm fairly good at high-level design but don't know much about the inner workings of modern C++ compilers. And before everyone starts screaming "RAM is cheap!", just take a look at the memory consumption of Gnome (and KDE is even worse). 11.8 MB of resident memory for a simple WEATHER APPLET!? Is it just me, or is that an insane amount? I mean, we're not talking about Unreal 3 here... Thx // BBB
Make sure you delete everything you new? Stream big audio/video files? Use managers to make sure you don't load the same resource (e.g. textures or sounds) more than once?

If you don't do anything particularly stupid I'm not sure I'd worry too much about it.
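For the last point, something along these lines works (a rough sketch - Texture and LoadTextureFromDisk are just hypothetical stand-ins for whatever you actually use):

#include <map>
#include <string>

struct Texture { /* pixel data, dimensions, etc. */ };

// Stand-in for your real loading code.
Texture* LoadTextureFromDisk(const std::string& /*path*/) { return new Texture(); }

class TextureCache
{
public:
    // Returns the already-loaded copy if this path was requested before,
    // so the same file never ends up in memory twice.
    Texture* Get(const std::string& path)
    {
        std::map<std::string, Texture*>::iterator it = textures.find(path);
        if (it != textures.end())
            return it->second;
        Texture* tex = LoadTextureFromDisk(path);
        textures[path] = tex;
        return tex;
    }

    // The cache owns the textures; free each one exactly once.
    ~TextureCache()
    {
        for (std::map<std::string, Texture*>::iterator it = textures.begin();
             it != textures.end(); ++it)
            delete it->second;
    }

private:
    std::map<std::string, Texture*> textures;
};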
Do you have a specific program in mind? If not, it's very difficult to give general advice. The only things I can think of are to clear your containers once you're done with them (not normally relevant, but I'm currently working on a program centered around a recursive function, so in my case it was a good idea to clear the containers I was finished with before making the recursive call), and to make sure you don't allocate more than you need unless you expect to need more soon.
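One wrinkle with clearing containers: clear() on a std::vector usually keeps the old capacity allocated. The usual way to actually hand the memory back is the swap trick:

#include <vector>

void ReallyFree(std::vector<int>& v)
{
    // v.clear() would keep the capacity around; swapping with an empty
    // temporary releases the allocation as well.
    std::vector<int>().swap(v);
}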
"Walk not the trodden path, for it has borne it's burden." -John, Flying Monk
Quote:Original post by BBB
I want to avoid using 13 MB RAM for something as simple as a mini commandline applet.


A command line applet in under 13 MB... impossible!!!! :-)

I support you 100%. I have 512 MB in my laptop, but for some reason the virtual memory manager is always thrashing around moving stuff to and from the hard drive. It annoys me to no end. Looking at the task manager right now, Outlook is taking up 35 MB of memory and IE is taking up 53 MB. Even though I have 6 web pages open at once, I feel that 53 MB is excessive.

I can't give any great advice; it all depends on what you're doing. What is your mini commandline applet supposed to do? I can't see your overhead getting too big. Your biggest overhead is probably things you can't control very well: system resources. As long as you release resources as soon as you're done with them, you will probably occupy no more than a handful of KB more than if you'd dealt with things optimally.
In Windows, a large part of the overhead comes from allocating system resources to open a window. Console apps can avoid most of this overhead, but anything that needs a window to draw to takes 10+ MB of RAM on average.
I've never had a problem with one of my programs taking up too much memory without some kind of memory leak, so I can't really offer any guidelines beyond obvious code-level stuff.

I think that a lot of IE's memory usage is image buffers so it doesn't constantly have to redraw the page you're looking at, but I'm not certain.
"Walk not the trodden path, for it has borne it's burden." -John, Flying Monk
I really hate bloat, but it is important to understand what you are measuring. The Task Manager "mem size" ("working set" as defined by Win32 API) can be misleading both ways - it includes shared DLLs and doesn't count swapped pages. Use perfmon instead to determine "private bytes" (nonshareable committed mem) and "virtual bytes" (address space) usage.
The latter is not a problem; your app has several hundred MB to burn before you start impinging on preferred DLL load addresses. A handy trick for resizeable arrays that do not waste *physical* memory but also do not invalidate iterators is to preallocate virtual address range and only commit the pages you need.
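For the curious, the reserve/commit trick looks roughly like this with the Win32 VirtualAlloc API (just a sketch; names and growth policy are made up, and the OS rounds commits up to whole pages):

#include <windows.h>

struct GrowableBuffer
{
    unsigned char* base;     // start of the reserved address range
    SIZE_T reservedBytes;    // total address space reserved up front
    SIZE_T committedBytes;   // how much is backed by real pages so far

    bool Init(SIZE_T maxBytes)
    {
        reservedBytes = maxBytes;
        committedBytes = 0;
        // Reserve address space only - no physical memory is used yet.
        base = (unsigned char*)VirtualAlloc(NULL, maxBytes,
                                            MEM_RESERVE, PAGE_NOACCESS);
        return base != NULL;
    }

    // Make sure at least neededBytes are usable; existing data never moves,
    // so pointers/iterators into the buffer stay valid.
    bool EnsureCommitted(SIZE_T neededBytes)
    {
        if (neededBytes <= committedBytes) return true;
        if (neededBytes > reservedBytes)   return false;
        void* ok = VirtualAlloc(base, neededBytes, MEM_COMMIT, PAGE_READWRITE);
        if (!ok) return false;
        committedBytes = neededBytes;
        return true;
    }

    void Release() { VirtualFree(base, 0, MEM_RELEASE); }
};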

Quote:In windows, a large section of the overhead comes from allocating system resources to open up a window.

Hm, doubtful. I'd call the top two culprits managed environments like Java or .NET (*) and libraries.
I have written code (in assembly) that creates a window, sets up OpenGL, renders a 3D chess environment and plays back a famous game - for a grand total of about 1400 bytes. (This kind of thing tends to make people who say "compilers do a better job" hold their tongue :) )


* Funny quote from some Slashdot discussion:
Quote:
> "I sometimes find distributing an extra 30 MB [the .net run-time] a bit inconvenient."

Please download my new folding bicycle! Its lightweight aluminum frame means it's easily portable and rugged too! When not biking, fold it up and carry it under your arm like a briefcase! The perfect form of personal transportation!*

* Note: lightweight bicycle application is powered by a Soviet-era nuclear submarine, a separate download.
E8 17 00 42 CE DC D2 DC E4 EA C4 40 CA DA C2 D8 CC 40 CA D0 E8 40E0 CA CA 96 5B B0 16 50 D7 D4 02 B2 02 86 E2 CD 21 58 48 79 F2 C3
General tips?...

- use a custom memory manager that has decent tracking facilities, preferably attaching file and line information to each allocation in "profile" builds. Every once in a while during development, take a snapshot of the memory state and dump it as something like a CSV file (so you can load it into a spreadsheet and similar packages). Sort the spreadsheet by memory size then file/line location and go through every allocation made in your application and ask "is this allocation sane?". This is a common practice I've used at various companies during development of console games where memory is severely limited.
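A very stripped-down version of the tracking idea (the real thing records each allocation in a table and dumps it to CSV later, rather than printing):

#include <cstdio>
#include <new>

// Extra overload of operator new that knows where the allocation came from.
void* operator new(std::size_t size, const char* file, int line)
{
    void* p = ::operator new(size);      // normal allocation
    // A real manager would store (p, size, file, line) in a table here,
    // remove the entry again in the matching delete, and dump the table
    // as CSV in profile builds.
    std::printf("alloc %lu bytes at %s:%d\n", (unsigned long)size, file, line);
    return p;
}

// Matching overload, only called if a constructor throws during the new above.
void operator delete(void* p, const char*, int)
{
    ::operator delete(p);
}

// In "profile" builds only, route every new through the tracking overload:
// #define new new(__FILE__, __LINE__)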


- be aware of alignment of data types within structures:
struct Padded { char x; long y; char z; long w; };

versus
struct Packed { long y; long w; char x; char z; };

With some machine architectures, the compiler will place padding after the "char" members in the first example so that each "long" is aligned at a nice address (cache line, SIMD requirement, or an "all longs at an even memory address" requirement with some non-x86 CPUs). General rule of thumb: order structure members by largest type first.
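If you want to see the difference on your own compiler, something like this prints the two sizes (exact numbers are compiler- and platform-dependent; on a typical 32-bit build expect around 16 vs. 12 bytes):

#include <cstdio>

struct Padded { char x; long y; char z; long w; };
struct Packed { long y; long w; char x; char z; };

int main()
{
    // The reordered version should never be larger than the padded one.
    std::printf("Padded: %u bytes, Packed: %u bytes\n",
                (unsigned)sizeof(Padded), (unsigned)sizeof(Packed));
    return 0;
}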


- learn the overhead(s) of your memory allocator. A general-purpose allocator usually needs some way of knowing which memory is allocated and how large each allocation is; this bookkeeping is usually at least 4 bytes per allocation, and can be as high as 64 bytes or more! Once you know the overhead of an allocation, take another look at the dump of your application's memory state (see first point) - lots of little 4-byte allocations at load/initialisation time that you thought made sense suddenly start to look a bit nasty...
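As a purely hypothetical back-of-the-envelope example (the header and alignment figures below are assumptions - measure your own allocator, e.g. with _msize on MSVC or malloc_usable_size on glibc):

#include <cstdio>

int main()
{
    // All of these numbers are assumptions for illustration only.
    const unsigned count     = 10000;  // tiny allocations
    const unsigned payload   = 4;      // bytes actually requested each time
    const unsigned header    = 8;      // assumed bookkeeping per block
    const unsigned alignment = 16;     // assumed minimum block granularity

    unsigned perBlock = ((payload + header + alignment - 1) / alignment) * alignment;
    std::printf("requested %u bytes, likely consumed ~%u bytes\n",
                count * payload, count * perBlock);
    return 0;
}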


- related to the above, for final/release builds - make the structure of a file on disk *exactly* the same as the structure that file will end up in once it's in memory, then just "fix up" offsets into pointers at load time. You potentially save lots of allocator overhead compared to picking at little portions of the file, allocating memory, then copying small parts into structures in memory. You get the added benefit of significantly improving loading times (if done right).
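Very roughly (with a made-up on-disk format and no error handling, just to show the idea of one allocation plus pointer fix-up):

#include <cstdio>
#include <cstdlib>

struct MeshHeader
{
    unsigned vertexCount;
    unsigned vertexOffset;   // byte offset from start of file on disk...
    float*   vertices;       // ...overwritten with a real pointer at load time
};

MeshHeader* LoadMesh(const char* path)
{
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return 0;
    std::fseek(f, 0, SEEK_END);
    long size = std::ftell(f);
    std::fseek(f, 0, SEEK_SET);

    // One allocation for the whole file - no per-structure news, no copying.
    unsigned char* block = (unsigned char*)std::malloc(size);
    std::fread(block, 1, size, f);
    std::fclose(f);

    MeshHeader* mesh = (MeshHeader*)block;
    // Fix up: turn the stored offset into a pointer inside the same block.
    mesh->vertices = (float*)(block + mesh->vertexOffset);
    return mesh;   // release everything with a single free() when done
}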


- to get a small executable size, there are file stripper programs available which will remove everything not strictly necessary from PE EXEs, ELFs, etc. To get an even smaller executable size, you can use an EXE packer such as UPX, which decompresses the executable in memory at load time.


- to get a really tiny executable size (<=64 KB), you need to start using ASM in at least some parts of your code, and to think hard about re-using (and generating) both code and data; it's not something I'd generally recommend these days, but it is fun to do at least once - it's how that C64 stuff was done (many moons ago I used to be a demo coder, and writing 1 KB Amiga demos for fun was very educational).


- there are compiler options that can help you. Some compilers can remove all duplicate literal strings automatically, and some have whole-program optimised linking (compilers usually spit out a bunch of *separate* object files, and the linker then blindly links them together - even if there's code that could be shared between them).

Simon O'Connor | Technical Director (Newcastle) Lockwood Publishing | LinkedIn | Personal site

Quote:Original post by Jan Wassenberg
[...]I have written code (in assembly) that creates a window, sets up OpenGL, renders a 3D chess environment and plays back a famous game - for a grand total of about 1400 bytes. (This kind of thing tends to make people who say "compilers do a better job" hold their tongue :) )[...]
And I've written tic tac toe (with AI) in assembly using under 512 bytes, but that isn't the point.

The question is how much memory your program uses while running. Nobody cares about how much space the exe takes up - I've never seen one 'too big' unless you're still moving stuff around on floppies (in which case a compression tool will usually bring it down to size).
"Walk not the trodden path, for it has borne it's burden." -John, Flying Monk
Quote:I want to avoid using 13 MB RAM for something as simple as a mini commandline applet.


It is my (possibly incorrect) understanding that stripping the executable reduces the size of the executable file, but not memory usage because what is stripped isn't loaded into memory anyway.

[google]"gcc optimize for size" returns "-Os"

Generally, unless you are linking huge libraries the size of your text segment (code in memory) is not going to be significant so you shouldn't fret too much about it. Your brain cycles are much better spent paying attention to the data you create.

Avoid tiny allocations. new/malloc 1 byte and you are also paying for the internal data structures the memory system uses to keep track of your allocation - a lot more than 1 byte.
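If you really do need thousands of tiny objects, one common answer is to carve them out of one big block yourself. A very simplified pool, just to show the idea (a real pool would also keep a free list so individual slots can be released and reused):

#include <cstdlib>

class TinyPool
{
public:
    TinyPool(std::size_t slotSize, std::size_t slotCount)
        : block((char*)std::malloc(slotSize * slotCount)),
          next(block), end(block + slotSize * slotCount), size(slotSize) {}

    ~TinyPool() { std::free(block); }   // one free() releases everything

    // Hand out the next slot; no per-allocation bookkeeping overhead.
    void* Allocate()
    {
        if (next + size > end) return 0;   // pool exhausted
        void* p = next;
        next += size;
        return p;
    }

private:
    char* block;
    char* next;
    char* end;
    std::size_t size;
};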

If memory really is a concern then you should pay attention to what libraries you link and how they affect your memory footprint. Beyond their code size, some libraries allocate large amounts of static data.

If you want to know more about the inner workings of C++ compilers, then check out the book "Inside the C++ Object Model". It explains how all of the C++ constructs work under the hood. Highly recommended.

This topic is closed to new replies.
