Stack and Heap collide?

Programs are typically multi-threaded now. There isn't "the stack", but multiple stacks, and the stacks themselves can be allocated on the heap. If a stack grows outside of its allocation, it can stomp over whatever is nearby (typically other stacks, since you are likely to create several threads at the same time). Some systems will not detect stack overflows, but you are likely to crash somehow once this happens. Hope that whichever thread crashes first leaves a useful call stack behind.


Someone please confirm whether this is true; I'm just using some logic here to draw conclusions out of nowhere...


32-bit applications will probably hit a collision sooner than 64-bit ones, simply because the available address space is smaller than what would fit in RAM anyway.


On 64-bit, the OS could start the heap somewhere near the bottom of the address space and place the stack a few exabytes away from it. Because the OS pages memory on demand, the space in between costs nothing (?).


If this is how it works, I would expect 64-bit programs not to be threatened by this problem at all.


In Win32 processes, the stacks and heaps are distributed somewhat erratically throughout the address space.  x86 apps on Windows XP and higher typically have a layout like this:


0x001x0000: First thread's stack; typically has a meg of initial size and grows toward lower addresses if it needs more space (the 'top' page of the stack is marked as a guard page, which causes additional virtual memory to be committed when it's accessed).

0x00400000: EXE's typical load address.


Everything else is distributed fairly randomly throughout process space;  DLLs, additional thread stacks, memory mapped files, thread-local storage blocks, heaps, etc.


When Windows grows a heap, it can allocate the new pages anywhere so that fragmentation doesn't cause out-of-memory errors as easily (heaps are not one large contiguous set of pages).  When Win32 stacks grow, they cause stack overflow exceptions if they would collide with any existing pages (whether it's a heap, the EXE, a DLL, or another stack - running into ANYTHING causes the stack overflow exception).  Stacks are typically preallocated with as much space as they need so that they can be allocated as one big block and (ideally) not need to grow any further.  The Win32 API lets you control the initial stack size for your EXE and when you make any new threads for this reason.



The important part to understand is that the language runtime and OS determine how the stacks and heaps behave, so this can be COMPLETELY different on other platforms like Linux or iOS.

Edited by Nypyren


a stack overflow is, in essence, little different from an array overrun.


a thread stack may be located anywhere in the address space, and likewise for parts of the heap (typically, the memory allocator grabs chunks of address space from the OS via "VirtualAlloc" or "mmap" as needed, divides them up and commits pages as needed, and grabs more chunks when that space runs out, ...).


so, it is not as if they are placed at opposite ends and grow toward each other or something.


typically, the space allocated for thread stacks is fixed-size, and if the bottom end is hit, an exception is raised (typically resulting in a crash).


granted, some targets, namely Linux + GCC, have started experimentally using segmented (split) stacks, where the initial stack is much smaller and more memory is allocated elsewhere as the need arises (the stack frame essentially jumps ship to a new stack as needed).

(this may require more looking into; I am not sure of its current status...)



(note, from memory, may contain errors).


in general, the memory map (for something like a Win32 virtual address space, *3) usually ends up looking something like:

0x00000000-0x00400000: usually the main thread stack (and some free space, *1).

0x00400000+: main program EXE, followed directly by static application DLLs, ...

0x01000000-0x70000000 (approximate): used mostly for heap, thread-stacks, and dynamically loaded app DLLs;

0x70000000-0x80000000 (approximate): used mostly for OS DLLs (*4);

0x80000000-0xC0000000: free space for large-address-aware apps (used for more heap/...), otherwise inaccessible (*2).

0xC0000000-0xFFFFFFFF: inaccessible by userland processes, used by the OS itself.



*1: in the Win9x line, the low 4MB was apparently shared between applications, and basically contained the real-mode MS-DOS address space and various Win16-related data.

this shared space did not exist in the NT line (WinNT/2K, and also WinXP and newer), and so is available for application use (hence the OS putting the main thread's stack there).


*2: in Win9x, this area was owned by the OS but accessible and shared between processes. this was generally used for inter-process communication, as well as for data shared between processes and the OS.


in the NT line, it was basically reserved for the OS and unused (resources were shared with the OS via alternative means, namely the OS could directly access memory located anywhere within the process, and so didn't need some dedicated shared/global area). this was later made available for applications (WinXP and Server 2003), via a special linker flag.


address regions are generally allocated at the lowest address where sufficient free pages can be found, unless a flag is given, in which case allocation starts at a high address and works downward.


*3: each application process will have its own local address space layout, but some general patterns apply (as imposed by the OS)

*4: the OS prefers to map DLLs to the same location in every process when possible so that more pages can be shared between processes.



the 32-bit Linux memory map is simpler, more like:

0x00000000-0xC0000000: application binaries/heap/libraries/stack, spread around fairly haphazardly;

0xC0000000-0xFFFFFFFF: inaccessible by userland processes, used by the OS itself.


*: some builds of RedHat (and possibly others) leave a full 4GB for the userland process, at the cost of more expensive system calls (making a syscall requires switching to a different address space).



in 64-bit Windows, it is more like:

0x00000000'00000000-...: application code/data/...

...-0x0000007F'FFFFFFFF: OS DLLs/...


typically, memory is allocated from low addresses and works upwards.



the OS's address space is then generally located at negative addresses... or, seen alternatively, in the space just below the top of the 64-bit range.




Linux is basically similar, except that once again the spread is fairly haphazard.


memory address ranges are also normally allocated in an apparently more-or-less pseudo-random order (address-space layout randomization).



or such...

Edited by cr88192


You need updated sources.  Your sources are over 30 years out of date.  



Computer memory has advanced significantly since the old memory models. Any system that relies on virtual memory does not behave under that old model.


WAY back in the day, an application was given full rein of the memory space.  You could do whatever you wanted with it.  The application was responsible for all allocation within that space.  The common instruction was to fill up the space in the manner described in the OP.


Back in the 1970s, *nix systems moved away from that model.  The home consumer moved away from it in the '90s, when Windows started managing memory in a more modern way.



On all modern systems, the application's view of memory is completely unrelated to actual physical memory. The stack is a block of memory that logically grows 'up' as data is pushed onto it.  The heap is a collection of memory blocks scattered more or less randomly across your address space. Neither one has any relationship to the physical memory layout.




Unless you are developing on embedded systems or specialized hardware, the OS handles memory in a dramatically different way today.


To be fair, virtual memory requires processor support as well as operating system support (EDIT: to be efficient, anyway; you can do it with a virtual machine, of course). It wasn't until protected mode (the 286, although I think it was pretty clunky on a 286?) and OSes that supported it (NT 3.1, OS/2) arrived that virtual memory made flat memory models obsolete/legacy for home PCs.

Edited by Paradigm Shifter


FWIW: I was mostly writing about the virtual-address memory map within Win32, not about the physical/hardware memory map.


some of the layout is likely due to legacy reasons (avoiding breaking apps which assume the Win9x memory organization), ...

other parts are mostly a matter of OS convenience.

Edited by cr88192

