stack overflows: why not turn the stack upside down?

Started by
9 comments, last by Paradigm Shifter 10 years, 2 months ago

We all know about stack overflows. You overwrite your buffer on the stack, and you are now trashing another buffer, or even worse, the return address. Then when 'ret' (I'm using x86 lingo here) runs, the CPU jumps to a bad place. This is because on most platforms the stack grows downward. When you exceed your stack frame's boundary, you start trashing the previous stack frame.

This downward growth is mostly historical. On older computers, before virtual memory, your stack started at the top of RAM and grew downwards. The heap started somewhere else and grew up. You ran out of memory when the two pointers met. With virtual memory, you can put the stack 'anywhere' and even grow it on a page fault in some cases. And I think it's time to turn the stack around:

Suppose f() calls g() and that calls h():


classic stack:  'R' is where the return address goes:
low addr  [empty-----------------R][--f--]   high addr
low addr  [empty-----------][--g-R][--f--]   high addr
low addr  [empty-----[--h-R][--g-R][--f--]   high addr

How h overflows:

low addr  [empty-----[--h--*R*****][--f--]   high addr


we crash, or worse, run an exploit: h inserts a return address pointing who-knows-where


upside-down stack:
low addr  [--f--][-empty------------------]   high addr
low addr  [--f--][R-g--][empty-----------]   high addr
low addr  [--f--][R-g--][R-h--][empty----]   high addr

Now h overflows:
low addr  [--f--][R-g--][R-h***********--]

We've overflowed into empty space. The return address and previous stack frames are safe.

This wouldn't be too hard to do. Most stack access during a function is done with pointer arithmetic:

mov eax, [esp-4]

mov eax, [esp+4]

I know there are some hardware platforms that already have stacks that grow upwards. Redefining the ABI for x86 would break compatibility with standard libraries, but inside your own application, this might enhance security. I suppose it's even possible to use an 'upwards' stack for your application, and then when you call a 3rd party library, switch the stack pointer to a separate area where you have a standard downwards stack defined. I imagine this would involve a lot of hacking around in gcc or llvm to make it work. In an open OS like Linux, maybe you could recompile the whole system to use upwards stacks.

Just a thought. Downvote if it stinks!


The heap grows the other way though... stack at one end, heap at other, of addressable memory. When they meet in the middle it's Bummed In The Gob Time (R)

"Most people think, great God will come from the sky, take away everything, and make everybody feel high" - Bob Marley

Redefining the ABI for x86.


The ABI is partly defined by the hardware itself. call, push, pop, etc. explicitly modify ESP to go in a certain direction. Not only would a compiler have to use different opcodes, they'd very likely be less efficient.

Sean Middleditch – Game Systems Engineer – Join my team!

(Singing) "Whichever way the stack grows doesn't really matter to me!"


The heap grows the other way though... stack at one end, heap at other, of addressable memory. When they meet in the middle it's Bummed In The Gob Time (R)


This doesn't apply to many modern systems. Consider that each thread in a multithreaded system needs its own stack.
There's also the whole split-stack thing that some runtimes can do (if the stack bounds are exceeded, new pages are allocated somewhere else and life carries on). This is mostly done for microthreads/coroutines that (usually) only need a single page of stack, because of the performance overhead incurred by all the machinery necessary to actually use split stacks.

Sean Middleditch – Game Systems Engineer – Join my team!

Short version: it doesn't really matter which direction the stack grows in - you'll eventually get stack overflows or bugs/crashes either way.

---

We all know about stack overflows. You overwrite your buffer on the stack, and you are now trashing another buffer, or even worse, the return address. Then when 'ret' (I'm using x86 lingo here) runs, the CPU jumps to a bad place. This is because on most platforms the stack grows downward. When you exceed your stack frame's boundary, you start trashing the previous stack frame.

Stack overflows and buffer overruns are not the same thing.

Buffer overruns can occur with any fixed-size buffer, anywhere in RAM - if the buffer is on the stack, an overrun can trash local variables and return addresses, but even if the buffer is on the heap, an overrun can still trash potentially anything, including other heap data, the stack, or the program itself - and on systems without virtual memory, other processes or even the OS - or even write to memory-mapped hardware.

The stereotypical stack buffer overrun, where the buffer overflows into the return address, is actually not caused just by the stack growing downward, but also by the fact that the buffer write direction is upward. It can happen whenever a buffer is on the stack and the stack growth and buffer write directions are opposed.

A stack overflow, on the other hand, refers specifically to the situation where more data is placed onto the stack than the stack is intended to hold.

Now, here's the dirty little secret about stacks: most CPUs don't actually do anything themselves when the stack pointer overflows, and will happily allow the stack pointer to wrap around to the other side of memory. However, this means that stack overflows can cause difficult-to-track-down bugs, and there's no easy way to detect infinite recursion. The limited size of the stack is in almost all cases enforced by the OS, not the hardware, specifically to prevent overflow-related bugs and (potentially) infinite recursion.

This downward growth is mostly historical. On older computers, before virtual memory, your stack started at the top of RAM, and grew downwards. The heap started somewhere else and grew up. You ran out of memory when the two pointers met.

Not quite.

The stack starts wherever the program loader/boot code puts it, within the limitations imposed by the CPU's architecture and the system's memory map.

(For example, the NES' CPU has an 8-bit stack pointer, but a 16-bit address space; the upper 8 bits of a stack address are always 0x01, so the stack pointer can only point to addresses in the 0x0100 to 0x01FF range. The SNES' CPU has a 16-bit stack pointer, but a 24-bit address space; the stack pointer can only point to addresses in the 0x000000 to 0x00FFFF range. The Sega Genesis only has RAM in the address range of 0xFF0000 to 0xFFFFFF, so the stack has to lie somewhere in there.)

On many CISC CPUs (including the x86 family), the stack grows downward because that's how the CPU's stack instructions work (the PUSH instruction implicitly decrements the stack pointer, and POP implicitly increments it) - faking your own stack makes pushing and popping far less efficient. An extreme example is the 6502 CPU, where changing the stack direction makes every operation involving pushing or popping take a bit over 5 times longer. (Push goes from 2 cycles to 11, return goes from 6 cycles to 31, and so on.)

However, the "RISC-like" 680x0 series CPUs could grow the stack in either direction. And most RISC CPUs don't even have hardware stacks - stacks are implemented in software, and can grow in either direction.

With virtual memory, you can put the stack 'anywhere' and even grow it on a page fault in some cases.

In fact, this is exactly what modern OSes already do.

---

The only real difference with a stack growing upwards is that it makes it trivial to implement an upward-growing buffer on the stack, which is something you really shouldn't be doing anyway. (I'd even go so far as to say putting buffers on the stack at all is something to avoid.)

(I'd even go so far as to say putting buffers on the stack at all is something to avoid.)

I like this.

I also like the idea of separate data and program control stacks. I've worked with FORTH, and there's two parallel stacks. That wouldn't be hard to do with x86 either. Call and ret can use the hardware stack, and its downward growth wouldn't be a concern. Then argument passing and temporary data would be on a completely different stack, one that has nothing to do with the 'esp' stack pointer. Actually, the data stack could just be managed using the 'ebp' pointer; you don't really need a base pointer anyway. You sort of do to make debugging easier, but you can get by without it.

The ABI is partly defined by the hardware itself. call, push, pop, etc. explicitly modify ESP to go in a certain direction. Not only would a compiler have to use different opcodes, they'd very likely be less efficient.

On x86 I don't think it would be too bad, at least for push/pop. This page puts them within about 1% of each other. http://qinsb.blogspot.com/2011/03/x86-register-save-performance.html

Doing your own call and return might be a little bit of a hit, but I hope most of the program's time is spent doing something other than function call overhead.

But I've found something that makes the whole thing moot. Having an upward-growing stack is not a panacea for stack buffer overruns. from

the tl;dr is that you can allocate a buffer too small on the stack, and then call a function that's supposed to write into that buffer. As the callee writes into the previous frame's buffer, it will blast past the end and trash its own frame, including the return address.

(I'd even go so far as to say putting buffers on the stack at all is something to avoid.)

I also like the idea of separate data and program control stacks. I've worked with FORTH, and there's two parallel stacks. That wouldn't be hard to do with x86 either. Call and ret can use the hardware stack, and its downward growth wouldn't be a concern. Then argument passing and temporary data would be on a completely different stack, one that has nothing to do with the 'esp' stack pointer. Actually, the data stack could just be managed using the 'ebp' pointer; you don't really need a base pointer anyway. You sort of do to make debugging easier, but you can get by without it.

Having your own stack, separate from "the stack", is quite common in the game engines that I've used. It's easier to catch bugs, because you can verify stack contents yourself (e.g. memsetting and memcmping regions with marker patterns at key moments), and if there are bugs you don't have the traditional problem of "I'm trying to decipher this crash dump, but I can't get a call-stack because it's corrupted". It's also ridiculously fast to allocate from (when you strip all your debugging code from the shipping build) and fast to deallocate (opting in to stack-unwinding when required).

I wrote a bit about my implementation here: http://eighthengine.com/2013/08/30/a-memory-management-adventure-2/

Stack overflows are actually underflows, since the stack pointer goes down, not up. It was specifically designed that way because it's easier to let a program crash when it reaches the underflow than to add a check for an upper limit of the stack.

If you were to use an increasing stack pointer, you'd still need a way to detect overflows, otherwise you're just inviting hackers to use the stack for buffer-overflow type attacks.

So the "downward growth" that causes underflows is not just a historical side-effect - it is a safety feature.

Also, I think older programs (16-bit DOS programs) used segments for memory access, which were the same thing as virtual memory - they certainly did not access memory using physical addresses like "the top of RAM". The stack and data segments could be placed anywhere in memory below 640K (and later 1MB with virtual x86, but never higher than that). The heap was only introduced later, and heap memory addresses were served by a memory manager, which usually allocated its memory above the 1MB mark (and below, in the "upper-memory blocks"), where the stack could not reach, so the stack and heap would never intersect. The stack was limited to the size of a segment - 64K. The heap was limited to the size of extended memory + upper memory.

So this is not true: "You ran out of memory when the two pointers met". Until virtual memory came along, there was never a moment when the stack and heap shared the same address space.

What is the actual problem you are trying to solve here? Running out of stack space seems the only one to me that would not be masking a program error of some kind. What makes you think that the default stack size on whatever hardware you are using is a problem?

