Catch a segfault

Hello. When programming under Linux with C++ I have trouble dealing with segfaults. Normal try/catch blocks don't seem to work. Apparently Linux doesn't use SEH, and the best advice I can find is to use "resource acquisition is initialisation", but this doesn't work either. The test program
#include <iostream>

class A {
        public:
                ~A(){
                        std::cerr << "Closing.\n";
                }
};

void a();

void b(){ a(); }

void a(){ b(); }        // a() and b() call each other forever, so the stack eventually overflows

int main(){
        A tmp;

        a();            // segfaults once the stack is exhausted; ~A() is never run
}

never calls the destructor for "tmp". So what's the trick?

There isn't :)

A segfault happens when your program makes a critical error that it can't recover from. Exceptions are explicitly thrown, so you can catch them; a segfault just crashes your program.

In your case it's probably a call stack overflow. The only way to prevent it is not to write code like that ;) :)

It is one of the drawbacks of C/C++; you will have to deal with it. One way to debug it is to output markers (printf("MARKER 1")), locate the piece of code that causes the segfault, then isolate it and see what goes wrong.
Usually it is some sort of out-of-bounds or overflow error.
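
For example, something like this (a rough sketch; suspect_function here is just a stand-in for whatever code you suspect):

#include <cstdio>

// Stand-in for the code under suspicion: dereferences a null pointer.
void suspect_function(){
        int* p = 0;
        *p = 42;        // segfaults here
}

int main(){
        std::fprintf(stderr, "MARKER 1: before suspect_function\n");   // stderr is unbuffered, so this survives the crash
        suspect_function();
        std::fprintf(stderr, "MARKER 2: after suspect_function\n");    // never reached
}

Writing the markers to stderr (or flushing stdout) matters, because buffered output can be lost when the program crashes.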

Greetings...

The segfault above is from exhausting the stack. With exceptions the stack unwinds. This is only critical if the topmost exception handler itself exhausts the stack.

<tangent>Incidentally, anyone know why the above code doesn't start thrashing the swap file?</tangent>

Quote:
Original post by walkingcarcass
<tangent>Incidentally, anyone know why the above code doesn't start thrashing the swap file?</tangent>


Why would it thrash your swap file? The only memory you're using is allocated on the stack, and that (most likely) should never be in the swap file; it would be far too slow to write to disk for every push/pop of the program stack.

Since it's used a lot, the top of the stack should surely stay resident in memory and have good locality, so it shouldn't slow to a crawl; what I mean is that only the bottom of the stack would get swapped out. I'd rather take a performance hit than a segfault. How is it worse than swapping out parts of the heap?

The stack has a fixed size. If you have a huge stack, you have a problem in your code, like an infinite recursion. If you have a huge heap, on the other hand, that could be something as sinister as an infinite loop of new calls, or as benign as a movie file.

Therefore, if you run out of stack space, it's better to crash and let the developer know there's a problem, rather than silently enlarge the stack and slow to a crawl.
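
For what it's worth, on Linux you can at least see what that fixed limit is with getrlimit; a minimal, untested sketch:

#include <sys/resource.h>
#include <iostream>

int main(){
        struct rlimit rl;
        if (getrlimit(RLIMIT_STACK, &rl) == 0){
                std::cout << "soft stack limit: " << rl.rlim_cur << " bytes\n";   // what you actually get (RLIM_INFINITY prints as a huge number)
                std::cout << "hard stack limit: " << rl.rlim_max << " bytes\n";   // ceiling for setrlimit()
        }
        // setrlimit(RLIMIT_STACK, ...) can raise the soft limit up to the hard limit
        // (ulimit -s does the same from the shell).
}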

Quote:
Original post by walkingcarcass
Hello. When programming under Linux with C++ I have trouble dealing with segfaults. Normal try and catch statements don't seem to work. Apparently Linux doesn't use SEH [...]


IIRC, a segfault on Linux is delivered as a signal, not as an exception.

Quote:
Original post by walkingcarcass
The test program [...] never calls the destructor for "tmp".


Default signal handlers are implemented in C, and they mostly just exit(), with no regard for RAII whatsoever. Unfortunately, I am not a Linux expert, so you'd have to dig a little deeper into this yourself.

Quote:
IIRC, a segfault on Linux is delivered as a signal, not as an exception.

That's correct. Catching the SIGSEGV signal is the *nix equivalent of catching the SEH exception. For information on doing that, see the signal(7) and sigaction(2) man pages.
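
A minimal sketch of installing such a handler with sigaction (whether you can do anything genuinely useful inside it is another question; see the posts below):

#include <signal.h>
#include <unistd.h>

// Async-signal-safe handler: write() a message and terminate. Returning from a
// SIGSEGV handler would normally just re-execute the faulting instruction.
extern "C" void on_segv(int){
        const char msg[] = "caught SIGSEGV, exiting\n";
        write(STDERR_FILENO, msg, sizeof msg - 1);      // write() is async-signal-safe; printf() is not
        _exit(1);
}

int main(){
        struct sigaction sa = {};
        sa.sa_handler = on_segv;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGSEGV, &sa, 0);

        int* p = 0;
        *p = 42;        // deliberate segfault to trigger the handler
}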

Quote:
Original post by walkingcarcass


<tangent>Incidentally, anyone know why the above code doesn't start thrashing the swap file?</tangent>


Under Linux the stack segment is 256 MB by default; however, you only get 2 MB of stack space, which is fixed, as already mentioned.

In general the maximum segment size under Linux is 256 MB, which means you shouldn't be able to allocate more than 256 MB with the new operator or malloc.

To get a larger chunk of memory you either have to recompile your kernel with adjusted header files, or memory-map a file (man mmap).

As for segfault detection, there are tools like valgrind and efence, two powerful tools.

At university our projects have to run cleanly under both valgrind and efence, otherwise you get zero scores ^^

You can write a signal handler for SIGSEGV. And that signal handler (probably) can throw an exception. But I am not confident that it would work.

That is because a SIGSEGV can be raised anywhere, not just in a "normal" piece of code. For example, SIGSEGV might be raised when returning from a function because the stack was corrupt; throwing an exception would not help there, because the stack would still be corrupt as it was unwound, resulting in chaos.

Most C programs I know which catch SIGSEGV use longjmp to jump to a cleanup handler. I don't think it's really feasible to handle it in a correct C++ way, as the CPU might be in the middle of some operation that it couldn't safely throw an exception from.
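
The longjmp pattern looks roughly like this (an untested sketch, using sigsetjmp/siglongjmp rather than plain setjmp/longjmp so the signal mask gets restored):

#include <setjmp.h>
#include <signal.h>
#include <stdio.h>

static sigjmp_buf recovery_point;

extern "C" void on_segv(int){
        siglongjmp(recovery_point, 1);          // jump back to the sigsetjmp() in main
}

int main(){
        struct sigaction sa = {};
        sa.sa_handler = on_segv;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGSEGV, &sa, 0);

        if (sigsetjmp(recovery_point, 1) == 0){
                int* p = 0;
                *p = 42;                        // deliberate fault
        } else {
                // We land here after the fault. No C++ unwinding has happened, so
                // destructors on the abandoned stack frames never ran; clean up what
                // you safely can (flush/save files) and get out.
                fprintf(stderr, "recovered from SIGSEGV, cleaning up and exiting\n");
        }
        return 0;
}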

Mark

Quote:
Original post by Erzengeldeslichtes
The stack has a fixed size. If you have a huge stack, you have a problem in your code, like an infinite recursion. If you have a huge heap, on the other hand, that could be something as sinister as an infinite loop of new calls, or as benign as a movie file.

Therefore, if you run out of stack space, it's better to crash and let the developer know there's a problem, rather than silently enlarge the stack and slow to a crawl.


My stack could be large because of local variables, not recursion. Doesn't seem fair to penalise me for that. You might as well print an error "Warning, this program has consumed X bytes of stack" and continue as normal.

Quote:
Original post by Basiror
At university our projects have to run cleanly under both valgrind and efence, otherwise you get zero scores ^^


Oh, Valgrind can be fun. Try running NACHOS under it, and scream. But even rather simple things like pthreads produce fun reactions.

Anyway, when you have a SIGSEGV, the best you can try to do is die gracefully with as little data loss as possible, since after that point you shouldn't make assumptions about the state of your address space.

Granted, in most cases you will just have fallen over a NULL pointer.

Quote:
walkingcarcass
You might as well print an error "Warning, this program has consumed X bytes of stack" and continue as normal.

How would you continue as normal? When the stack is full, every function call will fail unless you unwind some levels, and you can't just do that without killing the intended control flow. You might as well copy some stuff from /dev/random into your text segment and try to run that.

Quote:
Original post by walkingcarcass
Quote:
Original post by Erzengeldeslichtes
The stack has a fixed size. If you have a huge stack, you have a problem in your code, like an infinite recursion. If you have a huge heap, on the other hand, that could be something as sinister as an infinite loop of new calls, or as benign as a movie file.

Therefore, if you run out of stack space, it's better to crash and let the developer know there's a problem, rather than silently enlarge the stack and slow to a crawl.


My stack could be large because of local variables, not recursion. Doesn't seem fair to penalise me for that. You might as well print an error "Warning, this program has consumed X bytes of stack" and continue as normal.



If you intentionally have several megs of local variables, you have a bigger problem than just recursion. One might call it a PEBKAC error.

Variables on the stack are supposed to be small and quickly accessed. Your big movie files or whatnot are supposed to be allocated on the heap. If you violate this, the compiler and linker do not guarantee your program will remain functional. If you follow this, you'll need millions of local variables before you start running out of space; with 2 megs of stack you could have half a million 4-byte pointers before you run out.


But if you really feel that you want to turn the stack into a slower, resizable monster, go ahead. Just because no one else has done it doesn't mean you shouldn't try.


