One has to realize what "undefined behavior" means to compiler implementers. For every situation, the standard essentially says one of three things:
1. Mandate specific treatment.
2. Require implementation-defined behavior.
3. Leave the behavior undefined.
In the first case, implementers must properly detect the situation and follow the requirements to the letter. If the standard states that something is an error, the compiler must flag that error and reject the program as ill-formed. This category obviously represents the bulk of the document, given that the intent is to specify the language.
In the second case, implementers must still properly detect the situation, but may treat it however they want. Take RTTI type names, for example: typeid(something).name() -- the standard mandates that this must return a string uniquely describing the type of something, but the actual string is implementation-defined. GCC and Visual C++ each return different strings. This category has significant implications for code portability. You shouldn't hard-code such type names, just as you shouldn't hard-code type sizes or rely on a specific number representation ... unless you are willing to restrict your code to a specific version of a specific compiler.
Which leaves us with undefined behavior. Either explicit, when the standard states that (I paraphrase) "dereferencing an invalid pointer has undefined behavior", or implicit when the standard just doesn't mention some issue.
What it means to the implementer is "don't bother about that case". They do not have to write code to handle it, nor even to detect it. In fact, in quite a few cases it might be impossible to even detect that something bad happened, or detecting it would completely change the nature of the language (giving rise to, e.g., C++/CLI, where invalid pointers do not happen), be too slow, etc.
Or, to rephrase that, "you can assume this will never happen, users are not supposed to do this. If they do, it's their problem, not yours".
If you were to write a specification for most beginners' C++ programs, in quite a few places, you would end up specifying "FOO has undefined behavior". Why? Because they didn't do error checking and whatever happens when you do "FOO" depends on whatever default code paths get followed in such a case.
Think, for example, about what happens when you write code intended to read in a number, but somebody writes text instead. If the program specification had said "Entering text here is an error", the program would be required to do error checking. If the program specification says "Entering text here has undefined behavior", then anything goes.
Now, that probably does not mean that the program is going to go through your hard drive and erase all your files -- unless that undefined behavior happens within a file-manipulation routine, or unless the implementer was deliberately malicious -- but what happens truly depends on how the program was written. Were the program to erase your files, it would still be correct according to the specs. You probably don't want nuclear missile control systems that rely on undefined behavior. Starting WW3 would be a possibility (and have fun telling people that it is perfectly acceptable behavior according to the specs).
Consider what happens (undefined!) when you write past the end of an array. The OS might stop you (unless you *are* writing the OS, in which case you're on your own, pal), you might overwrite part of your application's code, overwrite other variables, or corrupt your stack. These are all things that are external to the C++ specification itself: you might not have an OS, your architecture might be weird... Heck, in fact, precisely what happens depends on how the CPU will map the values in memory to opcodes. None of this is under the control of the compiler writers. Nor should it be.
Take another commonly misinterpreted C++ issue that has undefined behavior: polymorphically destroying an object that has a non-virtual destructor: Base* ptr = new Derived; delete ptr;. You often see people claiming that it will only call the Base destructor. That may effectively be what happens, because of how classes are implemented in your compiler, but it might change from one version of the compiler to another. It might even change if you modify the compilation options.
Now, compiler writers are (mostly) rational beings, and with a bit of knowledge about data structures and C++ specifics, you can probably infer how they are doing things internally. This can lead to clever user hacks, which are unportable abominations from a theoretical point of view, but may be reasonable in practice -- because compiler writers are not completely insane, nor malicious, and though the behavior is undefined, common problems generally end up being solved in similar ways (at least until someone does something clever).
Deliberately relying on undefined behavior in your program is bordering on insanity. You are, after all, reverse-engineering the compiler. But relying on undefined behavior because you can't be arsed to do things correctly is just plain irresponsible.