I suppose you're right. For testing, you need determinism. I've long considered the implication that a different library version might break a game, so you must ship the version the game is known to work with. It's also been evident that most games distributed on read-only media keep all the code they use in one place, so two games that use the same engine will each carry their own copy of it.
From a professional QA perspective, that kind of environment isn't feasible. When shipping a game, QA needs to be able to reliably reproduce the same binary distribution on every machine (e.g. if a bug comes in from a user, you need to be able to replicate the user's installation on one of your PCs).
In a world where one of your dependencies is installed into some shared system directory, and is outside of your control, you basically just can't do QA on the game.
So in order to stay in control of the quality of the product you're selling, you need to bundle all of your dependencies into the game's installation, whether that means putting the .dll/.so files in the same place as the executable, or statically linking those libraries into the executable itself. In this case, all components are built with the same compiler, so you (the game author) can update the DLLs with new versions.
I understand the system-wide library is a popular methodology, especially on Linux systems... but a big studio would never be able to release a game using that model.
I presume that this is the better way to do it. Dynamic linking is also a pain, and it's yet another way the program can fail: you have to worry about whether or not every symbol was resolved.
Some of the main reasons I'm considering C++: no longer having to manually call constructors and destructors, templates, operator overloading, default parameters, member functions, and being able to worry less about validating parameters.
In that case... If you're writing "good" C++ code, then you should be using std::vector, and shared_ptr and unique_ptr to do your memory management... but in the embedded case, then the oft-criticized "C with classes" style of C++ can actually be useful.
The old PS2-era engines that I worked with were all written in C++, but in a very C-like style (no use of std::*, no constructors/destructors, etc.).
By validating parameters, I mean constantly checking whether the pointer to the object I'm operating on is NULL, or whether the objects passed to me by address are NULL. From what I've seen, the this pointer can be assumed valid, and passing by reference in C++ gives a lot more reassurance that the other object isn't going to be a NULL pointer.
I shy away from malloc()/free() as well; many of my objects' constructors accept a pointer to an optional block of preallocated memory, so most objects I use are allocated on the stack and reclaimed when they go out of scope. I try my hardest to make stack allocation possible, because if I don't heap-allocate anything, I don't need to manually call a destructor. Still, having real destructors would be a great relief: I could put cleanup code there, instead of watching every place a function might exit and adding a destruct call for every constructed object.
Personally, I still think that RAII, proper use of the rule of three, and constructors/destructors are key to good C++ code in any style (the typical "C with classes" style usually shuns these concepts, but I'd still recommend them). For embedded systems, though, I do shun pretty much any part of std::* that deals with memory.
However, the point I was making before is, I also shun C's malloc/free in these situations.
How do you pull off having a custom new? I'm concerned that others' modules that I might use would rely on new having default behavior; I realize that a module should free its own memory, all the same.
In my engine I use a custom overload of operator new that goes through a "scope" allocator, which internally uses a stack allocator. The vast majority of the engine is scope/stack allocated this way (I can count the malloc calls on one hand). This type of allocator doesn't support random free/realloc semantics, but in exchange it makes allocations almost free, almost eliminates fragmentation, removes leaks the way RAII smart pointers do (but without the burden of ref-counting or GC), still respects C++ destructors, and makes your memory allocation patterns extremely predictable, which is great for RAM-constrained situations.
For me, the sheer number of manual constructor and destructor calls is enough to make me consider porting my years-in-the-making engine and libraries to C++. All the other features are immensely useful too, and would help me write programs faster, but object construction, destruction, and validation are tedious, and wear away at my morale.
I have noticed that there are only a couple of pure-C game engines. Perhaps C isn't so popular here because most game libraries are implemented in C++ without any C interfaces?