well, just observations; I don't know all the specific reasons.

It doesn't have much to do with the size of the codebase - it's a matter of the experience and calibre of the developers involved.
Did they establish a coding style? Did they establish a testing methodology? Did they provide adequate documentation? Have they performed profile-guided optimisation? Did they produce a comprehensive threat model?
These may not be the fun parts of 'programming', but when a company advertises a position for a 'C++ developer', they are looking for all of the above.
I don't know as much about companies here, since I have more experience looking at FOSS stuff, but I have often seen patterns like:
"lib<somefileformat>" or "lib<sometrivialtask>" libraries often having fairly hackish/poor code (partial exceptions being ones like "libjpeg", which is at least "much better than average").
then quality seems to generally go up with code size (e.g., better maintenance of conventions and abstractions, ...).
but then sometimes there are large projects (notably some within the GNOME project) that seem to have pretty poor coding practices (code often written without a consistent style or without maintaining abstractions, ...).
this is in strong contrast with, say, LLVM, which overall looks pretty clean (albeit sometimes slightly odd IMHO, almost Java-like at times). V8 also looked fairly good here. though, admittedly, I didn't look over all of the code in these projects.
despite some funkiness, Mozilla also looked pretty good overall (at least in the parts I looked at).
related is Quake 1 vs 2 vs 3 vs Doom 3, where in each case the code got bigger and also generally seemed to get cleaner (though it is harder to directly compare Doom 3 to its predecessors, since it is in a different language and apparently largely a rewrite of the engine).
of course, others may disagree or see it all as unrelated (or I may be wrong, since I have only seen small parts of the code in many of these projects, ...).