project complexity is not nearly so easily estimated as some people seem to think it is;
time budget is, similarly, not nearly so easily estimated.
there is a common tendency to think that the effort investment in a project scales linearly (or exponentially) with the total size of the codebase.
my experiences here seem to imply that this is not the case, but rather that interconnectedness and non-local interactions are a much bigger factor.
like, the more code in more places one has to interact with (even "trivially") in the course of implementing a feature, the more complex the feature is to implement overall.
similarly, the local complexity of a feature is not a good predictor of the total effort involved in implementing the feature.
like, a few examples, codecs and gameplay features and UIs:
codecs have a fairly high up-front complexity, and so seemingly are an area a person dare not tread if they hope to get anything else done.
however, there are a few factors which may be overlooked:
the behavior of a codec is typically extremely localized (code outside the codec will usually have limited interaction with anything contained inside, so in the ideal case, the codec is mostly treated as a "black box");
internally, a codec may be very nicely divided into layers and stages, which means that the various parts can mostly be evaluated (and their behavior verified) in isolation;
additionally, much of the logic can be copy/pasted from one place to another.
like, seriously, most of the core structure for most of my other codecs has been built from combinations of parts derived from several other formats: JPEG, Deflate, and FLAC.
I originally just implemented them (for my own reasons, *1), and nearly everything since then has been generations of copy-paste and incremental mutation.
so, for example, BTAC-2A consisted mostly of copy-pasted logic.
*1: partly due to boredom and tinkering, partly the specs being available, and partly my inability at the time to make much that was "clearly better" (say, "what if I have something like Deflate, just with a 4MB sliding window?...", which gained surprisingly little, and trying out various designs for possible JPEG-like and PNG-like formats, without getting much clearly better than JPEG and PNG, *2).
*2: the partial exception was some random fiddling years later, where I was able to improve over them, by combining parts from both (resulting in an ill-defined format I called "NBCES"), which in turn contributed to several other (incomplete) graphics codecs, and also BTAC-2A (yes, even though this one is audio, bits don't really care what they "are").
(basically its VLC scheme and table-structures more-or-less come from NBCES, followed by some of my "BTIC" formats).
the inverse has generally been my experience with adding gameplay, engine, and UI features:
generally, these tend to cross-cut over a lot of different areas;
they tend not to be nearly so well-defined or easily layered;
as a result, these tend to be (per-feature) considerably more work in my experience.
I have yet to find a way to really make these areas "great", but generally, it seems that keeping things "localized" helps somewhat here;
however, localization favors small code, since the bigger the code gets, the more reason there is to try to break it down into smaller parts, which in turn goes against localization (in turn once again increasing the complexity of working on it).
another strategy seems to be trying to centralize control, so while the total code isn't necessarily smaller, the logic which directly controls behavior is kept smaller.
like, the ideal seems to be "everything relevant to X is contained within an instance of X", which "just works", without having to worry about the layers of abstraction behind X:
well, in practice, X has to be synchronized via delta messages over a socket, has to make use of a 3D model, has tie-ins with the sound-effect mixing, may get the various asset data involved, ...
it is like asking why some random guy in a game can't just suddenly have his outfit change to reflect his team colors and display his respective team emblem: despite the seemingly trivial nature of the feature, it may involve touching many parts of the game to make it work (assets, rendering, network code, ...).
OTOH, compiler and VM stuff tends to be a compromise: the logic is not nearly so nicely layered as in a codec, but it is much more easily divided into layers than gameplay logic is.
say, for example, the VM is structured something like:
parser -> (AST) -> front-end -> (bytecode)
(bytecode) -> back-end -> JIT -> (ASM)
(ASM) -> assembler -> run-time linker -> (machine-code)
it may well end up being a big ugly piece of machinery, but at the same time, has the "saving grace" that most of the layers can mostly ignore most of the other layers (there is generally less incidence of "cross-cutting").
so, in maybe a controversial way, I suspect tools and infrastructure are actually easier overall, even with as much time-wasting and "reinventing the wheel" as they may seem to involve...
like, maybe the hard thing isn't making all this stuff, but rather making an interesting/playable game out of all this stuff?...
well, granted, my renderer and voxel system have more or less been giving me issues pretty much the whole time they have existed, mostly because the renderer is seemingly nearly always a bottleneck, and the voxel terrain system likes eating lots of RAM, ...