> Lines of code deleted is clearly a useful metric
I don't think even that is good beyond question -- you can add lines of code and still reduce complexity, and likewise you can take lines away and still increase complexity; at least, if we're talking about complexity as the mental load required to understand the code. If reducing the number of lines in a program is good, I suggest the improved quality has less to do with the number of lines left and everything to do with the fact that while we trust any junior developer to *add* significant amounts of code, usually only more-experienced devs are entrusted to *remove* significant amounts of it -- or, at the very least, that removing lines of source code necessarily requires a more complete understanding of both the problem and the existing solution. In other words, the second whack at the problem is better because of better understanding; that it might result in fewer lines is a symptom, not the cause (the same as if it resulted in more lines).
We reason about code at several levels: at the systems scope, where we need to reason about how our processes interact with other processes; at the global scope, where we have to reason about how each module in our code interacts with the rest of the whole; at the module level, where we have to reason about the sub-components of our module working together, or about how our module interacts with specific other modules in isolation from the rest of the system; and at the class, function, and even smaller levels. The goal of good code is that, at each level, exactly the information you need to reason about the relevant things is clear -- not more, not less -- that's the ideal. I don't want more code, and I don't want less code -- I want exactly the right amount of code, functions, classes, modules, and binaries to facilitate that understanding.
That said, sudden swings toward what seems like too many or too few lines of code, especially at inappropriate times in a program's life-cycle, can certainly indicate code- and design-quality issues. A sudden ballooning of source code might indicate, for instance, an over-reliance on inheritance vs. composition -- but that's really indicated by the first-order derivative of LOC, not by LOC as a purely quantitative measure, and it's usually better tracked and reasoned about at the module level, not at the whole-program level. I submit that a graph of LOC per module over time is far more informative than knowing the total line count at any given moment. Furthermore, you need to know where to look for best results -- scoped too broadly, it's really difficult to separate things that should concern you from totally normal background noise; scoped too narrowly, you'll miss issues altogether.
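To make that concrete, here's a minimal sketch of what "watch the first difference of LOC per module" could look like. The module names, snapshot data, and the 500-line threshold are all made-up illustrations, not a real tool:

```python
# Hypothetical sketch: given per-module LOC snapshots over time (e.g. one
# per release or per week), compute first differences and flag the sudden
# swings worth investigating -- rather than staring at the total line count.

def loc_deltas(history):
    """history: list of {module: loc} snapshots, oldest first.
    Returns one {module: delta} dict per consecutive pair of snapshots."""
    deltas = []
    for prev, curr in zip(history, history[1:]):
        modules = set(prev) | set(curr)  # handles added/removed modules
        deltas.append({m: curr.get(m, 0) - prev.get(m, 0) for m in modules})
    return deltas

def flag_swings(deltas, threshold=500):
    """Return (snapshot_index, module, delta) for any module whose LOC
    jumped or dropped by more than `threshold` between two snapshots."""
    return [(i, m, d)
            for i, snap in enumerate(deltas)
            for m, d in snap.items()
            if abs(d) > threshold]

# Invented example data: "parser" balloons between the 2nd and 3rd snapshot.
history = [
    {"parser": 1200, "render": 800},
    {"parser": 1250, "render": 820},
    {"parser": 2900, "render": 810},
]
print(flag_swings(loc_deltas(history)))  # → [(1, 'parser', 1650)]
```

The point of the sketch is that the interesting signal lives in the per-module deltas, not in `sum(history[-1].values())` -- the total barely distinguishes the noisy drift in `render` from the ballooning of `parser`.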
[EDIT] I probably should soften my stance a little -- what I think we all mean to say, in one way or another, is that all quantitative measures of our source code -- LOC, counts of macros, loops, functions, classes, modules, dependencies, etc. -- can provide insight into what's going on in your code base if you look at the data in the right way, and with knowledge of the design transformations happening in step with those measurements. These and more can be useful *metrics* that inform hypotheses about potential code/design smells, hypotheses that can then be validated or debunked through investigation or testing. What none of these things is, is a quota we should derive badges of honor from.