I'm going to go out on a limb here and argue that for, say, an int, there is no such thing as an insane value. 0 is no more natural a default than 325434 or -26. On this basis, I don't see any advantage to initialising an int to 0 unless that happens to be the value I intend it to hold for other reasons.
The point isn't that int has an inherent invariant that's being preserved by initialization; the point is that your code's interpretation
of that int has invariants. Are you using the int to represent a counter? Initialize to 0. Are you using the int to represent a multiplicative accumulator? Initialize to 1. Are you using the int to represent the answer to life, the universe, and everything? Initialize to 42.
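To make that concrete, here's a minimal sketch (C++, with illustrative variable names): each starting value is the one that means "nothing has happened yet" under that variable's interpretation, the additive identity for a counter and the multiplicative identity for a running product.

```cpp
#include <iostream>
#include <vector>

int main() {
    const std::vector<int> values{3, 5, 7};

    int count = 0;    // additive identity: "nothing counted yet"
    int product = 1;  // multiplicative identity: "nothing multiplied in yet"

    for (int v : values) {
        ++count;
        product *= v;
    }

    std::cout << count << " values, product " << product << '\n';  // prints: 3 values, product 105
}
```

Swap the two initial values and both results are silently wrong, even though neither 0 nor 1 is "insane" for an int in the abstract.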
Outside of highly contrived demonstration programs, every variable has a purpose, and that purpose comes with baggage: namely, assumptions about which values "make sense" and what value it should start with.
Even in highly contrived demonstration programs, a known, reliable initial value is preferable to a totally unpredictable, undefined one.
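Assuming a C or C++ setting (which the talk of "undefined value" suggests), that unpredictability isn't just bad luck: reading an uninitialized automatic variable is undefined behaviour. A quick sketch of the contrast:

```cpp
int readable() {
    int initialized = 0;   // well-defined: this function always returns 0
    return initialized;
}

int unreliable() {
    int uninitialized;     // automatic variable left with an indeterminate value
    return uninitialized;  // reading it is undefined behaviour; the program may do anything
}
```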