Sometimes you truly only want one instance of something running around (assert tracker, GPU wrapper, etc.)
This is misleading -- you may "want" only one, but it's very, very seldom that one cannot imagine a scenario where you actually need more than one, and even more seldom that having more than one would actually be incorrect. Usually when a programmer says "I'll use a singleton because I only want one of these," what he's really saying is "I'll use a singleton because I don't want to bother making my code robust enough to handle more than one of these."
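To put that in code (the AssetCache name and its members are invented purely for illustration): the singleton version hard-wires every caller to the one global instance, while the "robust" version just takes the instance as a parameter and works whether there's one or many.

```cpp
#include <string>
#include <unordered_map>

// Hypothetical AssetCache, invented purely to illustrate the point.
class AssetCache {
public:
    // Singleton access: convenient, but every caller is now wired to the
    // one global cache for the lifetime of the program.
    static AssetCache& instance() {
        static AssetCache cache;   // Meyers singleton
        return cache;
    }
    int load(const std::string& path) { return counts_[path]++; }
private:
    std::unordered_map<std::string, int> counts_;
};

// The "robust" alternative: take the cache as a parameter, and the caller
// decides whether there is one cache or several (per thread, per level, per test).
int load_with(AssetCache& cache, const std::string& path) {
    return cache.load(path);
}

int main() {
    load_with(AssetCache::instance(), "hero.mesh");  // still fine with one
    AssetCache test_cache;                           // and fine with more than one
    load_with(test_cache, "hero.mesh");
}
```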
If you can justify choosing convenience over correctness you're free to do so, but you've made your bed once you've chosen a Singleton, and you alone will have to lie in it.
...yeah, 'correctness' in software design; something that's largely considered undefinable. We are not scientists doing good science or bad science; we are craftsmen, and what one paradigm holds over another is simply the kinds of coding decisions that are discouraged vs. encouraged. The idea is to use designs that encourage use cases that pay productivity dividends down the road and discourage use cases that become time sinks. In this sense Singletons are ALL BAD, but making your life easier down the road shouldn't be a directive that is pathologically pursued such that the productivity of the moment grinds to a crawl. I guess the point I'm trying to make is that a tipping point exists between writing infinitely sustainable code and getting the job done. Here are some examples:
Singletons make sense for something like a GPU; it is a piece of hardware, there is only one of them, and it has internal state. Of course you can write code such that the GPU appears virtualized and your application can instantiate any number of GPU objects and use them at will. Indeed, I would consider this good practice, as it forces the code that relies on the GPU to be written better and should extend nicely to multi-GPU platforms. The flip side is that implementing all this properly costs productivity right now, and that cost needs to be weighed against the probability of ever seeing the benefit.
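As a rough sketch of what I mean (GpuDevice and its members are invented for illustration, not any real API), the singleton flavour and the virtualized flavour look something like this:

```cpp
#include <cstdio>
#include <vector>

// Hypothetical wrapper around one physical device; the name and members are
// invented for illustration only.
class GpuDevice {
public:
    explicit GpuDevice(int index) : index_(index) {}
    void submit(const char* job) { std::printf("GPU %d: %s\n", index_, job); }
private:
    int index_;
};

// Singleton flavour: bakes "there is exactly one device" into every caller.
GpuDevice& the_gpu() {
    static GpuDevice gpu(0);
    return gpu;
}

int main() {
    the_gpu().submit("draw frame");

    // Virtualized flavour: callers are handed a device, so a multi-GPU
    // platform is just more elements in this vector.
    std::vector<GpuDevice> gpus{GpuDevice(0), GpuDevice(1)};
    for (auto& gpu : gpus) gpu.submit("draw half-frame");
}
```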
Another example is collecting telemetry or profiling data. It's nice to have something in place that tells everyone on the team: "Hey, telemetry data and performance measurements should be coalesced through these systems for the purposes of generating comprehensive reports." A Singleton does this, while class declarations named Profiler and Telemetry do not. Again, you can put the burden of managing and using instances of Profiler and Telemetry onto the various subsystems of your application, and once again this may lead to better code, but if the project never lives long enough for that 'better code' to pay productivity gains, then what was the point?
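Something like this minimal Profiler sketch is what I have in mind (the names are invented for the example, not taken from any real library); the static accessor is what advertises "all measurements funnel through here":

```cpp
#include <cstdio>
#include <string>

// Hypothetical Profiler; names are invented for the example, not any real library.
class Profiler {
public:
    // The static accessor is what tells the team
    // "all timings get coalesced through this one funnel".
    static Profiler& get() {
        static Profiler instance;
        return instance;
    }
    void report(const std::string& zone, double ms) {
        std::printf("[profile] %s: %.2f ms\n", zone.c_str(), ms);
    }
private:
    Profiler() = default;
};

int main() {
    // Any subsystem can do this without being handed a Profiler instance --
    // that is both the convenience and the coupling being discussed.
    Profiler::get().report("physics", 3.7);
    Profiler::get().report("render", 11.2);
}
```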
I don't implement Singletons either personally or professionally (for the reasons outlined by Ravyne and SiCrane) unless explicitly directed to do so, but I have worked on projects that did use them, and overall I was glad they existed, as they made me more productive on the whole. In those instances the dangers of using other people's singletons in already singleton-dependent systems never materialized, and the time I sank into writing beautiful, self-contained, stateless, singleton-free code never paid off. Academic excellence vs. pragmatism: it's a tradeoff worth considering. Mostly I'm playing devil's advocate here, as I find blanket statements about a design paradigm being all good or all bad misleading. Anyway, this is likely to get flamey... it already is and people aren't even disagreeing yet. I'm out.