The only reason it appears to be working is that the finalizers for the other classes are run as the program terminates. In a game, that is unlikely to be the case unless you run GC.Collect() every loop (though even this won't help finalize some things, like out-of-scope threads).
But it still runs when the GC runs. I already acknowledged that it won't run immediately, but if you can guarantee that the GC will eventually collect that object, the finalizer will run, the object will be disposed, and there won't be a leak. Naturally, this becomes a problem if we want to actually recover from the exception, as the now potentially broken object will hang around until the next GC run. But it was never specified that the example required recovery from whatever exception Danger throws, so I assumed that, as in almost all cases I've encountered, it didn't, and that the program terminating upon hitting an exception was desirable behaviour I could take advantage of. I personally prefer that my games crash on an exception even in C++, if only because the exceptions' existence becomes much more apparent, which usually causes me to notice and fix the underlying problem more quickly.
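To make the pattern I'm relying on concrete, here's a minimal sketch of the standard dispose pattern with a finalizer as a safety net. The class and method names are my own invention for illustration; the point is just that if Dispose() is never called, the finalizer still releases the resource whenever the GC eventually gets to the object:

```csharp
using System;

// Hypothetical resource type following the standard dispose pattern:
// Dispose() for deterministic cleanup, finalizer as the GC-driven fallback.
class Resource : IDisposable
{
    private bool disposed;

    public void Dispose()
    {
        Dispose(true);
        // Cleanup already happened; the finalizer doesn't need to run.
        GC.SuppressFinalize(this);
    }

    // Finalizer: the GC's "last chance" cleanup path.
    ~Resource()
    {
        Dispose(false);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed) return;
        disposed = true;
        if (disposing)
        {
            // Release managed resources here.
        }
        // Release unmanaged resources here.
        Console.WriteLine("Resource released");
    }
}

class Program
{
    // Allocate in a separate method so no local variable keeps the
    // object reachable when we force a collection below.
    static void Allocate()
    {
        new Resource(); // "forgotten" - Dispose() is never called
    }

    static void Main()
    {
        Allocate();
        // Forcing a collection here only to demonstrate the fallback;
        // normally the GC decides when this happens.
        GC.Collect();
        GC.WaitForPendingFinalizers();
    }
}
```

Of course, as discussed, *when* that finalizer runs is entirely up to the GC, which is exactly the caveat being argued about.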
Understand, I'm not arguing in the slightest with the idea that C# is worse off for its lack of full RAII, just that its "pseudo-RAII" is not as bad a situation as you're making it out to be, provided a few caveats are observed and a few hoops jumped through. It's a pity stack types can't have destructors in C# - that would bring C#'s "pseudo-RAII" closer to actual RAII.
Better still, throw an exception just before the end of your program (just after the new ReadLine() position) and (in Mono on x86 Linux) the other Dispose() calls will never even be made. (This is an example of code that is not exception-safe.)
This is quite true, and is the purpose of the try/catch in the Main() function of this test program. That try/catch block ensures that the program does not crash before the garbage collector runs for the final time. Again, if that is not desirable or possible, this all falls down at our feet, as I once again acknowledge. I've not seen many cases in C# where it is not desirable, however.
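To illustrate what I mean by that try/catch in Main(), here is a stripped-down sketch (Run() is a hypothetical stand-in for the body of the test program). Because the exception is caught at the top level, the program reaches the end of Main() normally and shuts down cleanly, at which point finalizers for leaked objects get their chance to run on runtimes that run finalizers at exit (which, to be clear, the spec does not guarantee):

```csharp
using System;

class Program
{
    static void Main()
    {
        try
        {
            Run(); // hypothetical: the actual work of the test program
        }
        catch (Exception e)
        {
            // Catching here prevents an abrupt crash, so shutdown proceeds
            // normally and pending finalizers can still run.
            Console.WriteLine("Unhandled: " + e.Message);
        }
    }

    static void Run()
    {
        // Simulate the failure path under discussion.
        throw new InvalidOperationException("boom");
    }
}
```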
The only reason danger is disposed is that it is in a "using" statement, but unfortunately "using" only works within a block, so it can't help ensure "something" is cleaned up.
Interestingly, I've found that danger is NOT disposed by the using itself. If I remove the destructor/finalizer from the Danger class, I get this:
So clearly, that using block never calls Dispose(), because otherwise removing the Dispose() call from the finalizer would not have changed the output. I guess using blocks don't call Dispose() if an exception occurs in the constructor of whatever is being managed by the using block, which I must admit is a bit disappointing.
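This behaviour actually follows from how the compiler expands a using statement: the resource acquisition happens *before* the try/finally that guards Dispose(), so an exception thrown from the constructor escapes before the finally block ever exists. A small self-contained demonstration (my own Danger stand-in, not the original test program's class):

```csharp
using System;

class Danger : IDisposable
{
    public Danger()
    {
        Console.WriteLine("constructor");
        // Throwing here means the using statement never gets a
        // reference to dispose.
        throw new Exception("thrown from constructor");
    }

    public void Dispose()
    {
        Console.WriteLine("Dispose called");
    }
}

class Program
{
    static void Main()
    {
        try
        {
            // The using statement expands to roughly:
            //   Danger d = new Danger();   // <-- throws here
            //   try { /* body */ } finally { if (d != null) d.Dispose(); }
            // The exception escapes before the try/finally is entered,
            // so Dispose() is never called.
            using (var d = new Danger()) { }
        }
        catch (Exception)
        {
            Console.WriteLine("exception caught; Dispose was never called");
        }
    }
}
```

Which makes sense when you think about it: if the constructor throws, there is no fully constructed object for the using block to dispose of in the first place.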
So, in my opinion, the suggested ease of C# does not outweigh its design flaw, and so I will never recommend it to people in threads such as these.
So you discourage beginners from using managed languages in general, I take it?
I wouldn't call it a "design flaw," personally. As has been mentioned, RAII and garbage collection don't really play nice together. If you're using a language with garbage collection, chances are very good that you don't actually care very much about exactly when your resources get released, even in exceptional conditions - you just care that they ARE released, at some point. In fact that's... kind of the whole point of garbage collection.
Edited by Oberon_Command, 11 September 2012 - 05:18 PM.