By your definition of a GC ("clean up resources so the programmer doesn't have to"), the automatic reference counting used in Swift is also GC, because it "just works" ("the compiler does it, not the programmer") — yet you just said yourself that Swift has no GC. Destructors are deterministic garbage collectors in the sense that both automatically clean up resources when the program is done with them, in a manner the programmer doesn't have to directly manage (the compiler inserts the calls, not the programmer).
Ok, yeah, caught red-handed by my own misuse of the word.
Should have said "automatic resource management", which both reference counting and garbage collection are a part of.
So I stand corrected: "garbage collection" is a secondary process that scans for unused resources and cleans them up. (Not necessarily memory, but I don't know of any GC that cleans up non-memory resources - at least not directly.)
From a performance standpoint, both GC and ARC (not a fan of that acronym, but it works) clean up resources. One does it in-line with the code; the other does it at some later time (hopefully when the CPU isn't busy).
For the record, I've used systems that added a "deferred deleter" to C++ because the cost of deleting everything at once was too much and it was much better to put the unused objects into a queue to be deleted later in small increments. (A very rudimentary version of GC)
I'm curious: have you tried doing this on a large scale? I still contend that if you're going to use a GC language, then you need to play nice with the GC.
I've spent an awful lot of time over the last couple of years refactoring swathes of Java code (or porting it to C++) to reach the "zero allocations" bar that one absolutely must hit if one wants a fluid and responsive Android application. I'm not kidding about this - you will not be able to reliably hit 60fps if there are any allocations in your rendering or layout loops. And Java doesn't have struct types, so that means no short-lived compound types at all...
I use C# for small scale tools mostly, where I don't have to worry about the GC too much other than knowing how to handle unmanaged resources via IDisposable and "using". Also I tend to avoid generating excess garbage in the first place.
I have had to do one redesign in a real-time project to pool objects rather than allocating them.
So yes, C# leans towards "allocation and pointers". On the flipside, C/C++ leans towards "stack and copying", which has its own problems - most notably, if you take a very large object by value in a function parameter (which is the default in C/C++). Though that is fixed with only a few characters, whereas in C# it's more difficult to go the other way.
I'm trying to point out the argument cuts both ways. You can't say that "C# is slow by design" and ignore that the de-facto "high-performance" language (C++) also has slow features (e.g. virtual dispatch).
I would argue: "Everything on the heap" is at the core of C#. Virtual dispatch is not at the core of C++ in the same way.
Virtual dispatch is a pretty core feature of C++'s OOP mechanisms.
And I've seen some pretty ridiculous things written in C++ to avoid the cost (either of the dispatch or of the vtable pointer). For example: an array of small polymorphic objects stored by value, where only the object holding the array knows the type of the elements and therefore has to do the pointer arithmetic manually, as well as look up the correct "vtable" array kept somewhere else (which each type had to register function pointers into on startup) - all because the cost of adding a vtable pointer to each array element was deemed too high.
As I already pointed out - C# does not require the heap for everything, and garbage collected (or at least loosely-pointed) memory has its own advantages (memory compaction).
It doesn't, but then you're fighting the language. Trying to manage an array of value types comes with significant limitations in C#. True though, memory compaction is a potential advantage. And in practice, if you use an array of reference types and pre-allocate all the objects at the same time, they tend to end up sequential in heap memory anyway - so that does mitigate some of the cache performance issues.
Agreed.
I do not pretend to think C# does not have some bad design decisions (or at least "bad" in some sense of the word, as the designers of the language picked one option out of several and "performance" was not the top driving force, unlike C++).
I simply think that people who try to paint the entire language (or the entire swath of "high level" languages, whatever that means) as "slow" because "it doesn't do this specific thing as fast as this other language" are rather... misinformed. Or at the very least trying to start an argument. But hey, we now have this thread, so I guess they succeeded (well, this has been more a discussion than an argument).
Programmers have proven time and time again that they are more than willing to sacrifice raw "speed" simply so they can actually make a product that works and isn't a pain to maintain. Otherwise, again, we'd all be doing hand-tuned assembly.