The GC may be amazing, but why is barring you from any control an amazing feature? Wouldn't it be nice if you could opt in to specifying the sizes of the different heaps, hinting at good times to run the different phases, setting runtime limits, providing your own background threads instead of getting them automatically, and so on? Would it harm anything to let devs opt in to that stuff? Do the amazing features require the GC to disallow these kinds of hints?
You can do a lot of that now. You've always been able to hint that a new collection should be run (which you might want to do right after loading a new level, say). Newer versions of .NET let you disable the GC and turn it back on again for sections of your code, so you could leave it disabled for your main loop and then flip it back on again during level load. There are different levels to this feature as well; you can set it to *never* run, or set it to "low latency", where it almost never runs unless you get critically close to running out of memory. You can also manually compact the LOH, letting you choose good times to reduce fragmentation.
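All of the hooks above are plain framework APIs. Here's a sketch of what they look like in practice (`RunMainLoopIteration` is a hypothetical stand-in for your game loop body; the 16 MB no-GC budget is an arbitrary number you'd tune):

```csharp
using System;
using System.Runtime;

class GcControlSketch
{
    public static void Main()
    {
        // Hint that now is a good time for a full collection,
        // e.g. right after loading a new level.
        GC.Collect();

        // "Low latency" mode: the GC avoids blocking collections
        // unless memory pressure forces its hand.
        GCSettings.LatencyMode = GCLatencyMode.SustainedLowLatency;

        // Disable the GC entirely for a critical section, as long as
        // allocations inside stay within the budget reserved up front.
        if (GC.TryStartNoGCRegion(16 * 1024 * 1024)) // 16 MB budget
        {
            try
            {
                RunMainLoopIteration(); // hypothetical game-loop body
            }
            finally
            {
                // Note: this throws if the budget was blown and a
                // collection already ended the region for you.
                GC.EndNoGCRegion();
            }
        }

        // Manually compact the large object heap on the next full
        // collection -- pick a moment when a hitch doesn't matter.
        GCSettings.LargeObjectHeapCompactionMode =
            GCLargeObjectHeapCompactionMode.CompactOnce;
        GC.Collect();
    }

    static void RunMainLoopIteration() { /* simulate one frame */ }
}
```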
If you want even more control, like taking full control of the GC's thread scheduling or setting size limits, you can host the CLR yourself, similar to how Unity works. There are a crazy number of knobs to tweak there.
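And on modern .NET (Core 3.0+) you don't even need to host the runtime for a lot of this; many GC knobs are exposed through `runtimeconfig.json`. A sketch (the specific values here are made up, and `HeapCount` only applies when server GC is enabled):

```json
{
  "runtimeOptions": {
    "configProperties": {
      "System.GC.Server": false,
      "System.GC.Concurrent": true,
      "System.GC.HeapHardLimit": 268435456,
      "System.GC.HeapCount": 4
    }
  }
}
```

`HeapHardLimit` in particular gives you a console-style fixed memory budget: exceed it and you get an `OutOfMemoryException` instead of the process quietly growing.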
Of course, the simplest advice, which sidesteps all of this, is the same as it has always been in both the managed and native worlds: don't heap allocate during the main loop of your game. Not necessarily easy, but simple to understand. It's certainly easier to do in C++, but it's doable in C# too (in fact, it was almost a hard requirement for Xbox Live Arcade XNA games, since the Xbox 360's GC was pretty crappy). Unlike some other managed languages that will remain unnamed, the .NET CLR supports value types, so with just a bit of effort you can heavily cut down on the amount of garbage you generate.
For the times you absolutely need heap allocations but really need to avoid the managed heap, you can always just *allocate native memory* anyway! There's nothing stopping you from malloc-ing some native memory blocks and doing your work there. I do this pretty commonly in my own projects for certain classes of memory where I need explicit control over lifetime or alignment.
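In C# that can be as simple as `Marshal.AllocHGlobal`, which hands you a raw block from the native heap that the GC never sees. A minimal sketch (the 1 KB size is arbitrary):

```csharp
using System;
using System.Runtime.InteropServices;

class NativeScratch
{
    public static void Main()
    {
        // Grab 1 KB straight from the native heap; the GC never
        // touches it, so its lifetime is entirely in our hands.
        IntPtr block = Marshal.AllocHGlobal(1024);
        try
        {
            Marshal.WriteInt32(block, 0, 42);        // write at offset 0
            int value = Marshal.ReadInt32(block, 0); // read it back
            Console.WriteLine(value);                // prints 42
        }
        finally
        {
            // Explicit control cuts both ways: forget this and you leak.
            Marshal.FreeHGlobal(block);
        }
    }
}
```

On .NET 6+ there's also `NativeMemory.Alloc`/`Free` if you're in an `unsafe` context and want aligned allocations.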