Posted 22 October 2006 - 07:41 PM
Posted 23 October 2006 - 01:11 AM
Posted 23 October 2006 - 01:42 AM
Posted 23 October 2006 - 05:21 AM
Quote:On the bright side, when it comes down to that (which it sometimes does), you can break out the C++/CLI and write that SSE code, but have it directly callable from the C#. (Or for non-Windows platforms, you can use a C API shared object and P/Invoke to it.)
Original post by C0D1F1ED
There's one important exception that I really have to mention though. When the C++ version contains some assembly blocks using highly specialized instructions, there is no way C# can match it. For example Doom 3 has some SIMD optimized functions, and they are two to three times faster than optimized C++. Since C# doesn't support assembly in any way (except in external libraries) and automatic vectorization is a utopia, there will always be 'C++' applications that outperform C#. Even when not using assembly explicitly, C++ gives better control over what code is generated.
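To make the point above concrete, here is a minimal sketch of the kind of SIMD routine being described: an SSE dot product written with intrinsics in C++ and exported through a C API so it could be P/Invoked from C#. The function name and setup are illustrative assumptions, not code from Doom 3.

```cpp
// Hypothetical example: an SSE-accelerated dot product exposed through a
// C-linkage API so it can be P/Invoked from C#.
#include <xmmintrin.h>  // SSE intrinsics
#include <cstddef>

extern "C" float dot_sse(const float* a, const float* b, size_t n)
{
    __m128 acc = _mm_setzero_ps();
    size_t i = 0;
    for (; i + 4 <= n; i += 4)  // process 4 floats per iteration
        acc = _mm_add_ps(acc, _mm_mul_ps(_mm_loadu_ps(a + i),
                                         _mm_loadu_ps(b + i)));
    float lanes[4];
    _mm_storeu_ps(lanes, acc);  // horizontal sum of the 4 SSE lanes
    float sum = lanes[0] + lanes[1] + lanes[2] + lanes[3];
    for (; i < n; ++i)          // scalar tail for leftover elements
        sum += a[i] * b[i];
    return sum;
}
```

On the C# side this would be declared with a `[DllImport]` attribute pointing at the compiled shared object, which is exactly the "C API shared object and P/Invoke" route mentioned earlier in the thread.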
Posted 23 October 2006 - 10:27 AM
Posted 23 October 2006 - 10:58 AM
Original post by Anonymous Poster
I would be interested in _REAL_ first-hand experience with the GC during a game loop.
Is it perceptible? Do you think that for a big game engine, after a long time of playing, it would become noticeable and even make the game less playable?
Because apart from that, execution speed seems only affected by a constant factor, and we all know that's not _THAT_ important (not to say meaningless)... The only thing that keeps me from switching to C# is the fear of having (let's project into the future) 16 threads on a multicore CPU constantly waiting on one thread doing the slow GC every x sec.
Posted 26 October 2006 - 07:48 PM
Posted 26 October 2006 - 08:21 PM
Quote:A GC is always triggered by an allocation, and so it happens in the context that attempted the allocation.
Original post by Anonymous Poster
The only thing that keeps me from switching to C# is the fear of having (let's project into the future) 16 threads on a multicore CPU constantly waiting on one thread doing the slow GC every x sec.
Posted 27 October 2006 - 02:27 AM
Original post by BlueHabu
You are right, you said:
Quote:Since this is a "C++ or C#?" thread, what are you comparing if not C++ performance to C#?
Original post by Promit
Performance would probably be (significantly?) better in C#.
On a side note: Does anyone not want to see Promit prove C# is faster than C++?
Posted 27 October 2006 - 10:32 AM
Posted 27 October 2006 - 04:06 PM
Original post by Promit
A GC is always triggered by an allocation, and so it happens in the context that attempted the allocation.
There is no garbage collecting thread. None at all.
Now, the danger is that a GC takes out a lock, so if another thread tries to perform an allocation during a GC, it will block until the GC has finished. Then again, Solaris is the only system I know where malloc doesn't take an exclusive lock on the heap. And a gen0 GC (the only kind that should happen regularly) is really fast. So in .NET an allocation risks incurring a slightly long GC that may block other threads; in native code, every allocation is guaranteed to incur a slightly long linked list traversal that may block other threads.
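The "exclusive lock on the heap" point above can be sketched with a toy allocator. This is not the CRT's real malloc, just a minimal illustration of the behavior being described: every allocation and free takes the same mutex, so two threads hitting the heap at once serialize on it.

```cpp
// Toy free-list allocator illustrating the exclusive heap lock that
// malloc-style allocators take (illustrative, not a real CRT heap).
#include <mutex>
#include <vector>
#include <cstddef>

class ToyHeap {
    std::mutex heap_lock;          // the exclusive lock on the heap
    std::vector<void*> free_list;  // recycled blocks (one size, for simplicity)
    size_t block_size;
public:
    explicit ToyHeap(size_t sz) : block_size(sz) {}

    void* allocate() {
        std::lock_guard<std::mutex> guard(heap_lock);  // other threads block here
        if (!free_list.empty()) {
            void* p = free_list.back();  // reuse a freed block
            free_list.pop_back();
            return p;
        }
        return ::operator new(block_size);  // otherwise get fresh memory
    }

    void deallocate(void* p) {
        std::lock_guard<std::mutex> guard(heap_lock);  // same lock as allocate
        free_list.push_back(p);
    }
};
```

The contrast drawn in the post is that a .NET gen0 collection occasionally holds the equivalent lock for longer, while a native allocator holds it on every single allocation.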
Posted 27 October 2006 - 06:32 PM
Quote:Yes. I know the architecture of the Java GC (it's very nearly identical to CLR) but haven't actually looked up the trigger conditions. I'll take a look some time in the next couple days.
Original post by Raghar
So you are saying "new" has nondeterministic duration? That's bad news; Java programmers, for example, are used to a virtually free "new".
Quote:Well the C allocator's also going to execute non-deterministically, and it also takes that lock. Except for pathological cases (which seem to be more prevalent in server applications), .NET's memory system performance is fantastic.
Still probably better than C++, if not for that mutex.
Quote:I don't know what you're referring to with the stacks. Nothing's stopping you from implementing object pools, however. So if you want to use pools to reduce the amount of allocations happening, go ahead. Beware, however, that your perceptions of what constitutes bad memory behavior may not be accurate. The CLR Perf guys have indicated that for games, we can allocate about 500K per frame (30 MB/s) of extremely short lived objects without seeing any performance effects at all. I don't know about you, but I was absolutely amazed at how large that figure is. That gives us a lot of memory to play with. 500K/frame. Damn. I can't even figure out what I'd do with that much.
Could C# use multiple stacks, and proper preallocation of memory? What about on-stack replacement?
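The object-pool suggestion in the quote above can be sketched as follows. This is an illustrative C++ version (a C# pool would look much the same with a `Stack<T>` holding the free objects); the class and method names are assumptions for the example.

```cpp
// Minimal object-pool sketch: preallocate objects up front and recycle
// them, so the per-frame allocation count drops (illustrative only).
#include <vector>
#include <cstddef>

template <typename T>
class ObjectPool {
    std::vector<T*> free_objects;  // preallocated, currently unused objects
public:
    explicit ObjectPool(size_t prealloc) {
        free_objects.reserve(prealloc);
        for (size_t i = 0; i < prealloc; ++i)
            free_objects.push_back(new T());
    }

    T* acquire() {
        if (free_objects.empty())
            return new T();        // pool exhausted: grow on demand
        T* obj = free_objects.back();
        free_objects.pop_back();   // hand out a recycled object
        return obj;
    }

    void release(T* obj) { free_objects.push_back(obj); }

    ~ObjectPool() {
        for (T* obj : free_objects) delete obj;
    }
};
```

In a managed runtime the motivation is slightly different than in C++: pooling keeps objects from being allocated and abandoned every frame, which (per the quote) mostly matters once you exceed the surprisingly generous short-lived-allocation budget.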
Quote:Code executing under the CLR must have the FullTrust permission to call into native code, but if it has been granted FullTrust, it can do whatever it wants. Marshaling costs can be dangerous, as can security checks, so depending on the complexity of the call, you usually want the native code to be a "long" operation.
Does it have protection for access into native code? Could the programmer choose between fast access to native code and safe access to native code?
Quote:I didn't read the CLI spec (that's ECMA 335) but seeing as how the compact framework uses a somewhat different GC/allocation subsystem, it stands to reason that the spec only mandates external behavior that would affect applications directly.
And what about the memory model? Did MS write that part of the specification properly?
Quote:I don't see how swapping comes into the mix. The runtime is not, to my knowledge, aware of system wide memory pressure concerns, or what's being swapped. A GC is triggered when the current managed heap for the current process is full, and if the heap is still full after the GC, then a new segment is allocated into the heap. (As a sidenote, I've seen your past comments on Windows swap behavior, and I think you're completely and utterly wrong.)
Of course the major slowdown in GC-based languages is when one piece is swapped out to the HD and the GC wants to collect it at any cost. Considering windoze doesn't have very smart swapping, it's bad news.