
Archived

This topic is now archived and is closed to further replies.

C# Cons?


Recommended Posts

I've been reading up on C# and found that it does many of the things I've been writing code for myself, via the CLR. It also has a few advantages over C++ (the biggest one being garbage collection, which I can't wait for MS Office, Internet Explorer, Outlook and all those applications to start using). But nothing that good comes free. I know it takes more processing power because everything runs through the CLR, which is constantly running in the background, but what else? Do you lose some of C++'s low-level power? Do you lose speed in the compiled binaries? What about fail-safes, such as the const keyword? I've heard they got rid of many "useless" keywords, but I personally love them, since compile-time errors are much better than run-time ones.

Pat - Ex nihilo nihilus

[edited by - patindahat on August 10, 2003 4:35:24 PM]
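On the const point specifically: C# did keep compile-time immutability checks, in the form of the const and readonly keywords, so these violations are still caught by the compiler rather than at run time. A minimal sketch (class and field names are illustrative):

```csharp
using System;

class ConstDemo
{
    const double Pi = 3.14159;        // compile-time constant
    public readonly int seed;         // fixed after construction

    public ConstDemo(int seed) { this.seed = seed; }

    static void Main()
    {
        // Pi = 3;     // compile error: cannot assign to a const
        var d = new ConstDemo(42);
        // d.seed = 7; // compile error: readonly field outside constructor
        Console.WriteLine(Pi);        // prints 3.14159
        Console.WriteLine(d.seed);    // prints 42
    }
}
```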

"Lose," not "loose." Pet peeve.

Anytime you run on an abstraction layer you lose some low-level power. The question is how much of it you really use. At work we frequently encounter code that uses pointers and have to invent workarounds as we port things to C#.

I miss templates quite a bit in C#; that's why I continue to develop things in C++ for the moment.

They didn't get rid of "useless" keywords (I can't think of any keyword that's useless at this second); they added more to complement the increased feature set. Most are pretty reasonable; my favorite has to be "unsafe." If only I could #define that to be "dangerous" or "risky."
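As an aside on "unsafe": the keyword marks the one place where C# lets raw pointer code through, which is the usual escape hatch when porting the pointer-heavy C++ code mentioned above. A small sketch of what such a block looks like (the helper name is made up; compile with the /unsafe switch):

```csharp
using System;

class UnsafeDemo
{
    // Hypothetical example of code that needs the "unsafe" keyword.
    public static unsafe int SumInPlace(int[] data)
    {
        int total = 0;
        fixed (int* p = data)      // pin the array so the GC won't move it
        {
            for (int i = 0; i < data.Length; i++)
                total += *(p + i); // raw pointer arithmetic, as in C++
        }
        return total;
    }

    static void Main()
    {
        Console.WriteLine(SumInPlace(new[] { 1, 2, 3, 4 })); // prints 10
    }
}
```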

There is a wealth of discussion on the net about C# performance tradeoffs, including one very long post detailing why the garbage collector works the way it does, and why there isn't deterministic finalization. It's worth reading.

Thank you for the opinion... That's exactly what I'm looking for. One last quick question before I make my decision on which version of Visual Studio to buy: can you write unmanaged C++ code in Visual C++ .NET?

Pat - Ex nihilo nihilus

quote:
Original post by flangazor
What does this thread have to do with Cons? There are no cons cells mentioned anywhere in this thread.


Do you feel like you were tricked into reading this thread because of the title?

quote:
Original post by Ratman
quote:
Original post by flangazor
What does this thread have to do with Cons? There are no cons cells mentioned anywhere in this thread.


Do you feel like you were tricked into reading this thread because of the title?

I do. I want my money back.

Closure <= Object
Reading this paper on lambda-dropping, I was reminded of something that nagged me while writing the Hotdog Scheme compiler. The other functional language compiler writers were using the same encoding for .NET's OO bytecode language (CIL) as they did for C: lambda-lift all functions to reduce environments, and make direct calls to static methods with the environment, should there still be one, as the first argument. Rather than use this encoding, I implemented the other obvious one: a closure is an object with a single "apply" function, where private fields in the object capture the environment (I used a private array). I quickly added a "case-lambda" form to generate a closure with multiple entry points distinguished by arity. This made + and others faster, because most calls could avoid the inefficient varargs path.
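In C# terms, the closure-as-object encoding described above might look like the following sketch. The class names and the one-element environment are illustrative, not actual Hotdog compiler output:

```csharp
using System;

// A closure is an object with a single virtual "apply" method;
// captured variables live in a private environment array.
abstract class Closure
{
    public abstract object Apply(object arg);
}

// What a compiler might emit for: (lambda (y) (+ x y)), with x captured.
sealed class AddX : Closure
{
    private readonly object[] env;                 // captured environment
    public AddX(object[] env) { this.env = env; }

    public override object Apply(object arg)
    {
        // env[0] is the captured x; a constant index compiles down to
        // a direct access with a little indirection.
        return (int)env[0] + (int)arg;
    }
}

class Demo
{
    static void Main()
    {
        Closure addFive = new AddX(new object[] { 5 });
        Console.WriteLine(addFive.Apply(3)); // prints 8
    }
}
```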

The criticism of this approach is that (1) object allocation is slow, (2) accessing the environment variables is slow, (3) virtual method calls are slow, and (4) it generates too many type definitions. (2) is false: OO runtimes are optimized to access private fields very fast; furthermore, accessing an array, as I do, with a constant index gets compiled into a direct access with a little indirection. (3) is false: virtual calls are a few machine instructions more expensive than a static call, and only matter when a virtual method is several layers up in the type hierarchy. (4) is true, but its impact on program performance is dubious. (1) is true, but closures are not usually allocated at a very fast clip the way cons cells are.

Conventional closure conversion allocates a vector, where the first cell is a pointer to the function and the rest contain the environment values. The function is called with the vector passed in as the first argument. This is EXACTLY how instance methods work. The difference between a closure and an object is that an object can contain an arbitrary number of function pointers and has runtime support for dispatching to the right method. Therefore, as the title of this post implies, an object is equal to and greater than a mere closure. The trick is to exploit objects in new ways to make closures execute faster.
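For contrast, conventional closure conversion can also be sketched in C#: a pair of a lambda-lifted static function and its environment vector, with the environment passed as the explicit first argument, exactly the role `this` plays for instance methods. All names here are illustrative:

```csharp
using System;

// The lambda-lifted code takes its environment as an explicit argument.
delegate object Code(object[] env, object arg);

// A "flat" closure: function pointer plus environment vector.
class FlatClosure
{
    public readonly Code Fn;
    public readonly object[] Env;
    public FlatClosure(Code fn, object[] env) { Fn = fn; Env = env; }

    // Calling the closure passes the environment first, like `this`.
    public object Invoke(object arg) => Fn(Env, arg);
}

class Demo2
{
    // Lambda-lifted body of (lambda (y) (+ x y)); env[0] holds x.
    static object AddXBody(object[] env, object arg)
        => (int)env[0] + (int)arg;

    static void Main()
    {
        var addFive = new FlatClosure(AddXBody, new object[] { 5 });
        Console.WriteLine(addFive.Invoke(3)); // prints 8
    }
}
```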

Two simple things should improve the generated code with respect to (1) and (4). First, do everything possible to get things into tail-call form, which compiles into ordinary loops. Second, closures which do not escape their definition do not need to be allocated; instead, generate a private method within the same closure class. Both of these reduce the number of closures (i.e., types) and the number of allocations. Mostly it will be top-level definitions (allocated only once) and a few first-class functions that get allocated at all. This should vastly improve things.
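A hand-written sketch of what both optimizations aim to produce, assuming a hypothetical source function whose tail recursion becomes a loop and whose inner lambda never escapes, so it becomes a private static method instead of an allocated closure:

```csharp
using System;

class LoopDemo
{
    // The non-escaping inner lambda, compiled to a private method:
    // no closure object is ever allocated for it.
    private static int Square(int x) => x * x;

    // A tail-recursive sum-of-squares, compiled into an ordinary loop.
    public static int SumOfSquares(int n)
    {
        int acc = 0;
        while (n > 0)           // the tail call becomes a back-edge
        {
            acc += Square(n);   // direct call, no virtual dispatch
            n--;
        }
        return acc;
    }

    static void Main()
    {
        Console.WriteLine(SumOfSquares(3)); // prints 14 (9 + 4 + 1)
    }
}
```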

A combination of incremental lambda-lifting and parameter dropping can be used to get mutually recursive inner lambda forms to share the same signature. This way, tailcall, which is slow on .NET, can be replaced with a jump instruction. Local-CPS analysis, described by Reppy, can find more loops, which will be really fast. Also, propagating type information and fixing the method signatures should make private method calls more efficient by avoiding typechecks and box/unbox overhead. This is a tougher one for Scheme. If the damn inliner on .NET worked right, these optimizations would allow the runtime to inline hotspots and produce really fast code.

All this would apply to a single "define" form and all its internal lambdas. However, by wrapping a module system around a group of "define" forms, the analyses could be applied to the whole module. And if you run your Scheme program in "unverified" mode, it could approach the speed of cleanly written C# code. That ain't so bad.



Guest Anonymous Poster
quote:
Original post by antareus
...including one very long post detailing why the garbage collector works the way it does, and why there isn't deterministic finalization. It's worth reading.

Do you mean Brian Harry's "Resource management": http://discuss.develop.com/archives/wa.exe?A2=ind0010A&L=DOTNET&P=R28572

