What kind of optimization makes C++ faster than C#?



#41 SimonForsman   Crossbones+   -  Reputation: 5719


Posted 29 December 2012 - 12:07 AM

The same is true for C/C++ versus hand-written assembler code. I've been writing assembler since 1983, but I regularly find it hard to match (match, not outperform!) optimized C++ code nowadays. If anything, I use intrinsic functions now, but writing "real" assembler code is a total waste of time and an anti-optimization for all I can tell. The compiler does at least as well, and usually better. You may be able to work out an example where you gain a few cycles over the compiler, but on average you'll be totally annihilated.

It's not just compilers getting better; hardware has also become far more complex. The 286, for example, had no cache and a one-stage pipeline: all instructions were executed in the order they were written, so you could take any section of your code, check the manual, and tell exactly how long it would take to execute. Optimizations could be done on paper without any problems. Today's CPUs are complex and scary monsters, and the speed at which a given piece of code runs depends heavily on the context in which it runs, so optimizing it can require quite extensive analysis.

Our brains aren't improving at any noticeable rate, so unless we stop increasing CPU and software complexity, we will have to push more of the grunt work over to the machines that are improving.

That doesn't mean we have to abandon the ability to fiddle at a lower level when necessary, though. C++ offers a fairly good balance between high-level functionality and low-level access, and since it is reasonably easy to integrate Lua or Python as scripting languages in a C++ application, the lack of higher-level functionality becomes less of a problem. (I'd personally avoid using C++ when possible, but it is still a language worth learning.)
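To make the intrinsics point above concrete, here is a minimal sketch (assuming SSE is available; the function and its names are purely illustrative):

```cpp
#include <xmmintrin.h>  // SSE intrinsics

// Add two float arrays four lanes at a time. We pick the instructions,
// but the compiler still handles register allocation and scheduling --
// the part it usually does better than a human writing raw assembly.
void add_arrays(float* dst, const float* a, const float* b, int n)
{
    int i = 0;
    for (; i + 4 <= n; i += 4)
    {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(dst + i, _mm_add_ps(va, vb));
    }
    for (; i < n; ++i)  // scalar tail for the leftover elements
        dst[i] = a[i] + b[i];
}
```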

Edited by SimonForsman, 29 December 2012 - 12:24 AM.

I don't suffer from insanity, I'm enjoying every minute of it.
The voices in my head may not be real, but they have some good ideas!


#42 Karsten_   Members   -  Reputation: 1371


Posted 29 December 2012 - 06:37 AM

Over the next 5, 10, or 20 years, the differences and advantages of C++ or C# will continue to close, 

 

This brings me onto my biggest issue with C#.

 

It won't be around in 5 years. It is not a continuation of C. It is a product from Microsoft and, like many of their products before it, it will be dropped once the next new thing comes out.

 

They are already advising people to develop using C++/CX rather than .NET in many of their talks.

 

Anyone remember Visual Basic? It used to be as popular as C# is today. Since they dropped it and then emulated it using .NET, it is now a niche (teaching) language.

 

Anyone remember J#? Microsoft used it to compete with Java in the short term and then dropped it in about a week. Sure, C# is there to compete with Java in the long term, but then it will be dropped like a sack of potatoes just as quickly.

 

Anyone remember Microsoft Managed C++? That was the last time I wasted effort learning a non-standard C++ extension. Now I have code that is effectively useless without a serious amount of time and money spent porting it. What is worse, the only platform it runs on is now EOL.

 

So I advise developers to stop messing around with novelty languages and use the standard C++ language, so your customers can still play your games in a few years' time once platforms have changed. Otherwise I find it a tad careless, tbh...


Mutiny - Open-source C++ Unity re-implementation.
Defile of Eden 2 - FreeBSD and OpenBSD binaries of our latest game.


#43 Hodgman   Moderators   -  Reputation: 27031


Posted 29 December 2012 - 07:21 AM

It won't be around in 5 years. It is not a continuation of C. It is a product from Microsoft and, like many of their products before it, it will be dropped once the next new thing comes out.

It's defined by ECMA and ISO/IEC standards. It's used to make games on Sony, Nintendo, Apple and Google platforms. There is an open-source implementation of the .NET runtime (the CLR) for Linux and Mac: Mono. All that is out of Microsoft's hands.

Anyone remember J#?

Microsoft's Java knock-off evolved into C# when they started losing too many lawsuits over their attempts to kill Java...

Anyone remember Visual Basic? It used to be as popular as C# is today. Since they dropped it and then emulated it using .NET, it is now a niche (teaching) language.

The "old" VB stopped evolving in 1998, and was a niche language back then, used for teaching and 'scripting' mainly. It's still supported to some degree, with it's runtime being available on modern OS's...
The latest update to the current incarnation of VB (A.K.A. VB.NET) was this year! It doesn't make sense to say it's "emulated using .NET" -- it's a CLR language, like C#, which means it's compiled to CIL, like C# is, which means it can run on Mono, as above, so it's also usable in areas outside of Microsoft's control.

 

Unlike C#, neither the new nor the old incarnation of VB is defined by an open standard, so you're choosing to become dependent on a specific vendor when you choose to use them. Despite this, though, Mono does include an open-source, non-Microsoft compiler for modern VB (not the old VB). So, ironically, the new VB that you hate is actually less tied to Microsoft (i.e. more able to survive without their support) than the old one...

 

Anyone remember Microsoft Managed C++? That was the last time I wasted effort learning a non-standard C++ extension.

It wasn't really a C++ extension (despite being called "Managed Extensions for C++"), it was a (crappy) port of C++ to the CLR.

 

If you wanted to write in a very-C++-style language, but create programs for the CLR platform, then it was a necessary evil at the time. It was very badly designed though, so they got people who knew what they were doing to instead make C++/CLI, which does the same thing, but is better designed. C++/CLI, like C#, is also defined by an ECMA standard, meaning it's out of Microsoft's control. However, AFAIK, no one else has bothered to make a C++/CLI compiler, for whatever reason.

C++/CX is another stand-alone language, for people who want a C++-style language to create programs for the WinRT platform.

Personally, yes, I'd avoid all of these, unless you're forced to use those platforms, and have no real choice...


Edited by Hodgman, 29 December 2012 - 07:48 AM.


#44 samoth   Crossbones+   -  Reputation: 4466


Posted 29 December 2012 - 09:57 AM

Productivity, not language performance, is the key feature.

No, this is not accurate. It depends on the application domain. For some projects, performance is absolutely key. For others, not so much.
Unfortunately, this is exactly true. I don't like it any more than you do, but it is true. The biggest advantage of C++ over C (and of C#/Java over C++) is that you can hire a mediocre programmer to do the same thing that a very expensive, highly skilled programmer could otherwise do, and in 1/2 to 2/3 of the time.
Don't get me wrong, I'm not saying that C# programmers as such are inferior in any way. What I'm saying is that someone at considerably lower skill using C# can outperform someone at higher skill using C++ time- and cost-wise (replace C# with Java if you will). C# and Java come with huge standard libraries that are not only very complete, but also very easy to grok. Plus, automatic memory management.
That means that a programmer needs to have a lot less skill (and needs to use less time) to produce "something that works". Maybe not the best possible thing (this still requires someone with skill!), but something that works.

A lot of browser games are of embarrassingly poor quality and consume embarrassing amounts of resources to deliver something ridiculous in comparison. Who cares?
It takes a moderately skilled team 3 weeks to puke out something that sells. On the other hand, it takes a highly skilled team 3 years to produce something really good that also sells, but only 3 years later, after all competitors have already sold theirs. From a business perspective, which one is better?

Quality and performance do not matter as much as you think. As long as it sells, all is good. Did you ever wonder why every incarnation of [insert any software title] gets more bloated and slower without adding real value?

A WYSIWYG text processor / DTP program used to fit on a floppy disk and run on an 8 MHz processor with 512 kB of RAM in the mid-1980s. Written in C, on an operating system written in C, by the way. Computers at that time were entirely capable of performing well with C.

A program that does exactly the same thing today (apart from greatly improved, but still nowhere near perfect, spellchecking) runs on a computer with about 3000 times as much CPU power and about 8000 times as much main memory. And it doesn't truly work "better" or faster in any observable way.
Such a program typically has a working set upwards of 100 MiB just for showing an empty window, reserves upwards of 300 MiB of address space, and takes anywhere from 300 to 900 MiB on your harddisk.

So what is the conclusion? Software companies deliberately produce bad software to force people into buying bigger and more expensive computers? Of course not.

It is just much, much better for business. As long as people keep buying, you're only reducing profit by doing better. The good horse only jumps as high as it needs to. It isn't worth hiring a team of highly skilled people for something a low-wage guy can do, even if that means it's 30% slower (as long as people still buy).
Moore's law [...]
Moore's law was initially a 10-year extrapolation of an observation made by an Intel founder based on (questionable) data. It turned out, however, to be a very clever marketing strategy, followed ever since, and that is all Moore's "Law" really is: marketing.

C and C++ were very affordable on 15, 20, or 30 year old hardware, even with compilers of that time. A lot of very serious, good programs on the Atari ST and Amiga were written in GFA BASIC, which offered both a bytecode interpreter and a compiler. The performance of the GFA BASIC compiler was entirely sufficient for 99% of anything that you'd ever want to write at that time.

Every software running on the BeBox in the mid-90s was written using the Metrowerks C++ compiler (initially you had to cross-compile from Mac, what a joy!). Compared to today's compilers, MW C++ was embarrassingly poor. However, this was never an issue. Comparing my old dual-CPU 66MHz BeBox to my modern 4-core 2.8GHz Windows system, I see no substantial improvement in the "general feel" of most programs.
C is still basically the same language as it was in the 80s.
Well, yes and no. It is of course "basically" the same language, but that is true for C++ or Java too.

C has, over the years, gone a long way to make many things easier, more explicit and efficient, less ambiguous, and safer (headers like inttypes/stdint, restrict pointers, threading support, bounds checking, alignment, static assertions). In some way, if you compare C11 to, say, C89 or C90, or to K&R's original invention, it is "some completely different language".
The same is true for C++ (and probably Java, I wouldn't know... have not used Java since around 2003).
...it will become very clear to you that there is no possible way that a JIT compiled VM language (produced by your C# compiler) can be faster than native machine code
This is a very obvious truth, which should be clear even without reading academic papers.

JIT compiled code may, in some situations, and depending on the programmer's skill, perform better. A poor C# programmer may easily be able to outperform a poor C programmer, simply because the C# standard library is well-optimized, and a poor C programmer might not be able to properly implement a competitive algorithm. However, the same is not true when comparing skilled programmers.

In the end, anything that comes out of a JIT compiler is executed as native machine code, so assuming proper input (i.e. an equally skilled programmer) it can only ever be equally fast, never faster. However, unlike a normal compiler, a JIT compiler has a very hefty constraint: it has to run in "almost realtime". The end user expects something to happen more or less instantly when launching a program. Nobody wants to wait a minute or two. Or ten. Caching does help, but only to some extent.

A normal optimizing compiler runs offline on the developer's machine, and this happens just once. It does not matter that much whether a release build runs in 15 seconds or 45 minutes or 4 hours (build times for non-release are a different story). It also doesn't really matter whether compiling takes 2 or 6 or 10 gigabytes of RAM, because the developer's machine will have that much -- the end user doesn't care.

Therefore, the compiler has a lot of opportunities and a lot of freedom to do things that a JIT simply cannot afford. With that in mind, a JIT cannot, in general, be faster than a normal compiler either. It just isn't realistic, no matter how clever JIT gets.

Think of playing chess against Anatoly Karpov, except Karpov only has 2 seconds for every move, and is allowed to look at only half of the board. You, on the other hand, can take any amount of time you like, use a chess computer, and may consult any amount of experts you want. He may be the best chess player in the world, but it is astronomically unlikely that he will win.

Edited by samoth, 29 December 2012 - 10:38 AM.


#45 Telastyn   Crossbones+   -  Reputation: 3712


Posted 29 December 2012 - 05:48 PM

I see people talking about programmer productivity a lot, but I really wonder if this is really noticeable.

It is absolutely noticeable. Despite my reputation as a C++ hater, I spent about a decade using it as my primary language. Just switching to C# provided me about an order of magnitude productivity increase.
For example you have an NPC referencing another NPC as target, and this NPC is then deleted.

Sure, you have to deal with that anyways, but more often than not this scenario isn't your problem. It's trying to juggle the code so that you're sure to delete things that need deleting and the pointers to them get there. That overhead is not trivial.

That is certainly one part of the productivity gains. Another is the ability to have a large, well-written and modern standard library.

But what really takes the cake is tooling. C++'s design is so antithetical to partial evaluation that you can't even get decent error messages out of the thing, let alone IntelliSense or refactoring tools.
It wont be around in 5 years. It is not a continuation of C. It is a product from Microsoft and like many of their products before them, they will be dropped once the next newest thing comes out.

Have you looked around recently? Java's neglect and the universal distrust of Oracle have neutered its use in new development that isn't on Android. Scala hasn't gained a foothold due to its dependence on the JVM (and hence, Oracle) and its over-complexity. C++ hasn't been used for business development for more than a decade (and no, C++/CX isn't going to help that, since Windows 8 is being adopted by few people on mobile, and even fewer on the desktop). What else is there? Python? Not for enterprise development. Objective-C? Not outside of iOS.

C# might not be popular in 5 years (and I expect it will be waning by then), but that will be because something superior has come to replace it. Until then, even Microsoft doesn't have that much clout.

#46 3Ddreamer   Crossbones+   -  Reputation: 2902


Posted 29 December 2012 - 07:54 PM

The industry, and specifically Microsoft, have development cycles which affect what is happening. The standard business model is to overlap and stagger the cycles where possible to smooth the income flow.

 

This brings me onto my biggest issue with C#.

It won't be around in 5 years. It is not a continuation of C. It is a product from Microsoft and, like many of their products before it, it will be dropped once the next new thing comes out.

 

C# and libraries are simply far too good to drop any time in the foreseeable future.

 

Microsoft and millions of companies and individuals use C# worldwide. In business applications, C# use is accelerating (even in non-USA markets), as is also the case with several other major languages, especially scripting ones.

 

Karsten, why do you believe that C# is so popular? Do you think it is because Microsoft promotes it, or does the use of C# grow because it is a fantastic language with great existing support? C# and its libraries continue to evolve and keep pace with technology mostly independently of Microsoft investment, so why would that be different in 5 or more years?

 

C# is not only the .NET Framework standard which Microsoft created, but one of the ECMA standard languages which anyone can and does use independently of Microsoft support. C# development has truly taken on a life of its own.

 

 

So I advise developers to stop messing around with novelty languages and use the standard C++ language, so your customers can still play your games in a few years' time once platforms have changed. Otherwise I find it a tad careless, tbh...

 

 

No single standard language exists across the whole field of development. C# dominates in some segments of the development industry and C++ does in others, while another language may dominate in a narrower niche.

 

C# is widely accepted, widely used, and massively invested in, to the tune of billions of dollars. C# is no novelty language.

 

These languages will be widely used for many years: C, C++, C#, Java, Python, and Lua. It would not surprise me at all if they stayed in common use 10 or 20 years from now, after the next big language is introduced, as happened when C# was published.

 

 

C# and its libraries are evolving to reach both higher and lower in coding, but with far less convoluted growth than C++ before it. Much of this has to do with tighter standardization of C# than occurred earlier in C++'s lifetime. Industry cooperation and standards made this possible, but many people aren't aware of that oversight.

 

C# improvements are a direct result of industry associations and their standards.

 

 

so your customers can still play your games in a few years' time once platforms have changed.

 

 

Cross-platform implementations of C# exist which will allow current games made with it to be played years in the future on new systems, and also to still be playable on older ones. Both non-Microsoft and Microsoft APIs exist which allow this. The same is true for all the other major languages, by the way.

 

The language itself is not the issue with being cross-platform, backwards compatible, and forwards compatible: the programmer's skill in the use of APIs is the core of such cross-platform implementation.

 

Take the same game source code, appropriately written, and the developer can use APIs to make it run on any framework of that language. Lawsuits against Microsoft, Apple, Sony, Google, and other companies by governments and private parties have ensured that this will be the situation for many years to come.

 

To say that oranges will disappear because the next hybrid apple appears is nonsense.

 

 

Clinton


Personal life and your private thoughts always affect your career. Research is the intellectual backbone of game development and the first order. Version Control is crucial for full management of applications and software. The better the workflow pipeline, the greater the potential output for a quality game. Completing projects is the last but finest order.

 

by Clinton, 3Ddreamer


#47 Hodgman   Moderators   -  Reputation: 27031


Posted 29 December 2012 - 08:21 PM

I see people talking about programmer productivity a lot, but I really wonder if this is really noticeable.

It is absolutely noticeable. Despite my reputation as a C++ hater, I spent about a decade using it as my primary language. Just switching to C# provided me about an order of magnitude productivity increase.

It depends on the type of work you're doing.
For my engine's tool-chain, I use C#, because it really is easy to just get stuff done™ with it, but for the engine runtimes (which are more "systems programming" than "application programming", to use a shaky generalization) I'm more productive using C++, because C# code gets really ugly when doing systems-level tasks, while C++ makes it easy (or, is just as ugly as usual (-;)
e.g. I just posted some C++ code in a thread about optimized renderers -- writing that same code in C# would be a ton uglier and would take me a lot longer to write.
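For reference, here is a hypothetical sketch of that kind of code -- not the actual code from that thread -- with plain-data command structs dispatched through a function-pointer table instead of virtual calls:

```cpp
#include <cstddef>
#include <cstdint>

// Plain-data command packets stored contiguously, dispatched through a
// small function-pointer table rather than per-object virtual calls.
struct Device  { /* platform-specific state */ };
struct Command { uint32_t type; uint32_t args[3]; };

typedef void (*CommandFn)(Device&, const Command&);

static void drawCall(Device&, const Command&)    { /* issue draw */ }
static void bindTexture(Device&, const Command&) { /* bind texture */ }

static const CommandFn s_dispatch[] = { &drawCall, &bindTexture };

void submit(Device& dev, const Command* cmds, size_t count)
{
    // The command stream is one flat array, so iterating it is
    // cache-friendly; 'type' is assumed to be a valid table index.
    for (size_t i = 0; i < count; ++i)
        s_dispatch[cmds[i].type](dev, cmds[i]);
}
```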



#48 Telastyn   Crossbones+   -  Reputation: 3712


Posted 29 December 2012 - 09:08 PM

Enh, that code wouldn't be uglier in C#. You would still have the structs for device/command and still have the array for the variable behavior. The issue would be that C# delegates don't have the same performance characteristics as the function pointer, meaning you don't gain your cache benefits.

That said, that sort of virtual dispatch optimization is right in the wheelhouse for things that JIT'ed languages can optimize that C++ can't.

#49 3Ddreamer   Crossbones+   -  Reputation: 2902


Posted 29 December 2012 - 10:06 PM

The OP (and many others) have just made the assumption that C# is slower or more cumbersome. I heard the same thing in the '90s, that C++ was bloated and more cumbersome than C. I read the same arguments in the '80s, that C was painfully slow and could never replace the skilled assembly-writing artisan.

The question has never been "will C++ be replaced", but "when". I believe we passed the tipping point a few years ago. It is now more difficult to get a seasoned C++ developer than to get a seasoned C# developer who is also more productive overall than that C++ developer.

 

I totally agree here.

 

The skill of the developer/programmer has more effect on performance than any other aspect of game development. Since C# programmers are growing in numbers, we can see where this is going. The more programmer-friendly nature of C# means that the numbers and experience of C# developers will continue to increase. Hardware advances will seal the deal: C# development will be competitive with, or outperform, C++ development when C# experience is combined effectively with the hardware performance increases of the coming years.

 

We are already seeing hardware and systems architecture taking into account the advantages of managed languages and their increase in popularity, with features such as "auto-threading" and "auto-caching".

 

Clinton


Personal life and your private thoughts always affect your career. Research is the intellectual backbone of game development and the first order. Version Control is crucial for full management of applications and software. The better the workflow pipeline, the greater the potential output for a quality game. Completing projects is the last but finest order.

 

by Clinton, 3Ddreamer


#50 Hodgman   Moderators   -  Reputation: 27031


Posted 29 December 2012 - 11:40 PM

Enh, that code wouldn't be uglier in C#. The issue would be that C# delegates don't have the same performance characteristics as the function pointer, meaning you don't gain your cache benefits.
Then it's not *the same* code. I'm talking about writing the same kind of low-level code where you're manually optimizing for cache-misses and load-hit-stores and branch-mispredictions and whatnot. Modern versions of C# have the tools to do this, but it's quite a deviation from the typical C# style.

If the task at hand is concerned with these kinds of details, then C++ is a more productive language to be writing in.
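As a hypothetical illustration (the names are invented), optimizing for cache misses at the source level often means splitting hot and cold data so the per-frame loop streams through contiguous arrays:

```cpp
#include <cstddef>

// Structure-of-arrays layout: the fields the hot loop touches are
// packed contiguously, so every cache line fetched is fully used.
struct Particles
{
    float* posX; float* posY; float* posZ;  // hot: updated every frame
    int*   material;                        // cold: rarely touched
    size_t count;
};

void integrate(Particles& p,
               const float* velX, const float* velY, const float* velZ,
               float dt)
{
    // Each array is walked linearly, which the hardware prefetcher
    // handles well; no pointer-chasing, no virtual dispatch.
    for (size_t i = 0; i < p.count; ++i)
    {
        p.posX[i] += velX[i] * dt;
        p.posY[i] += velY[i] * dt;
        p.posZ[i] += velZ[i] * dt;
    }
}
```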

Edited by Hodgman, 30 December 2012 - 02:02 AM.


#51 kunos   Crossbones+   -  Reputation: 2149


Posted 30 December 2012 - 01:23 AM

Interesting example, Hodgman.

But there is some missing data here. How many FPS did this wizardry give to your game? And how many days were added to the project by choosing a language that allows you to get to that kind of wizardry? Are we sure that that FPS improvement is worth more than the delay to the game's release? Or, if we decide not to treat those days as an earlier release but as dev time, are you sure that those days wouldn't have brought a similar algorithmic improvement? And are you sure the original problem would have been there in the first place?

 

The point here is that nobody REALLY has the answer to those questions... so it always comes down to an a priori decision that each software team needs to make when the project starts.

For the first 3-4 months of development of my current game, I had fun maintaining a C#-only parallel implementation of the entire graphics engine during my weekends... for as long as I could keep playing with it, it was actually a tiny tad faster than the C++ version. Would that be the case now? With much more pressure on the system? Who knows? It's all very unpredictable...

 

I always use an analogy with race cars: you can have a race car set up on the edge, POTENTIALLY very fast but tricky to drive, or you can have a reasonably good setup, POTENTIALLY slower but easier to drive and push hard. Are you really sure car 1 is coming in first at the end of the race?


Stefano Casillo
Lead Programmer
TWITTER: @KunosStefano
AssettoCorsa - netKar PRO - Kunos Simulazioni

#52 Hodgman   Moderators   -  Reputation: 27031


Posted 30 December 2012 - 02:10 AM

But there is some missing data here. How many FPS did this wizardry give to your game? And how many days were added to the project by choosing a language that allows you to get to that kind of wizardry? Are we sure that that FPS improvement is worth more than the delay to the game's release?

The biggest optimization at this level of the code-base took some typical C++ code that was taking 8ms and reduced its cost down to just 0.5ms (and that's without using any parallelization, which was also possible) -- taking us from well <30fps to comfortably >30fps, which is all that mattered, as we were vsync'ed to 30Hz.
 
Just the core engine routines were written at this level of C++, by a very small team of expensive C++ programmers. The actual game itself was written by a much larger team in Lua (due to the productivity benefits!), with a budget of 16ms of CPU time on the main core per frame for all Lua code. Whenever this budget was breached (which happened often), some expensive Lua code would be ported over to optimized C++ code instead. These optimizations weren't delaying the release -- they were necessary to be able to release a playable product at all!


Edited by Hodgman, 30 December 2012 - 02:14 AM.


#53 MichaBen   Members   -  Reputation: 481


Posted 30 December 2012 - 03:32 AM

 It's trying to juggle the code so that you're sure to delete things that need deleting and the pointers to them get there. That overhead is not trivial.

 

If you are constantly juggling with deleting objects, then you are doing something seriously wrong, and it's hardly fair to blame the language for that. In a well-designed C++ program, it's fairly obvious where something has to be deleted. However, if you start returning pointers from functions and expect the caller to delete them at some point, or pass pointers as parameters to functions and expect those functions to delete them for you, then yes, you will be juggling with deletes for some time. But doing that kind of reckless coding will result in unmanageable code in every language. If you have to rely on a garbage collector because your code is such a mess that you have no idea when to delete a pointer, you are doing it very wrong. Also, C++ does allow you to use unique_ptr for that specific job. So again, it's not really fair to blame a language for being counterproductive because you chose not to use some 'safer' option.
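For illustration, a minimal sketch of the unique_ptr approach (the NPC type here is hypothetical):

```cpp
#include <memory>

struct NPC { /* game data */ };

// Ownership is explicit in the signature: the caller receives sole
// ownership, and the NPC is deleted automatically when its owner goes
// out of scope -- even if an exception is thrown along the way.
std::unique_ptr<NPC> spawnNPC()
{
    return std::unique_ptr<NPC>(new NPC());
}

void update()
{
    std::unique_ptr<NPC> npc = spawnNPC();
    // ... use *npc ...
}   // npc is deleted here, with no manual bookkeeping
```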



#54 shadowomf   Members   -  Reputation: 319


Posted 30 December 2012 - 05:28 AM

why do you believe that C# is so popular?

Because Java sucks. No, really: imagine a junior developer just finishing college. Of course, in college, Java was the language that everybody had to use.

Now, if I put myself in that position and tried out C#, of course I would never want to go back to Java.

What's the next step? Put C# into the educational system, of course, and you get even more mediocre but also much cheaper developers.

 

but one of the ECMA standard languages

Well, ECMA makes its living by producing standards; of course they take the chance to create a standard when a big vendor offers them a complete product. You can always find someone to standardize something.

 

These languages will be widely used for many years: C, C++, C#, Java, Python, and Lua. It would not surprise me at all if they stayed in common use 10 or 20 years from now, after the next big language is introduced, as happened when C# was published.

Even longer, if you take maintenance into account. Look at COBOL or Fortran; they're still in use even today.

 

And don't forget about those markets where the vendor has total control and everybody is happy with vendor lock-in (SAP -> ABAP). Those languages will stay in use for decades.



#55 Telastyn   Crossbones+   -  Reputation: 3712


Posted 30 December 2012 - 10:04 AM

 It's trying to juggle the code so that you're sure to delete things that need deleting and the pointers to them get there. That overhead is not trivial.

 
If you are constantly juggling with deleting objects, then you are doing something seriously wrong, and it's hardly fair to blame the language for that. In a well-designed C++ program, it's fairly obvious where something has to be deleted. However, if you start returning pointers from functions and expect the caller to delete them at some point, or pass pointers as parameters to functions and expect those functions to delete them for you, then yes, you will be juggling with deletes for some time. But doing that kind of reckless coding will result in unmanageable code in every language.

No, that is pretty much standard in any non-C/C++ language. Even C does that liberally with stack-allocated objects. I agree that well-designed C++ programs make it obvious where something has to be deleted. This is because a well-designed C++ program focuses on that rather than on what needs to be done. Other languages (including modern, smart-pointer-happy C++) aren't locked into designing their programs that way.

But honestly, that sort of thing isn't what I meant. I acknowledge that returning bald pointers is bad and an often-avoided practice. But even within a single method, properly cleaning up dynamically allocated objects without smart pointers in all of the different exception cases is tedious. It involves a lot of code that obfuscates what you're actually doing, a lot of code that fallible programmers will screw up, and a lot of code you wrote that doesn't actually advance your game towards completion. It's overhead.
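For illustration, a sketch of that overhead (the types here are hypothetical): without smart pointers, every exit path needs its own cleanup code:

```cpp
struct Foo {};
struct Bar {};
void doWork(Foo&, Bar&) { /* may throw */ }

void process()
{
    Foo* a = new Foo();
    Bar* b = 0;
    try
    {
        b = new Bar();
        doWork(*a, *b);
    }
    catch (...)
    {
        delete b;   // cleanup duplicated for the error path...
        delete a;
        throw;
    }
    delete b;       // ...and repeated for the success path
    delete a;
}
```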


Also, C++ does allow you to use unique_ptr for that specific job. So again, it's not really fair to blame a language for being counterproductive because you chose not to use some 'safer' option.

I was ignoring smart pointers since you deemed them the 'solution for the symptom'. I fail to see how it is not the language's fault that we had to build crutches for it. That we now have to spend time worrying about what smart pointer is appropriate here, how to handle libraries (that invariably use bald pointers) safely...

And to be blunt, pointers aren't a big deal as far as productivity goes. But to say that GCs just make up for poor design is naive at best.

#56 samoth   Crossbones+   -  Reputation: 4466


Posted 30 December 2012 - 10:54 AM

I was ignoring smart pointers since you deemed them the 'solution for the symptom'. I fail to see how it is not the language's fault that we had to build crutches for it. That we now have to spend time worrying about what smart pointer is appropriate here

 

Ah, but that is a bit unfair towards C++. C++ was primarily designed not to force you into using something you don't need, and not to add additional clutter or "secret magic stuff". But, at the same time, it was designed to allow you to use the "extra stuff" when you need it.

 

Smart pointers (at least one type of smart pointer) were part of the language pretty much forever. It's just that people would rather complain about how much C++ sucks instead of using them where appropriate. Agreed, auto_ptr was not perfect, but it was near-perfect for many occasions.

 

In the meantime, the language has evolved; the committee has realized the shortcomings of auto_ptr and has improved the library, and not only in that respect. Is it more complicated? Of course it sometimes is, but only because the underlying ownership issues that you create are. The smart pointers, however, do exactly what they should do, nothing in addition, and nothing less. Do they require a programmer brain? Well yes, but what's the issue with that...

 

You can usually make a thousand lines' worth of code well-behaved and guaranteed leak-free (also in the presence of exceptions) using one or two smart pointers, with minimal, usually not measurable, added overhead. The externally observable effect is just as good as having everything "managed", but without the extra overhead of making the secret magic work everywhere. And this is just what C++ is about. It only makes you pay when you ask for it.

 

Now, one can prefer one approach or the other, one design or the other; that is a matter of taste (and application). But I would not use wordings like "cure for the symptom" or "crutch" for the memory management in C++, because that is really not what it is. It's a deliberate design decision.

 

If you don't agree with that decision, that's fine -- you should really use C# then (or Python); it has a different design. But that doesn't automatically mean that everything else is crap just because it isn't designed after exactly the same principles.

 

Did it take over a decade to improve C++? Did this annoy the hell out of many people (including me)? Of course, but that is what design-by-committee is about. A design committee always makes any kind of process lengthy and complicated; that's just the way it is. It's no miracle that someone at the top (1-2 people) dictating what shall be done gets results in considerably less time than 40 or 50 board members meeting and discussing and trying to make every single one happy. This doesn't mean that what finally came out of it is necessarily bad, however.



#57 Chad Smith   Members   -  Reputation: 1041


Posted 30 December 2012 - 11:23 AM

I always use an analogy with race cars: you can have a race car set up on the edge, POTENTIALLY very fast but tricky to drive, or you can have a reasonably good setup, POTENTIALLY slower but easier to drive and push hard. Are you really sure car 1 is coming in first at the end of the race?

 

It'd depend on the driver in the car. If the driver in car 1 is just a better driver, the chances of him winning are more than likely higher. If you have the same driver for each car, then it'd depend on the driver's driving style (my whole family grew up around race cars, with some of my family actually racing pretty high up).

 

So by using that analogy with the code I'd say:

It'd depend on the programmer. If the "better" programmer is working on code base 1, then the chances are more than likely higher.



#58 Telastyn   Crossbones+   -  Reputation: 3712


Posted 30 December 2012 - 11:24 AM

Ah, but that is a bit unfair towards C++. C++ was primarily designed not to force you into using something you don't need, and not to add additional clutter or "secret magic stuff". But, at the same time, it was designed to allow you to use the "extra stuff" when you need it.

Which has been found to hinder productivity greatly for the general case, and not really provide that much benefit as far as optimization goes. Hence the thread.
Smart pointers (at least one type of smart pointer) were part of the language pretty much forever.

Standardized in 1998, added ~1992 - about a decade after the language was released. And let's just say that adoption didn't really take place anywhere near those dates...
Do they require a programmer brain? Well yes, but what's the issue with that...

Which is overhead that is placed onto the programmer, leading to the sort of productivity losses in my original post. I'm not saying it's some horrible roadblock; I'm saying it's an inefficiency that affects every (non-trivial) thing you do in the language.
You can usually make a thousand lines' worth of code well-behaved and guaranteed leak-free (also in the presence of exceptions) using one or two smart pointers, with minimal, usually not measurable, added overhead.

Not in my experience. If you're doing any sort of OO programming, you're going to need polymorphism. Unless that thousand lines' worth of code is simply bloat from static polymorphism (template metaprogramming), there are pointers of some sort in there somewhere; a bundle of references to known types isn't going to cut it.
Now, one can prefer one approach or the other, one design or the other; that is a matter of taste (and application).

I disagree. It should be pretty conclusive at this point that (for the general case) 'pay for what you use' has decided detriments to productivity that arise from adapting the limited functionality to the different 'paid' functionality without providing meaningful optimization/performance benefits (in the general case).

You're right. That design decision doesn't make it all crap. Design by committee doesn't necessarily make things bad.

Like I said, having pointers in the language isn't even close to the biggest productivity losses compared to others. They're damned useful when you need them.

But that wasn't the argument I was addressing; it was that a garbage collector is a band-aid for poor design. Which is crap.

#59 3Ddreamer   Crossbones+   -  Reputation: 2902


Posted 30 December 2012 - 02:00 PM

And don't forget about those markets where the vendor has total control and everybody is happy with vendor lock-in (SAP -> ABAP). Those languages will stay in use for decades.

 

That's exactly right, and we also know, more specifically, that such vendors use library extensions to fill the demands upon languages in ways more suitable to them, which is why C# is going to gain on C++ in many areas. Using a newer language such as C# eliminates much of the coding stagnation of older C++ libraries and the biases around them, and also forces innovation in keeping with the demand for more cost-effective program development on newer systems, which has already been covered in this thread.

 

An independent and skilled programmer has much freedom to use C#, C++, or a combination of them, but I do not envy those corporate programmers working under time and resource constraints, even if they chose the easier C#. ;)

 

 

Clinton


Personal life and your private thoughts always affect your career. Research is the intellectual backbone of game development and the first order. Version Control is crucial for full management of applications and software. The better the workflow pipeline, the greater the potential output for a quality game. Completing projects is the last but finest order.

 

by Clinton, 3Ddreamer


#60 Hodgman   Moderators   -  Reputation: 27031


Posted 30 December 2012 - 07:03 PM

I disagree. It should be pretty conclusive at this point that (for the general case) 'pay for what you use' has decided detriments to productivity that arise from adapting the limited functionality to the different 'paid' functionality without providing meaningful optimization/performance benefits (in the general case).

Yeah, in the general case (whatever that is, I imagine writing corporate GUIs...), C++ isn't the most productive language, especially for junior staff to be using (they can actually be reducing instead of increasing the project's progress...)

 

However, "pay for what you use" is exactly what makes C++ the most productive choice for the small sub-set of problems where it is (one of) the most productive language to choose from.

e.g. If it didn't have manual memory management, then it would be a very unproductive choice in memory-constrained embedded systems. Manual memory management (the fact that memory isn't abstracted away and a GC isn't forced upon you) is a key feature of the language that enhances its productivity (in the specific case)!

 

But I would not use wordings like "cure for the symptom" or "crutch" for the memory management in C++ because that is really not what it is. It's a deliberate design decision.

QFE -- for a particular class of situations, it's a very, very useful decision.

 

 

Screw the 'general case'; I'm an engine programmer. I still measure system RAM in MiB, not GiB, and running out of RAM is a real concern, so I need to be able to track every single byte that's used. I need to be able to look at my high-level code and have a pretty good guess at what kind of assembly will be generated from it (and be able to debug with the two interleaved to confirm those guesses). I need to be able to treat memory as bytes and use memcpy. I need to be able to read data off disk and cast it to known structures without parsing/deserializing it. I have to make a lot of low-level optimizations, and make heavy use of the "pay for what you use" idea.
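As a hypothetical sketch of that last point (the format and names are invented), loading a blob and casting it to a known structure instead of parsing it:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>

// Fixed POD on-disk layout: the bytes on disk *are* the in-memory
// structure, so loading is a read plus a cast -- no deserialization.
struct AssetHeader
{
    uint32_t magic;
    uint32_t numMeshes;
    uint32_t dataOffset;
};

const AssetHeader* loadAsset(const char* path, void* buffer, size_t size)
{
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return 0;
    size_t bytesRead = std::fread(buffer, 1, size, f);
    std::fclose(f);
    if (bytesRead < sizeof(AssetHeader)) return 0;

    // Reinterpret the raw bytes directly as the known structure.
    const AssetHeader* header = static_cast<const AssetHeader*>(buffer);
    return (header->magic == 0x31545341u) ? header : 0;  // "AST1" tag
}
```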

I also need to be able to support a productive "game programming" language, such as Lua or C#, but that comes after getting the foundations right. ;)


Edited by Hodgman, 30 December 2012 - 07:16 PM.




