However, "pay for what you use" is exactly what makes C++ the most productive choice for the small subset of problems where it is (one of) the most productive languages to choose from.
Certainly, which is why I said 'general case'. I agree it's a design decision. I just think that a design that benefits a "small subset of problems" to the detriment of the others is objectively a poor one in a general purpose programming language.
I frankly don't see how that is at all contentious, unless you're saying C++ isn't a general purpose programming language.
Ah, but that is a bit unfair towards C++. C++ was primarily designed not to force you into using something you don't need and not to add extra clutter or "secret magic stuff". But, at the same time, it was designed to allow you to use the "extra stuff" when you need it.
Which has been found to hinder productivity greatly for the general case, and not really provide that much benefit as far as optimization goes. Hence the thread.
Smart pointers (at least one type of smart pointer) were part of the language pretty much forever.
Standardized in 1998, added ~1992 - about a decade after the language was released. And let's just say that adoption didn't really take place anywhere near those dates...
Do they require a programmer brain? Well yes, but what's the issue with that...
Which is overhead placed onto the programmer, leading to the sort of productivity losses I described in my original post. I'm not saying it's some horrible roadblock; I'm saying it's an inefficiency that affects every (non-trivial) thing you do in the language.
You can usually make a thousand lines worth of code well-behaved and guaranteed leak-free (also in presence of exceptions) using one or two smart pointers, with minimal, usually not measurable, added overhead.
Not in my experience. If you're doing any sort of OO programming, you're going to need polymorphism. Unless that thousand lines worth of code is simply bloat from static polymorphism (template metaprogramming), then there are some sort of pointers in there somewhere; a bundle of references to known types isn't going to cut it.
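For concreteness, here is a minimal sketch of the claim about smart pointers and polymorphism together (the Shape/Square names are purely illustrative): one unique_ptr per owned object keeps polymorphic code leak-free even when exceptions are thrown, with no delete anywhere.

```cpp
#include <cassert>
#include <memory>
#include <stdexcept>
#include <vector>

// Hypothetical polymorphic hierarchy; names are illustrative only.
struct Shape {
    virtual ~Shape() = default;
    virtual int area() const = 0;
};

struct Square : Shape {
    int side;
    explicit Square(int s) : side(s) {}
    int area() const override { return side * side; }
};

// One smart pointer per owned object, no manual delete.
// If push_back or a constructor throws, everything already
// in the vector is freed automatically on unwind.
int total_area(const std::vector<int>& sides) {
    std::vector<std::unique_ptr<Shape>> shapes;
    for (int s : sides) {
        if (s < 0) throw std::invalid_argument("negative side");
        shapes.push_back(std::make_unique<Square>(s));
    }
    int total = 0;
    for (const auto& sh : shapes) total += sh->area();
    return total;  // unique_ptr destructors free every Shape
}
```

Note the vector holds pointers precisely because the elements are polymorphic; a container of values of a known type wouldn't allow mixed Shape subclasses.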
Now, one can like one approach or the other, one or the other design, that is a matter of taste (and application).
I disagree. It should be pretty conclusive at this point that (for the general case) 'pay for what you use' has decided detriments to productivity that arise from adapting the limited functionality to the different 'paid' functionality without providing meaningful optimization/performance benefits (in the general case).
You're right. That design decision doesn't make it all crap. Design by committee doesn't necessarily make things bad.
Like I said, having pointers in the language isn't even close to the biggest productivity losses compared to others. They're damned useful when you need them.
But that wasn't the argument I was addressing; it was that a garbage collector is a band-aid for poor design. Which is crap.
It's trying to juggle the code so that you're sure to delete things that need deleting and the pointers to them get there. That overhead is not trivial.
If you are constantly juggling deletes, then you are doing something seriously wrong, and it's hardly fair to blame the language for that. In a well designed C++ program, it's fairly obvious where something has to be deleted. However, if you start returning pointers from functions and expect the caller to delete them at some point, or pass pointers as parameters and expect the function to delete them for you, then yes, you will be juggling deletes for some time. But that kind of reckless coding will result in unmanageable code in any language.
No, that is pretty much standard in any non-C/C++ language. Even C does that liberally with stack-allocated objects. I agree that well designed C++ programs make it obvious where something has to be deleted. That is because a well designed C++ program focuses on ownership rather than on what actually needs to be done. Other languages (including modern, smart-pointer-happy C++) aren't locked into designing their programs around that.
But honestly, that sort of thing isn't what I meant. I acknowledge that returning bald pointers is bad and often avoided practice. But even within a single method, properly cleaning up dynamically allocated objects without smart pointers in all of the different exception cases is tedious. It involves a lot of code that obfuscates what you're actually doing, and a lot of code that fallible programmers will screw up, and a lot of code you wrote that doesn't actually advance your game towards completion. It's overhead.
Also, C++ does allow you to use unique_ptr for that specific job. So again, it's not really fair to blame a language for being counterproductive because you chose not to use the 'safer' option.
I was ignoring smart pointers since you deemed them the 'solution for the symptom'. I fail to see how it is not the language's fault that we had to build crutches for it. That we now have to spend time worrying about what smart pointer is appropriate here, how to handle libraries (that invariably use bald pointers) safely...
And to be blunt, pointers aren't a big deal as far as productivity goes. But to say that GC's just make up for poor design is naive at best.
Enh, that code wouldn't be uglier in C#. You would still have the structs for device/command and still have the array for the variable behavior. The issue would be that C# delegates don't have the same performance characteristics as the function pointer, meaning you don't gain your cache benefits.
That said, that sort of virtual dispatch optimization is right in the wheelhouse for things that JIT'ed languages can optimize that C++ can't.
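A sketch of the pattern being described (the Command/run names are hypothetical): a struct carrying data plus a plain function pointer for the variable behavior, packed in a flat array so iteration stays cache-friendly.

```cpp
#include <cassert>

// Hypothetical command/device pattern: data plus a function pointer
// for the variable behavior, stored contiguously in an array.
struct Command {
    int operand;
    int (*execute)(int operand, int state);  // variable behavior
};

int add_op(int operand, int state) { return state + operand; }
int mul_op(int operand, int state) { return state * operand; }

// Iterating a flat array of small structs touches memory linearly,
// which is where the cache benefit comes from.
int run(const Command* cmds, int count, int state) {
    for (int i = 0; i < count; ++i)
        state = cmds[i].execute(cmds[i].operand, state);
    return state;
}
```

A C# port keeps the same shape (structs in an array), but the delegate call replacing the function pointer is the part with different performance characteristics.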
I see people talking about programmer productivity a lot, but I really wonder if this is really noticeable.
It is absolutely noticeable. Despite my reputation as a C++ hater, I spent about a decade using it as my primary language. Just switching to C# provided me about an order of magnitude productivity increase.
For example you have an NPC referencing another NPC as target, and this NPC is then deleted.
Sure, you have to deal with that anyways, but more often than not this scenario isn't your problem. It's trying to juggle the code so that you're sure to delete things that need deleting and the pointers to them get there. That overhead is not trivial.
That is certainly one part of the productivity gains. Another is the ability to have a large, well-written and modern standard library.
But what really takes the cake is tooling. C++'s design is so antithetical to partial-evaluation that you can't even get decent error messages out of the thing, let alone intellisense or refactoring tools.
It won't be around in 5 years. It is not a continuation of C. It is a product from Microsoft and, like many of their products before it, it will be dropped once the next newest thing comes out.
Have you looked around recently? Java's neglect and the universal distrust of Oracle have neutered its use in new development that isn't on Android. Scala hasn't gained a foothold due to its dependence on the JVM (and hence, Oracle) and its over-complexity. C++ hasn't been used for business development for more than a decade (and no, C++/CX isn't going to help that since Windows 8 is being adopted by few people in mobile, and fewer still on the desktop). What else is there? Python? Not for Enterprise development. Objective-C? Not outside of iOS.
C# might not be popular in 5 years (and I expect it will be waning by then), but it will be because something superior comes to replace it. Until then, even Microsoft doesn't have that much clout.
The composite pattern requires that your composite object (that fulfills some interface) contain some (possibly 0, possibly variable) amount of other objects (that fulfill that same interface), allowing you to do the interface's operation with respect to the bundle of objects.
For example, a composite renderable might have a bunch of sprites and when you call render on the composite, it renders all of the sprites. Or when you call move, it moves all of the sprites. Or when you call 'Left' it returns the leftmost boundary of any of the sprites.
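A minimal sketch of that description (the Renderable/Sprite names are hypothetical; only the 'Left' operation is shown): the composite fulfills the same interface as its children and answers with respect to the whole bundle.

```cpp
#include <algorithm>
#include <cassert>
#include <climits>
#include <memory>
#include <vector>

// Hypothetical interface from the example above.
struct Renderable {
    virtual ~Renderable() = default;
    virtual int left() const = 0;  // leftmost boundary
};

struct Sprite : Renderable {
    int x;
    explicit Sprite(int x_) : x(x_) {}
    int left() const override { return x; }
};

// The composite implements the same interface as its children,
// so callers can't tell one sprite from a bundle of them.
struct CompositeRenderable : Renderable {
    std::vector<std::unique_ptr<Renderable>> children;  // possibly 0, possibly many
    int left() const override {
        int leftmost = INT_MAX;  // empty composite has no boundary
        for (const auto& c : children)
            leftmost = std::min(leftmost, c->left());
        return leftmost;
    }
};
```

Because children holds Renderable pointers, a composite can itself contain other composites, which is the point of the pattern.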
That's the biggest load of garbage I've read in a long time.
It is FAR more likely that you're going to overlook some bound or overflow some buffer than that you'll kneecap your game because you ran out of memory via the boogeyman of inefficient standard library structures.