# Why doesn't C++ have many features of modern languages like C#?

## Recommended Posts

44 minutes ago, Sound Master said:

What do you mean by linked-list cache misses?

The list nodes can, in theory, be scattered all over memory. If that's the case, successive accesses to the same list cause more cache misses whenever the distance between nodes is too far. Arrays, on the other hand, are always stored in a single contiguous block of memory.

##### Share on other sites
Posted (edited)
1 hour ago, _Silence_ said:

Why do you think that iterators are obsolete?

A few days ago, while optimizing part of a project, I removed an std::vector I was doing many push_back calls into and replaced it with a good old C-style array. And you know what? std::vector was far faster.

Unless you deal with big data (and few here do), std::vector will give you very good performance when used well.

Linked lists are obsolete because they:
* Create serial dependencies instead of allowing multiple memory loads in flight at the same time.
* Fragment the cache with pointers, possibly larger than the elements themselves.
* Cause cache misses when iterating over all the elements, because each element is a heap allocation at an unspecified memory location, while the hardware prefetcher wants linear order.
* Are difficult to split into multi-threaded workers.
* Are many times slower than arrays for common uses, regardless of size.

The creator of C++ gave an in-depth presentation about how linked lists have become horribly slow on modern hardware.

Iterators lost their original design purpose because of linked lists
They are an abstract interface for handling collections without knowing whether it's a linked list or an array under the hood. With linked lists out of the picture, you might as well specialize for the case of contiguous memory blocks and build faster, safer abstraction layers. The main problem with iterators is that they are essentially just pointers, and not the modern kind that has reference counting, zero-overhead safety features, or the ability to iterate in strides of many elements for SIMD operations.

I might be missing some favorite feature of yours, but from my perspective it's not worth the risks posed by using two raw pointers like that, the cost of porting between languages, and the learning curve for new C++ developers.

Regarding std::vector, it's possible that your compiler's implementation used calloc instead of malloc plus memset, which is faster because it avoids fetching garbage data into the cache.

Edited by Dawoodoz

##### Share on other sites
Posted (edited)

One of C++'s fundamental design philosophies is that the runtime cost of things should be minimized, even if that means omitting something that makes life easier for the programmer, if there is a chance that the convenience feature could lead to unexpected runtime overhead.

I work almost exclusively on C# projects, and our team is full of programmers who don't fully understand the ramifications of the language features they're using.  I see code where every single member access has been changed to use ?. even when the objects can be trivially shown to never be null at that point in execution.  Every single one of those gets expanded into a runtime null comparison and potentially complex branching.  I suspect that this being heavily abused is one of the reasons that C# 8.0 introduced support for non-nullable reference types, but I haven't had experience actually using that to see if it solves this abuse or not.

Of course it's always possible that programmers can misuse C++ to the same extent, but if the language has fewer convenience features then a programmer has less stuff to potentially abuse.  In my opinion C++ is better off if it sticks to its fundamental strengths and doesn't sacrifice those strengths trying to appease everyone.

Edited by Nypyren

##### Share on other sites
5 hours ago, Dawoodoz said:

Iterators lost their original design purpose because of linked lists
They are an abstract interface for handling collections without knowing if it's a linked list or array under the hood.

Definitely not the case.

There are still tons of very useful data structures other than linked lists which aren't represented as (purely) sequential elements in memory.

One exceedingly-common-in-games example: chunked arrays. An "array" that is a list of fixed-size chunks, where those chunks might be sequenced in another array or in (gasp) a linked list. This is a _very_ common data structure for storing _large_ "arrays" of items, since it allows much more efficient handling of memory blocks/pages and mutation of contents, and it's still very cache-friendly so long as the chunks are of an appropriate size. Such a data structure can still be random-access (if the chunks are stored in an array) and hence can still be sorted efficiently with std::sort, etc.

Even in the standard library itself, types like std::deque or std::map/std::unordered_map make use of iterators for such _basic_ functionality as "debug print all the values."

There's tons and tons and tons more uses here. Database query responses. File I/O. Lazy computation or generators. Filters and transformation pipelines. etc.

There's a very valid argument to make that the incoming ranges feature could, long-term, supersede iterators (since the idea is that almost every pair of iterators could be replaced by a range, and even most raw pointers can be replaced by std::span, for improvements in both ergonomics and safety). That's a very different discussion, though, and has absolutely nothing to do with linked lists.

##### Share on other sites
15 minutes ago, SeanMiddleditch said:

Definitely not the case.

There are still tons of very useful data structures other than linked lists which aren't represented as (purely) sequential elements in memory. [...]

I'd just map a lambda or something, like I do with octrees, but fair enough if you have a safer alternative.

##### Share on other sites

I think the standard library is outdated too, but in my opinion for reasons like code style, conventions, and the heavy nesting of templates. I know that all of this was/is useful and can't be changed easily without breaking backwards compatibility with old code, but nowadays I would prefer more modern naming. I know that such legacy systems often have these problems, especially on the web: HTML 4 is still supported in all modern browsers so as not to break the internet, and JavaScript still has old features that developers I've talked to would like to get rid of.

But on the other side, maybe it would be wise to take a break instead of forcing new features into the language and thus the standard library, like lambdas (they are prohibited where I work because they are a potential performance issue). What's wrong with writing Vector instead of vector, or SharedPtr, or whatever, or having a C#-style generic list instead? I know everybody has their preferred style, but as languages like C# are widely used these days, even those styles have changed since C++ was first introduced, and most coding guidelines prefer title case.

I also use iterators very rarely, because I got to the point that, in our case, a good old for loop did the trick too. It's like IEnumerator/IEnumerable in C#: there are use cases for them, but you won't use them all the time. But I'm not strictly against iterators, nor do I think they are outdated. I even implemented one myself, for our reflection system, to iterate over the list of member fields/functions. As I mentioned, they are like their C# siblings and can be used whenever simple index-based iteration is not possible.

15 hours ago, _Silence_ said:

replaced it with a good old C-style array. And you know what? std::vector was far faster

Don't know how you used vector in your code, but if you used push_back all the time, then without deeper knowledge of how you replaced the old code in your project with a C-style array, I'd guess it was slow because memory was allocated too often, on every addition of a new element!?

The benefit of vector is that it is a growing array (something between a plain C-style array and C#'s generic List), so on a regrow, more memory is allocated than you actually use. The capacity grows every time the vector runs out of memory, so allocations are frequent during the first few pushes and become less frequent later. But I think you already know that, don't you?

If used correctly, as a static block of memory, C-style arrays should beat vector in performance.

##### Share on other sites
3 hours ago, Shaarigan said:

If used correctly, as a static block of memory, C-style arrays should beat vector in performance.

If used correctly, as a single-allocation vector (capacity known at creation time, same as an array), a vector will always be faster than an array, because optimizing the data flow of memory pointed to by bare pointers is hard.  We had to disable the part of GCC that optimizes that, because under certain edge conditions it would lose track of the fact that memory was being written to and treat the pointer (array) as read-only as it travelled through function calls (e.g. a lambda reference capture in which it was the second capture and the first capture was larger than 12 bytes; the life of a compiler support engineer can be interesting).  The compiler always retains knowledge about a vector as it gets passed around, so that problem cannot happen and great optimization opportunities open up.

3 hours ago, Shaarigan said:

What's wrong with writing Vector instead of vector or SharedPtr or whatever

The C++ standard library was developed a generation before Microsoft's internal coding guidelines were even thought of.  Microsoft's style is based on the style of the Apple Toolbox they copied, which in turn was based on the Pascal language.  The Pascal conventions, developed in Europe, differed from the Unix and C conventions developed at Bell Labs in the USA in many ways, including the use of upper-case letters in identifiers and the use of non-alphanumeric characters that do not appear on localized keyboards in the respective countries (trigraphs, anyone?).  One is not objectively better than the other.

It's not really reasonable to dismiss something just because it's not what you're used to.  The answer to the question of what's wrong with "Vector" instead of "vector" is this: nothing, except that's not how it is.  A decision was made between arbitrary choices, in which one had the greater weight of tradition and consistency at the time, and trying to retrofit later social trends has no tangible benefit and great cost.

As for me, I can't stand capital case.  I didn't cut my teeth on Microsoft; I learned using C, Fortran, Algol, Bourne shell, and everything on Unix and VMS, all of which used all-lower-case.  Capital case is too unlike writing plain English, in which I capitalize only the first word of a sentence and some proper nouns. I do not Capitalize Common Nouns (identifiers) or any Verbs (functions).  Programs are literature intended for other readers, and forcing them to read Chaucer is undesirable.

##### Share on other sites
Posted (edited)

Thanks for the insight on vector optimizations over C-style arrays. I too thought until now that the latter were faster, simply because I didn't take into account the work a compiler does in the background.

-------

Can't comment on the Use or Abuse of coding Styles and Guidelines, in some natural Languages Nouns, Names, Places, etc. start with a capital Letter and the Rest is lower Case.

🙂

Edited by Green_Baron

##### Share on other sites
6 hours ago, Shaarigan said:

What's wrong with writing Vector instead of vector or SharedPtr or whatever, or have a C# style generic list instead?

It's uglier and harder to read, and doesn't translate at all to languages that have no upper/lower case distinction, and translates only imperfectly to many languages that do?

Not trying to start an argument here, just pointing out that for any mostly subjective question like this, no matter how strongly you feel about it, there will always be people who are just as passionate on the other side.  In the end, it's far more important that you have a consistent style and stick to it than what the style actually is.  Well, as long as the style doesn't include camelCase anyway.

##### Share on other sites

So you are saying C# is for multicore.

Does that mean that, now that multicore microcontrollers are being released, there will be a C# compiler? I don't think so.

Let's see what the future brings. Windows sucks anyway nowadays; please don't ruin my microcontrollers with your C#.

Keep your negative things and fantasy RPGs here on this site, please.
