Pointer help

15 comments, last by Tom Sloper 1 year, 1 month ago

Juliean said:

Sure. And using raw new/delete is guaranteed to lead to memory leaks, double deletions, unexpected behaviour etc… So, what's worse? Personally, I'd just do

That's like saying there are no applications using new/delete that don't have serious bugs. You can get bugs in any code. I already outlined how you can easily get major memory leaks using std::shared_ptr (or any reference-counting pointer) if you aren't experienced enough to know where not to use them.

    // unique_ptr is move-only, so the vector can't be built from an initializer list;
    // push_back (or emplace_back) each element instead:
    std::vector<std::unique_ptr<Person>> person;
    person.push_back(std::make_unique<Person>(55, "Phil"));
    person.push_back(std::make_unique<Person>(61, "Sylvia"));
    person.push_back(std::make_unique<Person>(65, "Jeff"));

which is in my book just strictly superior in every way.

First, I think that to an average new C++ programmer coming into the language, this looks like spaghetti code. But assuming he masters things later, this still makes a large assumption about how you are going to use the data. Let's say you pass the first item into a function. That should be simple, right? You either have to break your uniqueness (using get(), I think?) or transfer ownership. Transferring ownership means you have to transfer it back when you come out, which is completely ugly. So let's say you simply get the pointer and pass it in. Now maybe the function saves it somewhere, so you are back to square one with possible bugs and all.
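
For illustration, the options being described look roughly like this (a sketch; Inspect and Consume are made-up names):

    #include <memory>
    #include <string>
    #include <utility>
    #include <vector>

    struct Person { int age; std::string name; };

    void Inspect(const Person*) {}                 // non-owning: caller keeps ownership
    std::unique_ptr<Person> Consume(std::unique_ptr<Person> p) { return p; } // owning

    void Demo(std::vector<std::unique_ptr<Person>>& people) {
        Inspect(people[0].get());                  // "break uniqueness": hand out the raw pointer
        people[0] = Consume(std::move(people[0])); // transfer ownership out, then back in
    }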

C++ is not safe to the same extent as GC languages are. Its main advantage is speed, but if you start making compromises on that, then it quickly becomes a worse choice than other languages where speed is paramount.

Note that before we get too sidetracked - I don't condone nor want to propose using std::shared_ptr excessively in any way. I rarely use it myself, unlike std::unique_ptr.

OK, I'll leave that topic then, but note that above I also covered a very simple case where std::unique_ptr can cause issues.

Gnollrunner said:
I'm willing to bet that if the OP had learned C or even old C++ the way it used to be taught, he never would have had to post this question here to solve a very simple issue. Not that I blame him for that.

See, that I don't understand. How can you say that, when the OP is obviously learning C++ from a book that was written way before for-each loops and the like existed?

In his initial post he's using vector, begin and end, which (correct me if I'm wrong) were not in the standard before C++11. None of this would I consider old school. Part of the issue of starting here is that you miss the lessons about pointers to pointers and a lot of other basic stuff. That's not to say that what you wrote is wrong. On the contrary, it looks pretty nice. However, when you learn it bottom-up you inherently think about what's really happening (note I'm not getting into compiler optimizations, since they generally should not change the result), so even if you do make a mistake like this, you quickly realize what you did and fix it yourself. I literally see these kinds of questions all over the place in forums. Are programmers getting worse? IMO no. It's the way you learn the language now.

BTW, I do use range-based loops a lot, but 90% of the time they are the ones I've implemented for my containers. I've met several programmers who didn't even realize you can do that, because they never thought to look below the magic.
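
For anyone curious what "below the magic" means here: range-based for only requires begin()/end(), so a home-grown container might look roughly like this (a sketch; Fixed is a hypothetical name):

    #include <cstddef>

    // A hypothetical fixed-size container; range-based for needs only begin()/end()
    template <typename T, std::size_t N>
    struct Fixed {
        T items[N];
        T*       begin()       { return items; }
        T*       end()         { return items + N; }
        const T* begin() const { return items; }
        const T* end()   const { return items + N; }
    };

    // Usage: Fixed<int, 3> f{{1, 2, 3}}; for (int x : f) { /* ... */ }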


Gnollrunner said:
That's like saying there are no applications using new/delete that don't have serious bugs. You can get bugs in any code. I already outlined how you can easily get major memory leaks using std::shared_ptr (or any reference-counting pointer) if you aren't experienced enough to know where not to use them.

No, it's just dramatically easier to introduce bugs when using new/delete, as forgetting to delete something won't even be noticed until you either run out of memory or end up having some resource locked permanently. It's also about all the places where you have to write the deletes. When a class member is a raw pointer that has to be deleted, you have to write that deletion in the destructor, the copy-assignment operator and the move-assignment operator (if your class has those). With exceptions, you always have to explicitly write some catch that deletes a temporary variable at the point where it's currently “owned”.
unique_ptr just takes care of that automatically.
So it's not about having bugs or not, it's about how easy it is to have bugs vs. avoiding them automatically.
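
As a sketch of that point (Widget is a placeholder type), compare a raw owning member with a unique_ptr member:

    #include <memory>

    struct Widget {};

    class Manual {                   // raw owning pointer: every special member must clean up
        Widget* w = nullptr;
    public:
        Manual() : w(new Widget) {}
        ~Manual() { delete w; }
        Manual(const Manual&) = delete;            // or deep-copy, remembering to delete the old one
        Manual& operator=(const Manual&) = delete;
        Manual(Manual&& o) noexcept : w(o.w) { o.w = nullptr; }
        Manual& operator=(Manual&& o) noexcept {   // must delete before stealing
            if (this != &o) { delete w; w = o.w; o.w = nullptr; }
            return *this;
        }
    };

    class Automatic {                // unique_ptr member: all of the above comes for free
        std::unique_ptr<Widget> w = std::make_unique<Widget>();
    };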

Gnollrunner said:
First, I think that to an average new C++ programmer coming into the language, this looks like spaghetti code. But assuming he masters things later, this still makes a large assumption about how you are going to use the data. Let's say you pass the first item into a function. That should be simple, right? You either have to break your uniqueness (using get(), I think?) or transfer ownership. Transferring ownership means you have to transfer it back when you come out, which is completely ugly. So let's say you simply get the pointer and pass it in. Now maybe the function saves it somewhere, so you are back to square one with possible bugs and all.

I don't see an issue here. unique_ptr doesn't strive to solve all the problems related to passing pointers/references to another function. It just replaces all your new/deletes. That's it. The rest of your application can stay the same. Yes, that means it doesn't remove the danger of dangling references, but that's OK. I don't think it's intended to do that. At least I don't treat it like that. It removes a lot of the bugs and overhead associated with manual memory management, while keeping the performance/freedom of being able to just pass raw pointers around.
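
A minimal before/after sketch of that idea (Enemy and Damage are invented for the example):

    #include <memory>

    struct Enemy { int hp = 100; };
    void Damage(Enemy* e) { e->hp -= 10; }   // untouched: still takes a raw, non-owning pointer

    void OldStyle() {
        Enemy* e = new Enemy;                // manual: you must remember the delete on every path
        Damage(e);
        delete e;
    }

    void NewStyle() {
        auto e = std::make_unique<Enemy>();  // freed automatically, even on exceptions
        Damage(e.get());                     // the rest of the code stays the same
    }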

Gnollrunner said:
C++ is not safe to the same extent as GC languages are. Its main advantage is speed, but if you start making compromises on that, then it quickly becomes a worse choice than other languages where speed is paramount.

C++ does have one main appeal, though, which I appreciate a lot: zero-overhead abstractions. Yeah, things are rarely completely zero-overhead, but things like std::unique_ptr are as close as it gets. There is maybe one additional instruction for non-inlined sink functions (where you need to transfer ownership), and no overhead at all when using the object stored in a std::unique_ptr vs. the raw pointer. So performance in this regard is a non-issue.
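
A sketch of such a sink function, under the usual assumptions (Texture and Cache are placeholder names):

    #include <memory>
    #include <utility>

    struct Texture {};

    class Cache {
        std::unique_ptr<Texture> t;
    public:
        // Sink: takes ownership by value; the transfer is roughly one pointer copy
        // plus nulling the source
        void Adopt(std::unique_ptr<Texture> tex) { t = std::move(tex); }
        // Using the stored object costs exactly what a raw Texture* would
        Texture* Get() const { return t.get(); }
    };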

Similarly, range-based fors actually have negative overhead most of the time, as I rarely see people manually cache the “end” iterator, and many people even use post-increment - something that loop gets right automatically.
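
Roughly, the comparison being made (a sketch, not measured numbers):

    #include <vector>

    long SumManual(const std::vector<int>& v) {
        long total = 0;
        // Typical hand-written loop: end() re-evaluated each iteration, post-increment used
        for (auto it = v.begin(); it != v.end(); it++)
            total += *it;
        return total;
    }

    long SumRanged(const std::vector<int>& v) {
        long total = 0;
        for (int x : v)   // the compiler caches end() once and uses pre-increment by definition
            total += x;
        return total;
    }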

So sure, there are things you can use that make code slower, like reference counting, but for the things I mentioned applying to the OP's case, that is not the case.

Gnollrunner said:
In his initial post he's using vector, begin and end, which (correct me if I'm wrong) were not in the standard before C++11. None of this would I consider old school.

Yes, you are wrong. Iterators have been around for about as long as C++ has existed. I originally learned the language back in 200X, and all the books taught iterators as the main way to iterate over a container.
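
For reference, the pre-C++11 idiom those books taught looks like this:

    #include <cstdio>
    #include <vector>

    // C++98-era iteration: explicit iterator type, no auto, no range-based for
    void Print(const std::vector<int>& v) {
        for (std::vector<int>::const_iterator it = v.begin(); it != v.end(); ++it)
            std::printf("%d\n", *it);
    }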

Gnollrunner said:
However, when you learn it bottom-up you inherently think about what's really happening (note I'm not getting into compiler optimizations, since they generally should not change the result), so even if you do make a mistake like this, you quickly realize what you did and fix it yourself. I literally see these kinds of questions all over the place in forums. Are programmers getting worse? IMO no. It's the way you learn the language now.

There are two main gripes I have with this:

a) At what point do we draw the line? If what you say is true, then even using old-school C++ still does not give you the full picture. Unless you have learned ASM and looked at what the compiler produces and how that code is executed by the CPU, you don't fully understand what's going on in your code. But I'm not saying you should need to learn it. My point is rather that it doesn't matter to the average user. Especially a beginner.

b) It's also a matter of the amount of information you can realistically learn. Nobody is going to learn C++ and understand everything. In fact, I've seen lots of beginners in every language just try to find examples that closely fit what they want to do, then copy/paste and modify them. Why is that? It's because they are overwhelmed. And no, that's not overwhelmed by modern features (the same thing happened back in the day), it's overwhelmed by how complicated simple things like a loop are. Nobody is going to understand what that vector-iterator loop does when they first learn C++; nobody is even going to remember how to write it. Nobody will exactly understand the intricacies of dangling pointers, memory management etc… in the first program they write, just because they have to new/delete everything themselves.

That's my main point. I agree that it helps in C++ to know details, especially as an intermediate/expert, but you have to walk first to be able to run. And a lot of the modern tools help you do that. But then again, you believe that this makes it harder to work with the language, which I don't think is the case at all.

Juliean said:

No, it's just dramatically easier to introduce bugs when using new/delete, as forgetting to delete something won't even be noticed until you either run out of memory or end up having some resource locked permanently.

That's no different from having cycles of reference-counted pointers. You get a memory leak, and nothing pops up to tell you that you have one.
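
A minimal sketch of such a cycle (Node is a made-up type):

    #include <memory>

    struct Node {
        std::shared_ptr<Node> next;   // strong references in both directions form a cycle
    };

    void Leak() {
        auto a = std::make_shared<Node>();
        auto b = std::make_shared<Node>();
        a->next = b;
        b->next = a;   // both counts stay at 1 after scope exit: silent leak
    }                  // making one direction a std::weak_ptr would break the cycle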

It's also about all the places where you have to write the deletes. When a class member is a raw pointer that has to be deleted, you have to write that deletion in the destructor, the copy-assignment operator and the move-assignment operator (if your class has those). With exceptions, you always have to explicitly write some catch that deletes a temporary variable at the point where it's currently “owned”.
unique_ptr just takes care of that automatically.

First, my original argument was not that smart pointers are bad. My code is full of them, and I've been using them almost as long as I've been using C++, which is over 30 years. There is some overhead, which is not a big deal. If you are just using them inside a class, that's one thing. But once you start using them elsewhere, or passing around their data, you end up significantly changing some things. Let's stick to unique pointers. You can't pass them down to a function directly. You could pass a reference to them, but now you are dealing with another level of indirection every time you use them. And of course you can't copy them freely either. A GC handles all this without the programmer thinking about it, albeit with some performance penalty in many cases.


So it's not about having bugs or not, it's about how easy it is to have bugs vs. avoiding them automatically.

Unless you use them in a pretty restricted way, I don't think you are avoiding much automatically. Let's take your example. You have three Person objects. As long as you reference them through your person vector, you are fine. But if you get one Person object and pass it around, you either have to risk dangling pointers or suffer one level of dereferencing every time you access it, since to be safe you would use a reference to your smart pointer. Maybe that's OK, or maybe not. But if you only learned C++ at a cursory level, you are not equipped to make a good choice. Worse yet (and I have seen this), programmers attempt to pass ownership around through every function call by using std::move, because someone convinced them that all pointers should be owning pointers to avoid the slightest risk of a bug. This, of course, comes at the cost of truly ugly, confusing code.
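
A sketch of the pattern being criticized (Rename is invented for the example):

    #include <memory>
    #include <string>
    #include <utility>

    struct Person { int age; std::string name; };

    // Ownership threaded through every call: the callee must hand the pointer back
    std::unique_ptr<Person> Rename(std::unique_ptr<Person> p, std::string n) {
        p->name = std::move(n);
        return p;
    }

    void Caller(std::unique_ptr<Person>& slot) {
        slot = Rename(std::move(slot), "Phil");  // move out, move back in, at every call site
        // versus the plain alternative: slot->name = "Phil"; or a function taking Person&
    }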

So again I'm not against using unique_ptr, but I think you need to understand what it's doing so you can use it wisely.

I don't see an issue here. unique_ptr doesn't strive to solve all the problems related to passing pointers/references to another function. It just replaces all your new/deletes. That's it. The rest of your application can stay the same. Yes, that means it doesn't remove the danger of dangling references, but that's OK. I don't think it's intended to do that. At least I don't treat it like that. It removes a lot of the bugs and overhead associated with manual memory management, while keeping the performance/freedom of being able to just pass raw pointers around.

I'm going to submit that dangling pointers cause far more bugs than forgetting to free something. Sure, that's a memory leak, but there are even tools to find those. Dangling pointers are not as easy to track down and are just as serious; worse, I'd say.
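
A two-line sketch of why they're nasty (compiles cleanly, fails only at runtime):

    #include <memory>
    #include <vector>

    struct Person { int age = 0; };

    void Dangle() {
        std::vector<std::unique_ptr<Person>> people;
        people.push_back(std::make_unique<Person>());
        Person* first = people[0].get();  // non-owning view into the container
        people.clear();                   // the owner destroys the Person here
        // first->age = 30;               // use-after-free: no compiler error, no leak-tool report
    }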

C++ does have one main appeal, though, which I appreciate a lot: zero-overhead abstractions. Yeah, things are rarely completely zero-overhead, but things like std::unique_ptr are as close as it gets. There is maybe one additional instruction for non-inlined sink functions (where you need to transfer ownership), and no overhead at all when using the object stored in a std::unique_ptr vs. the raw pointer. So performance in this regard is a non-issue.

Again, I don't really have an issue with unique_ptr; it's mainly with shared_ptr, and that's mainly because of the implementation. I have my own version of unique_ptr, but that's because my library has had it for years.

There are two main gripes I have with this:

a) At what point do we draw the line? If what you say is true, then even using old-school C++ still does not give you the full picture. Unless you have learned ASM and looked at what the compiler produces and how that code is executed by the CPU, you don't fully understand what's going on in your code. But I'm not saying you should need to learn it. My point is rather that it doesn't matter to the average user. Especially a beginner.

A beginner won't be a beginner forever. If you teach them things like “use shared_ptr everywhere”, you are teaching them something that can, and likely will, break their code at some point. Even if it doesn't, they will incur reference counts all over the place. If you learn C++ bottom-up, that will stick in your head and allow you to make design decisions accordingly, even if you are using modern C++. You don't need to understand the ins and outs of every architecture, but most computers are similar these days, so understanding things at the C level is pretty beneficial.
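
For instance (a sketch; ByValue and ByRef are invented names), every by-value shared_ptr parameter costs an atomic increment and decrement:

    #include <memory>

    struct Config {};

    void ByValue(std::shared_ptr<Config>) {}   // copying bumps the atomic refcount both ways
    void ByRef(const Config&) {}               // no ownership implied, no refcount traffic

    void Caller(const std::shared_ptr<Config>& cfg) {
        ByValue(cfg);   // atomic increment on entry, decrement on return
        ByRef(*cfg);    // just passes the address
    }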

b) It's also a matter of the amount of information you can realistically learn. Nobody is going to learn C++ and understand everything. In fact, I've seen lots of beginners in every language just try to find examples that closely fit what they want to do, then copy/paste and modify them. Why is that? It's because they are overwhelmed. And no, that's not overwhelmed by modern features (the same thing happened back in the day), it's overwhelmed by how complicated simple things like a loop are. Nobody is going to understand what that vector-iterator loop does when they first learn C++; nobody is even going to remember how to write it. Nobody will exactly understand the intricacies of dangling pointers, memory management etc… in the first program they write, just because they have to new/delete everything themselves.

That's my main point. I agree that it helps in C++ to know details, especially as an intermediate/expert, but you have to walk first to be able to run. And a lot of the modern tools help you do that. But then again, you believe that this makes it harder to work with the language, which I don't think is the case at all.

I guess my argument to that is: I started learning C++ in the late '80s/early '90s. At that time, it was just another language. People came in from C, Fortran, etc. Nobody really complained about how hard it was, and if I was asked to help debug something, it was typically a reasonably difficult bug to find. Now I see constant cries for help, even for things that should be pretty simple. Modern C++ is an attempt to tack whiz-bang features onto an inherently low-level language. That works well enough if you understand it at least at the C level. But many people don't. And I can see the difference, because I have tutored a lot of people, past and present.

fleabay said:

I never knew that ‘rathole’ was synonymous with ‘worthless’ (in the context of what this thread is supposed to be).

Well, the OP hasn't complained and everyone else is free to ignore it, so I don't see an issue. This is pretty much par for the course for many threads. Also, it hasn't reached the point of name-calling yet :-)

Please close this thread, I am getting too much commentary.

Yes, it did go down a rathole. Closing.

-- Tom Sloper -- sloperama.com

This topic is closed to new replies.
