
## unique_ptr, shared_ptr, weak_ptr best practices

Old topic!

Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

14 replies to this topic

### #1Quat  Members

Posted 28 November 2013 - 12:35 AM

One issue I'm having with these smart pointers is the following.  Suppose I have some asset manager class like a texture manager.  I want this manager to be responsible to load textures on demand when needed, and also responsible for freeing the memory.  So it seems like unique_ptr makes sense.  However, many other classes in the application will need a texture pointer at some time.  For example, a RenderItem might need to know the textures to bind for a draw call, or a Sprite might need to know its texture.

I can't give RenderItem or Sprite a copy of unique_ptr, so this means I have to use shared_ptr.  But what I don't like about shared pointer in this case is that I really only want the manager to create or delete textures.  If I have shared_ptrs around, I can't be sure when all of them get destroyed--it won't necessarily be when the manager gets destroyed.

I could use shared_ptr in the manager and then weak_ptr on RenderItem and Sprite, but that feels like a workaround.  I don't really want reference counting at all.

I also thought about just using naked pointers in RenderItem and Sprite, with the understanding that the manager owns them.  But this seems to break one of the purposes of smart pointers, which is that the code makes ownership clear rather than having to have assumptions or knowledge about the system.

What do you think is the best approach for modern C++?

-----Quat

### #2cdoubleplusgood  Members

Posted 28 November 2013 - 02:55 AM

I also thought about just using naked pointers in RenderItem and Sprite, with the understanding that the manager owns them. But this seems to break one of the purposes of smart pointers, which is that the code makes ownership clear rather than having to have assumptions or knowledge about the system.

But this is exactly how ownership is expressed when using smart pointers: Each module holding a unique_ptr or shared_ptr has (shared) ownership. Modules holding a reference or a raw pointer use the object, but they do not control its lifetime.

However, I would use references, not pointers, unless nullptr is a valid value.
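This ownership split can be sketched as follows. A minimal illustration, not code from the thread; the `Texture`, `TextureManager`, and `Sprite` names are hypothetical. The manager alone holds `unique_ptr`s; users get plain references.

```cpp
#include <memory>
#include <string>
#include <unordered_map>

// Hypothetical texture type and manager illustrating the ownership split:
// the manager owns via unique_ptr; users receive a non-owning reference.
struct Texture {
    std::string name;
};

class TextureManager {
public:
    // Loads on demand; the manager alone controls the texture's lifetime.
    Texture& get(const std::string& name) {
        auto it = textures_.find(name);
        if (it == textures_.end()) {
            // std::make_unique is C++14; on C++11 use
            // std::unique_ptr<Texture>(new Texture{name}) instead.
            it = textures_.emplace(name, std::make_unique<Texture>(Texture{name})).first;
        }
        return *it->second;  // non-owning reference handed to callers
    }
private:
    std::unordered_map<std::string, std::unique_ptr<Texture>> textures_;
};

struct Sprite {
    Texture& texture;  // uses the texture, never deletes it
};
```

The reference member in `Sprite` encodes exactly the non-ownership cdoubleplusgood describes: a `Sprite` can use its texture but has no way to delete it.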

http://herbsutter.com/gotw/

### #3dmatter  Members

Posted 28 November 2013 - 03:47 AM

If I have shared_ptrs around, I can't be sure when all of them get destroyed--it won't necessarily be when the manager gets destroyed.

That sounds backwards to me - The reference counting nature of shared_ptrs means you can know when all other references have been lost: Once your manager holds the only shared_ptr then the reference count will equal 1. (See the use_count() function).
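The use_count() observation can be shown in a few lines (a minimal illustration; the helper name is made up):

```cpp
#include <memory>

// Returns true when `owner` is the only remaining shared_ptr to the object,
// i.e. every handed-out copy has been destroyed. Taking by const reference
// avoids creating an extra temporary copy that would inflate the count.
template <typename T>
bool is_unreferenced(const std::shared_ptr<T>& owner) {
    return owner.use_count() == 1;
}
```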

You could also have the manager hold a weak_ptr but have it return shared_ptrs. If you use a custom-deleter then once all the external shared_ptr references are dead then your custom-deleter can signal back to the manager to release the resource.
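A sketch of that weak_ptr-cache-plus-custom-deleter scheme, under the assumption that the manager outlives every handed-out shared_ptr (the names are hypothetical):

```cpp
#include <memory>
#include <string>
#include <unordered_map>

struct Texture { std::string name; };

// The manager caches weak_ptrs and hands out shared_ptrs whose custom
// deleter signals back to the manager when the last external copy dies.
// Assumes the manager outlives all handed-out pointers (the lambda
// captures `this`).
class TextureManager {
public:
    std::shared_ptr<Texture> get(const std::string& name) {
        if (auto cached = cache_[name].lock())
            return cached;                      // still alive: reuse it
        auto tex = std::shared_ptr<Texture>(
            new Texture{name},
            [this, name](Texture* t) {          // custom deleter
                on_released(name);              // signal back to the manager
                delete t;
            });
        cache_[name] = tex;
        return tex;
    }
    int released = 0;                           // exposed for illustration only
private:
    void on_released(const std::string& name) {
        cache_.erase(name);
        ++released;
    }
    std::unordered_map<std::string, std::weak_ptr<Texture>> cache_;
};
```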

There might also be a case for passing around references or raw pointers outside the manager, since these indicate non-ownership semantics, which is exactly what you're trying to achieve. The downside is that without reference counting the manager can't track external usage. Even if your manager isn't going to immediately delete unused textures and only removes them when the manager itself dies, it can be extremely useful for it to indicate whether some textures have not been correctly released by the rest of the system.

### #4Strewya  Members

Posted 28 November 2013 - 04:17 AM

Seems to me like the objects using the textures should have a smaller lifetime than the texture manager.

In that case, the manager will/should be destroyed after all of the objects have been destroyed, and thus, you know that nobody is using the textures anymore.

This way, you could store unique_ptrs in a vector, and simply hand out integer handles (the index of the texture in the vector) or naked pointers to users. It takes a bit more thought to structure the data and code, but IMO that's a good thing.
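The index-handle idea can be sketched like this (a minimal illustration with hypothetical names; no slot reuse or invalidation handling):

```cpp
#include <cstddef>
#include <memory>
#include <string>
#include <vector>

struct Texture { std::string name; };

// The manager owns textures in a vector of unique_ptrs and hands out plain
// integer indices; users never see an owning pointer at all.
class TextureManager {
public:
    using Handle = std::size_t;

    Handle load(const std::string& name) {
        // std::make_unique is C++14; on C++11 use
        // std::unique_ptr<Texture>(new Texture{name}) instead.
        textures_.push_back(std::make_unique<Texture>(Texture{name}));
        return textures_.size() - 1;
    }
    Texture& get(Handle h) { return *textures_[h]; }

private:
    std::vector<std::unique_ptr<Texture>> textures_;
};
```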

devstropo.blogspot.com - Random stuff about my gamedev hobby

### #5L. Spiro  Members

Posted 28 November 2013 - 06:25 AM

POPULAR

Once upon a time there was an L. Spiro.
One day this little L. Spiro was walking harmlessly through the office, when suddenly, sneak attack!
“You WILL optimize Phantasy Star Nova,” a voice commanded.
The poor helpless L. Spiro replied, “B-b-but, it looks haaaard…  Look at how complex some of the scenes are…”
“SILENCE!  YOU WILL DOUBLE ITS FRAMERATE OR YOU WILL NEVER SEE YOUR FAVORITE STUFFED BUNNY AGAIN!!”

The poor defenseless and innocent L. Spiro sat at his desk and looked for things to optimize.  “It looks pretty fast already, what can I do?”, he thought.

But profiling showed massive uses of memory allocations and freeing.
Like on this line:

```cpp
LookTarget_ptr pLookAt = CUtil::TargetListCurrent().IsNull() ? NULL : CUtil::TargetListCurrent().GetSharedPtr();
```

And this one:

```cpp
LookTarget_ptr pNextTarget;
```

L. Spiro was puzzled but then he remembered stories from his grandmother who told of the evils of using std::vector, std::shared_ptr, etc., in games. “Beware the shared_ptr that points to nothing, for it too must allocate a reference counter.”

“That’s right! Grandmother was preparing me for this day all along!”, he exclaimed.
“When these shared pointers get assigned to either default constructor or NULL, they allocate a reference counter and then free it when scope ends since they are just temporaries.”

And with memory usage being the biggest bottleneck in the entire game, by fixing all of the shared_ptr usages, L. Spiro was able to improve the performance significantly, and all rejoiced. But it was only 196% of its original performance and L. Spiro’s bunny was destroyed.

The End.

What do you think is the best approach for modern C++?

If we are talking about games, my advice would be: Don’t use std::shared_ptr.  The only good thing about std::shared_ptr in games is that when you hand your project off to the optimizer guy, he’s going to look like a hero with all the performance he will be able to get back.

In your situation, you don’t even necessarily need shared pointers anyway.  We have a texture manager and objects hold references to textures.  If an object is not deleted before the texture manager, crash and fix it.  When it’s assumed the manager itself must outlive objects that hold pointers to textures, it is not your problem if others try to break that rule.  Besides, in our case the texture manager lasts the lifetime of the game so it really can’t be outlived by any objects that use textures.

If you ever do need shared pointers, there is no harm in rolling your own.

To avoid unexpected allocations mine uses an intrusive model in which no allocations are made until you call CSharedPtr<T>::New().  The shared pointer template class will allocate both the object and the reference counter together, at a predictable time, and with only 1 allocation instead of 2.
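A minimal sketch of such an intrusive model (not L. Spiro's actual class; copy assignment and move support are omitted for brevity): the counter lives next to the object, and `New()` is the only place an allocation happens.

```cpp
#include <cstddef>

// Intrusive, single-threaded shared pointer sketch: the object and its
// reference counter are created together in one allocation by New().
template <typename T>
class SharedPtr {
    struct Block {
        T object;
        std::size_t refs;
    };
public:
    SharedPtr() = default;

    // The only place that allocates: object + counter, one allocation.
    static SharedPtr New() {
        SharedPtr p;
        p.block_ = new Block{T{}, 1};
        return p;
    }

    SharedPtr(const SharedPtr& o) : block_(o.block_) {
        if (block_) ++block_->refs;
    }
    ~SharedPtr() {
        if (block_ && --block_->refs == 0) delete block_;
    }
    SharedPtr& operator=(const SharedPtr&) = delete;  // omitted for brevity

    T& operator*() const { return block_->object; }
    std::size_t use_count() const { return block_ ? block_->refs : 0; }

private:
    Block* block_ = nullptr;
};
```

Because the counter is inside the same `Block` as the object, the pointer and count also share a cache line, which is part of the appeal of intrusive designs.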

And that’s not even mentioning all the atomic increments and decrements needed by std::shared_ptr for thread safety.

In my world, there is a shared pointer and then there is a thread-safe shared pointer.  Use as necessary.

I would avoid std::vector for similar “not made for games” reasons.

L. Spiro

Edited by L. Spiro, 28 November 2013 - 06:27 AM.

### #6Bregma  Members

Posted 28 November 2013 - 06:28 AM

POPULAR

Smart pointers are not an excuse to be ignorant about memory ownership issues.

In this case, you say up front that there is to be a single, unique owner of the allocated memory:  the texture manager.  That's a strong design, and the way that should be done in C++ is std::unique_ptr (notice the way the name matches the purpose).  The texture manager owns the memory using a std::unique_ptr, and allows weak references to the textures using raw pointers.

There's nothing wrong with raw pointers in C++ as long as you've thought out the ownership issues.  Seeing new and delete in code (outside of the std::unique_ptr constructor, at least until std::make_unique() appears in the standard) is a sign you may not have thought through ownership.

Stephen M. Webb
Professional Free Software Developer

### #7SeanMiddleditch  Members

Posted 28 November 2013 - 06:12 PM

L. Spiro was puzzled but then he remembered stories from his grandmother who told of the evils of using std::vector, std::shared_ptr, etc., in games. “Beware the shared_ptr that points to nothing, for it too must allocate a reference counter.”

+1 because your post is quite similar to what I was going to write.  Don't use shared_ptr.

Just wanted to note that this particular quoted line is not true of any good std::shared_ptr implementation (I don't know about the one used in the Vita toolchain); a null shared_ptr can and should have both a null pointer to the data and a null pointer to its reference count (or a single null pointer if enable_shared_from_this or the like is used).  Memory is only allocated when needed and never during default construction; that's true for std::vector and the other standard containers, too, at least in every standard implementation I've seen in recent years.  There are a lot of custom types often called things like shared_ptr that behave quite oddly, though.  The engine we've been using on our current project makes heavy (unfortunate) use of shared_ptr and weak_ptr types that act _almost_ like, but not identically to, the C++11 std:: versions.  This confuses some of our junior devs who are familiar with C++11, and makes it hard to teach C++11 material to some of the more veteran developers, since practice and theory aren't matching up.

I'd go further to mention that smart pointers of any type are a huge pain when debugging.  Since most compilers don't have something like GCC's -Og (optimize with debugging in mind) you are often stuck choosing between easy-to-debug builds with zero optimization or nightmare-to-debug builds with high optimization.  Since every smart pointer dereference is a function call you end up taking huge performance hits and suffer other annoyances (like when trying to step into code) in those non-optimized debug builds.

std::unique_ptr is a fantastic idea but those debugging issues bog it down, too.  It's unfortunate the C++ committee didn't add an owning pointer to the core language, a la Rust; maybe in the future (I'd wager it has a higher likelihood of happening sooner than Visual C++ gaining something like -Og does).

Game Developer, C++ Geek, Dragon Slayer - http://seanmiddleditch.com

C++ SG14 "Games & Low Latency" - Co-chair - public forums

Wargaming Seattle - Lead Server Engineer - We're hiring!

### #8Servant of the Lord  Members

Posted 28 November 2013 - 06:45 PM

My view on smart pointers.

It's perfectly fine to abbreviate my username to 'Servant' or 'SotL' rather than copy+pasting it all the time.
All glory be to the Man at the right hand... On David's throne the King will reign, and the Government will rest upon His shoulders. All the earth will see the salvation of God.
Of Stranger Flames -

### #9King Mir  Members

Posted 29 November 2013 - 12:56 AM

If you have an asset manager then that class should ensure the lifetime of assets. So you don't need shared_ptr or unique_ptr, except possibly in the internals of that manager. However, an asset manager should ideally not give out raw pointers either, but smarter handles to assets. That way you can provide safeguards to prevent access to freed resources, keep track of which resources are used, and not require the user to explicitly free resources unless they choose to.
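One common shape for such a "smarter handle" is a slot index plus a generation counter, so access through a stale handle can be detected even after the slot is freed. A hypothetical sketch (all names are made up for illustration):

```cpp
#include <cstdint>
#include <string>
#include <vector>

struct Texture { std::string name; };

// Handle = slot index + generation. If the slot has been freed (and perhaps
// reused), the stored generation no longer matches and the handle is invalid.
struct Handle {
    std::uint32_t slot;
    std::uint32_t generation;
};

class TextureManager {
    struct Slot {
        Texture texture;
        std::uint32_t generation;
        bool alive;
    };
public:
    Handle create(const std::string& name) {
        slots_.push_back({Texture{name}, 1, true});
        return {static_cast<std::uint32_t>(slots_.size() - 1), 1};
    }
    void destroy(Handle h) {
        if (valid(h)) {
            slots_[h.slot].alive = false;
            ++slots_[h.slot].generation;  // invalidates outstanding handles
        }
    }
    bool valid(Handle h) const {
        return h.slot < slots_.size() && slots_[h.slot].alive &&
               slots_[h.slot].generation == h.generation;
    }
    // Safeguarded access: a freed resource yields nullptr, never a dangling
    // pointer into reused memory.
    Texture* get(Handle h) { return valid(h) ? &slots_[h.slot].texture : nullptr; }

private:
    std::vector<Slot> slots_;
};
```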

You can also make the handle a transparent interface to the resource, essentially making it a resource object in its own right, except with reference semantics.

As an alternate design, similar to what dmatter suggested, have a factory that gives out shared pointers to resources, keeping track of existing resources with weak pointers. This is essentially the same as above, except you're using shared pointers explicitly instead of wrapping them (or similar functionality) in a handle. This is faster to implement than writing safe handles yourself, but makes it harder to change the design later.

Edited by King Mir, 29 November 2013 - 01:01 AM.

### #10AndyPandyV2  Members

Posted 29 November 2013 - 12:59 AM

“When these shared pointers get assigned to either default constructor or NULL, they allocate a reference counter and then free it when scope ends since they are just temporaries.”

Not true; they allocate nothing until used.  Whatever broken implementation you had might have done this, but it is not how it is supposed to work.

Though I agree that in general you should avoid shared_ptr, or use it only as a last resort; in the vast majority of cases value semantics or unique_ptr are the right choice.

I would avoid std::vector for similar “not made for games” reasons.

Also not true. A legitimate complaint against vector was that in C++03 its allocator model was stateless, which frankly sucked.  This is fixed in C++11, stateful allocators are fully supported, and there is no reason at all not to use vector.
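To illustrate the C++11 stateful-allocator point, here is a minimal (hypothetical) allocator that carries per-instance state, something the C++03 model could not portably express. C++11's `allocator_traits` fills in the rest of the allocator interface:

```cpp
#include <cstddef>
#include <vector>

// Stateful allocator: each instance carries a pointer to a byte counter.
template <typename T>
struct CountingAllocator {
    using value_type = T;

    std::size_t* allocated;  // per-instance state

    explicit CountingAllocator(std::size_t* counter) : allocated(counter) {}
    template <typename U>
    CountingAllocator(const CountingAllocator<U>& o) : allocated(o.allocated) {}

    T* allocate(std::size_t n) {
        *allocated += n * sizeof(T);
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t) { ::operator delete(p); }
};

// Two instances compare equal iff they share the same state (required so
// containers know when memory from one can be freed by the other).
template <typename T, typename U>
bool operator==(const CountingAllocator<T>& a, const CountingAllocator<U>& b) {
    return a.allocated == b.allocated;
}
template <typename T, typename U>
bool operator!=(const CountingAllocator<T>& a, const CountingAllocator<U>& b) {
    return !(a == b);
}
```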

### #11BitMaster  Members

Posted 29 November 2013 - 01:52 AM

“When these shared pointers get assigned to either default constructor or NULL, they allocate a reference counter and then free it when scope ends since they are just temporaries.”

Not true; they allocate nothing until used. Whatever broken implementation you had might have done this, but it is not how it is supposed to work.

Actually Spiro's statement was half correct. When you default construct a boost::shared_ptr, no additional control structure is allocated. When you construct it with nullptr, a control structure is indeed allocated, and the deleter is called on that pointer. That is usually not a problem (because 'delete nullptr;' is safe), but calling some custom deleters is not safe (like GDALClose).
I haven't checked that with std::shared_ptr, but I would assume its semantics are identical to boost::shared_ptr's.
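The distinction is easy to demonstrate with std::shared_ptr (a small illustration; the function names are made up): a default-constructed shared_ptr has no control block and runs no deleter, while one constructed from a (null) pointer with a custom deleter allocates a control block and invokes the deleter on the null pointer when it dies.

```cpp
#include <memory>

int deleter_calls = 0;

// No control block is allocated and no deleter exists.
std::shared_ptr<int> make_default() {
    return std::shared_ptr<int>();
}

// A control block IS allocated (use_count() == 1), and the custom deleter
// is invoked on the stored null pointer when the last copy is destroyed.
std::shared_ptr<int> make_from_null() {
    return std::shared_ptr<int>(static_cast<int*>(nullptr),
                                [](int*) { ++deleter_calls; });
}
```

This is exactly why a deleter that is not null-safe (BitMaster's GDALClose example) can blow up on a shared_ptr that "points to nothing".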

### #12L. Spiro  Members

Posted 29 November 2013 - 04:00 AM

Not true; they allocate nothing until used. Whatever broken implementation you had might have done this, but it is not how it is supposed to work.

Actually Spiro's statement was half correct. When you default construct a boost::shared_ptr, no additional control structure is allocated. When you construct it with nullptr, a control structure is indeed allocated, and the deleter is called on that pointer.

Actually we are all entirely correct (except for one reference to how things are supposed to work).
It’s up to the implementation, which is one of the scariest things about it and one of the biggest reasons to stay away (that and the atomic handling of the reference counters).

Whatever broken implementation you had might have done this, but it is not how it is supposed to work.

Actually, it’s the most recent version of the PlayStation Vita SDK, with C++11 support, and is what I am currently using, because what choice do I have?
But the standard doesn’t define implementation details, so there is no real how it is supposed to work, other than by conforming to the functionality specified by the standard.

So the lack of guarantees on how it is implemented is reason enough to stay away from shared_ptr in general, but there is one extremely major performance issue that is true for all implementations (since the standard specifies that multi-threaded access to a shared pointer is built-in), which is also cause to stay away: atomic reference counting.

That article does not discuss the allocation overhead, but the overhead in atomic increments and decrements necessary for a standard-compliant shared_ptr to adhere to the, “All member functions (including copy constructor and copy assignment) can be called by multiple threads on different instances of shared_ptr without additional synchronization even if these instances are copies and share ownership of the same object,” specification.

All-in-all you don’t really win with std::shared_ptr or boost::shared_ptr.

It’s better to roll 2 of your own (one for single-threaded access and one for multi-threaded access) so that:

1. You are sure of the implementation details.
2. You don’t waste cycles on atomic operations when not necessary.
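The two-flavour idea can be sketched with a counting policy (a hypothetical illustration, not L. Spiro's code; assignment and weak references are omitted): the same pointer template counts with a plain integer for single-threaded use or an atomic for multi-threaded use, so atomic operations are only paid for when requested.

```cpp
#include <atomic>
#include <cstddef>

// Single-threaded counting policy: plain integer, no atomic overhead.
struct PlainCount {
    std::size_t value = 1;
    void inc() { ++value; }
    bool dec() { return --value == 0; }  // true when the last ref is gone
};

// Multi-threaded counting policy: atomic increments and decrements.
struct AtomicCount {
    std::atomic<std::size_t> value{1};
    void inc() { value.fetch_add(1, std::memory_order_relaxed); }
    bool dec() { return value.fetch_sub(1, std::memory_order_acq_rel) == 1; }
};

template <typename T, typename Count = PlainCount>
class RefPtr {
public:
    explicit RefPtr(T* p) : ptr_(p), count_(new Count) {}
    RefPtr(const RefPtr& o) : ptr_(o.ptr_), count_(o.count_) { count_->inc(); }
    ~RefPtr() {
        if (count_->dec()) {
            delete ptr_;
            delete count_;
        }
    }
    RefPtr& operator=(const RefPtr&) = delete;  // omitted for brevity
    T& operator*() const { return *ptr_; }
private:
    T* ptr_;
    Count* count_;
};
```

`RefPtr<T>` is then the cheap single-threaded flavour and `RefPtr<T, AtomicCount>` the thread-safe one.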

L. Spiro

### #13King Mir  Members

Posted 29 November 2013 - 02:27 PM

I think L. Spiro has a point that shared pointers can be imperfectly tuned to particular uses of them, which do not use the full gamut of features a shared pointer allows. If you don't need all of shared pointer's features, and the code is performance critical, then rolling your own smart pointer may be an optimization.

### #14wintertime  Members

Posted 30 November 2013 - 07:52 AM

The simpler alternative is to not share ownership, use unique_ptr in the cache and only give out normal references that are documented to be single use and not to be saved for longer. That avoids everyone creating their own shared pointers, which are quirky, possibly bugged, more easily misused or possibly even worse than the standard one.

### #15King Mir  Members

Posted 01 December 2013 - 02:51 AM

The simpler alternative is to not share ownership, use unique_ptr in the cache and only give out normal references that are documented to be single use and not to be saved for longer. That avoids everyone creating their own shared pointers, which are quirky, possibly bugged, more easily misused or possibly even worse than the standard one.

That would mean you could never release resources, except perhaps on the manager's destruction, because you have no way of knowing when a managed object is no longer in use. It would be more flexible to have a custom handle to the resource.

Also, if you're going to the trouble to write a resource manager, it may make more sense to make a memory pool than using an array of unique pointers pointing to disparate memory.
