unique_ptr, shared_ptr, weak_ptr best practices


One issue I'm having with these smart pointers is the following.  Suppose I have some asset manager class, like a texture manager.  I want this manager to be responsible for loading textures on demand and also for freeing their memory, so unique_ptr seems to make sense.  However, many other classes in the application will need a texture pointer at some point.  For example, a RenderItem might need to know which textures to bind for a draw call, or a Sprite might need to know its texture.

 

I can't give RenderItem or Sprite a copy of a unique_ptr, so this means I have to use shared_ptr.  But what I don't like about shared_ptr in this case is that I really only want the manager to create or delete textures.  If I have shared_ptrs around, I can't be sure when all of them get destroyed--it won't necessarily be when the manager gets destroyed.

 

I could use shared_ptr in the manager and then weak_ptr on RenderItem and Sprite, but that feels like a workaround.  I don't really want reference counting at all. 

 

I also thought about just using naked pointers in RenderItem and Sprite, with the understanding that the manager owns them.  But this seems to defeat one of the purposes of smart pointers, which is that the code makes ownership clear rather than relying on assumptions or outside knowledge about the system.

 

What do you think is the best approach for modern C++?


I also thought about just using naked pointers in RenderItem and Sprite, with the understanding that the manager owns them. But this seems to defeat one of the purposes of smart pointers, which is that the code makes ownership clear rather than relying on assumptions or outside knowledge about the system.

But this is exactly how ownership is expressed when using smart pointers: each module holding a unique_ptr or shared_ptr has (shared) ownership. Modules holding a reference or a raw pointer use the object, but they do not control its lifetime.

However, I would use references, not pointers, unless nullptr is a valid value.

Herb Sutter has some nice articles about this topic (see #89, #90, #91):

http://herbsutter.com/gotw/


If I have shared_ptrs around, I can't be sure when all of them get destroyed--it won't necessarily be when the manager gets destroyed.

 

That sounds backwards to me. The reference-counting nature of shared_ptr means you can know when all other references have been lost: once your manager holds the only shared_ptr, the reference count will equal 1 (see the use_count() function).

 

You could also have the manager hold a weak_ptr but have it return shared_ptrs. If you use a custom-deleter then once all the external shared_ptr references are dead then your custom-deleter can signal back to the manager to release the resource.
 
 
There might also be a case for passing around references or raw pointers outside the manager, since these indicate non-ownership semantics, which is exactly what you're trying to achieve. The downside is that the lack of reference counting means the manager can't track external usage. Even if your manager isn't going to immediately delete unused textures, and only removes them when the manager itself dies, it can be extremely useful if it can indicate whether some textures have not been correctly released by the rest of the system.


Seems to me like the objects using the textures should have a smaller lifetime than the texture manager.

In that case, the manager will/should be destroyed after all of the objects have been destroyed, and thus, you know that nobody is using the textures anymore.

This way, you could store unique_ptrs in a vector, and simply hand out integer handles (index of the texture in the vector) or naked pointers to users. It takes a bit more thought into structuring the data and code, but IMO that's a good thing.



L. Spiro was puzzled but then he remembered stories from his grandmother who told of the evils of using std::vector, std::shared_ptr, etc., in games. “Beware the shared_ptr that points to nothing, for it too must allocate a reference counter.”

 

+1 because your post is quite similar to what I was going to write.  Don't use shared_ptr.

 

Just wanted to note that this particular quoted line is not true of any good std::shared_ptr implementation (I don't know about the one used in the Vita toolchain): a null shared_ptr can and should have both a null pointer to the data and a null pointer to its reference count (or a single null pointer if enable_shared_from_this or the like is used).  Memory is only allocated when needed and never during default construction; this is true for std::vector and the other standard containers, too, at least in every standard implementation I've seen in recent years.

There are a lot of custom types often called things like shared_ptr that behave quite oddly, though.  The engine we've been using on our current project has heavily used (unfortunately) shared_ptr and weak_ptr types that act _almost_ like, but not identically to, the C++11 std:: versions.  This confuses some of our junior devs who are familiar with C++11, and makes it hard to teach C++11 material to some of the more veteran developers, since practice and theory aren't matching up.

 

I'd go further and mention that smart pointers of any type are a huge pain when debugging.  Since most compilers don't have something like GCC's -Og (optimize with debugging in mind), you are often stuck choosing between easy-to-debug builds with zero optimization or nightmare-to-debug builds with high optimization.  Since every smart pointer dereference is a function call, you end up taking huge performance hits and suffering other annoyances (like when trying to step into code) in those non-optimized debug builds.

 

std::unique_ptr is a fantastic idea but those debugging issues bog it down, too.  It's unfortunate the C++ committee didn't add an owning pointer to the core language, a la Rust; maybe in the future (I'd wager it has a higher likelihood of happening sooner than Visual C++ gaining something like -Og does).

If you have an asset manager then that class should ensure the lifetime of assets. So you don't need shared_ptr or unique pointer, except possibly in the internals of that manager. However, an asset manager should ideally not give out raw pointers either, but smarter handles to assets. That way you can provide safeguards to prevent access to freed resources, keep track of which resources are used, and not require the user to explicitly free resources unless they choose to.

You can also make the handle a transparent interface to the resource, essentially making it a resource object in its own right, except with reference semantics.

As an alternate design, similar to what dmatter suggested, have a factory that gives out shared pointers to resources, keeping track of existing resources with weak pointers. This is essentially the same as above, except you're using shared pointers explicitly instead of wrapping them (or similar functionality) in a handle. This is faster to implement than writing safe handles yourself, but makes it harder to change the design later. Edited by King Mir


“When these shared pointers get assigned to either default constructor or NULL, they allocate a reference counter and then free it when scope ends since they are just temporaries.”

 

Not true; they allocate nothing until used.  Whatever broken implementation you had might have done this, but it is not how it is supposed to work.

 

 

Though I agree that, in general, shared_ptr should be avoided or used only as a last resort; the vast majority of the time, value semantics or unique_ptr are the right choice.

 

 

I would avoid std::vector for similar “not made for games” reasons.

 

Also not true. A legitimate complaint against vector was that in C++03 its allocator model was stateless, which frankly sucked.  This was fixed in C++11: stateful allocators are fully supported, and there is no reason at all not to use vector.


“When these shared pointers get assigned to either default constructor or NULL, they allocate a reference counter and then free it when scope ends since they are just temporaries.”

Not true; they allocate nothing until used.  Whatever broken implementation you had might have done this, but it is not how it is supposed to work.

Actually, Spiro's statement was half correct. When you default-construct a boost::shared_ptr, no additional control structure is allocated. When you construct it with a null pointer, a control structure is indeed allocated, and the deleter is called on that pointer. That is usually not a problem (because 'delete nullptr;' is safe), but calling some custom deleters on null is not safe (GDALClose, for example).
I haven't checked this with std::shared_ptr, but I would assume its semantics are identical to boost::shared_ptr's.


Not true; they allocate nothing until used.  Whatever broken implementation you had might have done this, but it is not how it is supposed to work.

 

Actually Spiro's statement was half correct. When you default construct a boost::shared_ptr, no additional control structure is allocated. When you construct it with nullptr, a control structure is indeed allocated, and the deleter is called on that pointer.


Actually we are all entirely correct (except for one reference to how things are supposed to work).
It’s up to the implementation, which is one of the scariest things about it and one of the biggest reasons to stay away (that and the atomic handling of the reference counters).



 

Whatever broken implementation you had might have done this, but it is not how it is supposed to work.

Actually, it’s the most recent version of the PlayStation Vita SDK, with C++11 support, and is what I am currently using, because what choice do I have?
But the standard doesn’t define implementation details, so there is no real how it is supposed to work, other than by conforming to the functionality specified by the standard.

 

So the lack of guarantees about implementation details is reason enough to stay away from shared_ptr in general, but there is one extremely major performance issue that is true for all implementations (since the standard specifies that thread-safe access to the reference count is built in), which is also cause to stay away: atomic reference counting.

Boost and std::shared_ptr are bottlenecks.

That article does not discuss the allocation overhead, but the overhead in atomic increments and decrements necessary for a standard-compliant shared_ptr to adhere to the, “All member functions (including copy constructor and copy assignment) can be called by multiple threads on different instances of shared_ptr without additional synchronization even if these instances are copies and share ownership of the same object,” specification.

 

 

All-in-all you don’t really win with std::shared_ptr or boost::shared_ptr.

It’s better to roll 2 of your own (one for single-threaded access and one for multi-threaded access) so that:

  1. You are sure of the implementation details.
  2. You don’t waste cycles on atomic operations when not necessary.

 

 

L. Spiro

I think L. Spiro has a point that shared pointers can be imperfectly tuned to particular uses of them, which do not use the full gamut of features a shared pointer allows. If you don't need all of shared pointer's features, and the code is performance critical, then rolling your own smart pointer may be an optimization.


The simpler alternative is to not share ownership, use unique_ptr in the cache and only give out normal references that are documented to be single use and not to be saved for longer. That avoids everyone creating their own shared pointers, which are quirky, possibly bugged, more easily misused or possibly even worse than the standard one.


The simpler alternative is to not share ownership, use unique_ptr in the cache and only give out normal references that are documented to be single use and not to be saved for longer. That avoids everyone creating their own shared pointers, which are quirky, possibly bugged, more easily misused or possibly even worse than the standard one.

That would mean you could never release resources, except perhaps on the manager's destruction, because you have no way of knowing when a managed object is no longer in use. It would be more flexible to have a custom handle to the resource.

Also, if you're going to the trouble of writing a resource manager, it may make more sense to use a memory pool rather than an array of unique pointers pointing to disparate memory.
