Opinions on standard library <memory> use

10 comments, last by cardinal 10 years, 6 months ago

Hi Guys,

I'm a game programmer enthusiast working on my own engine and I've been musing over using the standard library smart pointers in certain places in the engine. Generally, I prefer to use stack memory allocation for most things but I've found that certain aspects, such as the Vertex array in my Mesh class, are easier to allocate dynamically. For example, my Mesh class signature:


class Mesh
{
	Vertex*	m_pVerts;
	string*	m_pMeshObjectNames;
	D3D11_PRIMITIVE_TOPOLOGY* m_pTopologies;
	uint32_t* m_pReferenceCount;
	uint32_t m_NumVerts;
	bool m_HasNormalsDefined;
	bool m_HasTexCoordsDefined;

	void _LoadOBJMesh(LPSTR);

public:

	Mesh();

	Mesh(const Mesh&);

	Mesh& operator=(const Mesh&);

	~Mesh();

	void LoadMesh(LPSTR, MeshType);

	void LoadMeshAsync(LPSTR, MeshType);
};

The m_pMeshObjectNames and m_pTopologies members are intended to point to dynamic arrays of their respective types; however, at the moment I haven't fully implemented that. Anyway, my focus is the m_pVerts member. Originally, I implemented the pointer members of this class as std::shared_ptr&lt;TYPE[]&gt; and ran into trouble with the [] partial specialization. I've also been wary of using the smart pointers in the standard library, mostly due to my lack of a full understanding of how they work. My implementation now handles the destruction of these dynamic members via reference counting. When I assign or use the copy constructor, the pointers are copied directly and the reference counter is incremented. The members are not actually delete[]d until the reference count has reached 0. Here is my destructor:


Mesh::~Mesh()
{
	if (m_pReferenceCount && !--(*m_pReferenceCount))
	{
		if (m_pVerts)
		{
			delete[] m_pVerts;
			m_pVerts = nullptr;
		}

		if (m_pMeshObjectNames)
		{
			delete[] m_pMeshObjectNames;
			m_pMeshObjectNames = nullptr;
		}

		if (m_pTopologies)
		{
			delete[] m_pTopologies;
			m_pTopologies = nullptr;
		}

		if (m_pReferenceCount)
		{
			delete m_pReferenceCount;
			m_pReferenceCount = nullptr;
		}
	}
}

This code works as intended and I am pleased with it, however I don't know if it would be a widely accepted implementation. My question is: is this kind of code typically frowned upon in the industry? Do industry professionals normally like to use the standard library (or other respected library) smart pointers? I know these are probably very subjective questions, I just want some opinions!

Thanks!

Matt


For m_pMeshObjectNames, why not a vector of strings?

Rather than reference counting, you can use shared pointers and weak pointers to contain and share access to mesh objects.

Use a unique pointer for your vert data.
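A minimal sketch of those three suggestions together (the member and method names here are illustrative, not from the original class):

```cpp
#include <cstdint>
#include <memory>
#include <string>
#include <vector>

struct Vertex { float x, y, z; };

class Mesh
{
	std::vector<std::string> m_MeshObjectNames;  // vector of strings instead of string*
	std::unique_ptr<Vertex[]> m_pVerts;          // sole owner of the vert data
	std::uint32_t m_NumVerts = 0;

public:
	void AllocateVerts(std::uint32_t count)
	{
		m_pVerts = std::make_unique<Vertex[]>(count);  // value-initialized array
		m_NumVerts = count;
	}

	std::uint32_t NumVerts() const { return m_NumVerts; }

	// No destructor or manual reference counting needed: unique_ptr
	// calls delete[] automatically (Mesh becomes move-only).
};
```

Whole meshes can then be shared with std::shared_ptr&lt;Mesh&gt; and observed without ownership through std::weak_ptr&lt;Mesh&gt;.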

Yes, industry professionals use STL smart pointers. The dogged persistence of many pro game coders on this forum was actually the main reason that I started using them. Once you get familiar with their use, they actually make things a lot easier, by saving you a lot of coding and a lot of debugging.

My question to you is: Do you have a reason for not using smart pointers?

void hurrrrrrrr() {__asm sub [ebp+4],5;}

There are ten kinds of people in this world: those who understand binary and those who don't.

Well, I actually do use unique_ptrs and don't have a problem with those. shared_ptrs, however, I tend to have issues with, specifically the partial specialization used for new[] allocations. I don't fully understand the class and therefore avoid it. I have also read articles by many programmers who express dislike of shared_ptr (including Bjarne Stroustrup). Do you know if there are specific platform requirements for using smart pointers? For example, if you develop with the intent to publish on the Xbox, will Microsoft require you to use smart pointers everywhere in your code base?
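For what it's worth, the new[] trouble mentioned above has a standard workaround: give shared_ptr an array deleter explicitly, or use the C++17 shared_ptr&lt;T[]&gt; specialization, which calls delete[] itself. A sketch (the helper names are illustrative):

```cpp
#include <cstddef>
#include <memory>

struct Vertex { float x, y, z; };

// Pre-C++17: shared_ptr<T> calls delete by default, so a new[]
// allocation needs an explicit array deleter.
std::shared_ptr<Vertex> make_verts_cxx11(std::size_t n)
{
	return std::shared_ptr<Vertex>(new Vertex[n](),
	                               std::default_delete<Vertex[]>());
}

// C++17 and later: the [] partial specialization calls delete[]
// itself and provides operator[].
std::shared_ptr<Vertex[]> make_verts_cxx17(std::size_t n)
{
	return std::shared_ptr<Vertex[]>(new Vertex[n]());
}
```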

First parties (Microsoft/Sony/etc.) won't enforce the use of smart pointers. And in general they don't have access to your code.

I use a combination of pointer types (smart or otherwise). I wouldn't generally roll my own reference counting though.

I'm in the process of migrating to use smart pointers.

My main grief with them is verbose syntax, but a few typedefs and auto help with that.

There isn't really any good reason to not use them, afaik, but they do take some time getting used to.

But in your case, for the verts, how about simply using a std::vector instead? There's no need for either a smart pointer or new[]; do a resize() if you know ahead of time how many entries you need, to allocate all the memory in one go, just as you would with new[].

Edit1: Also second that you should not roll your own reference counting. That is what std::shared_ptr is for.

Edit2: Checking for nullptr before delete is unnecessary, and so is setting the pointers to nullptr in the destructor (no one should access them after that anyhow). And if you use proper containers like std::vector and smart pointers for everything, you will not need an explicit destructor in Mesh at all.
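The "no explicit destructor" point can be sketched like this, assuming vector-based members (names are illustrative):

```cpp
#include <cstddef>
#include <string>
#include <vector>

struct Vertex { float x, y, z; };

class Mesh
{
	std::vector<Vertex>      m_Verts;
	std::vector<std::string> m_MeshObjectNames;

public:
	void Reserve(std::size_t vertCount)
	{
		m_Verts.resize(vertCount);  // one allocation, like new Vertex[n]
	}

	std::size_t NumVerts() const { return m_Verts.size(); }

	// Rule of Zero: no user-declared destructor, copy constructor, or
	// assignment operator. The vectors free themselves, and copying a
	// Mesh deep-copies its contents automatically.
};
```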

I have also read articles by many programmers who express dislike of shared_ptr (including Bjarne Stroustrup).

What Stroustrup dislikes is shared ownership, not shared_ptr per se. So shared_ptr is the symptom of the problem rather than the problem itself. He feels that if you have shared ownership you might as well use shared_ptr, but it's better not to have shared ownership in the first place.

@cardinal: Thanks for the reply, man. I have wondered about that.

@Olof: I have purposely written it this way as an exercise in implementing simple heap sharing across copied instances of the same object. Also, I am not completely sold on the standard library collections. I am working as an enthusiast with the time to run myself through exercises. I am mainly looking for opinions from professionals in the industry on whether or not they use the standard library heavily.

@SiCrane: So a design choice where shared ownership is a side effect is a bad design choice basically?

I wouldn't say it's necessarily bad, but shared ownership is used in a lot of cases where it's not needed, especially now with C++11, where move semantics mean that things like unique_ptr can be used where shared_ptr was in older code. Using it when it's not needed is a bad design choice.
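A small sketch of that point: with move semantics, a unique_ptr can hand ownership off through a call chain instead of sharing it, with no reference count at all (the types here are illustrative):

```cpp
#include <memory>
#include <utility>

struct Texture { int id; };

// The factory returns sole ownership; the caller receives it by move.
std::unique_ptr<Texture> load_texture(int id)
{
	return std::make_unique<Texture>(Texture{id});
}

struct Material
{
	std::unique_ptr<Texture> diffuse;

	// Ownership transfers into the Material; the source pointer
	// is left null, so there is never more than one owner.
	explicit Material(std::unique_ptr<Texture> t)
		: diffuse(std::move(t)) {}
};
```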

Well, I am a professional in the industry, and my opinion is that the standard library is pretty awesome, at least now with C++11, and I use it as much as I can.

Saves me tons of time.

Wasn't too fond of it before, but move semantics and lambdas and the addition of auto is what sold it to me.

It's extremely versatile.

In &lt;algorithm&gt; you have the building blocks for most things you'd need to do with the collections, and I find my collection code becomes pretty brief (and thus easy to understand and maintain) if I use it correctly.

If you want to share the vertex buffers themselves, I would still try to use shared_ptr for them.

Here's how it could be done:

http://stackoverflow.com/questions/13061979/shared-ptr-to-an-array-should-it-be-used

Personally, I'd probably write a wrapper class for the vertex array, since there are other attributes you might want to associate with it, for example whether it should use a dynamic or static vertex buffer on the GPU.

Then that class will manage the deletion of the array (which should be owned by a unique_ptr), and you share that class with a shared_ptr.
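That wrapper idea might look something like this (the VertexArray name and the dynamic/static flag are illustrative, not from the thread):

```cpp
#include <cstdint>
#include <memory>

struct Vertex { float x, y, z; };

// Owns the raw vertex array plus the GPU-related attributes that
// belong with it.
class VertexArray
{
	std::unique_ptr<Vertex[]> m_Verts;  // sole owner of the allocation
	std::uint32_t m_NumVerts = 0;
	bool m_Dynamic = false;             // dynamic vs. static GPU buffer

public:
	VertexArray(std::uint32_t count, bool dynamic)
		: m_Verts(std::make_unique<Vertex[]>(count)),
		  m_NumVerts(count),
		  m_Dynamic(dynamic) {}

	std::uint32_t size() const { return m_NumVerts; }
	bool dynamic()      const { return m_Dynamic; }
	Vertex& operator[](std::uint32_t i) { return m_Verts[i]; }
};

// Several meshes can share one VertexArray; the last shared_ptr
// to go away destroys it (and unique_ptr runs delete[]).
using SharedVerts = std::shared_ptr<VertexArray>;
```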

I definitely think shared ownership has its uses, but I can understand a computer scientist might think it slightly un-kosher...

Thank you for your posts, guys! I am really appreciating the discussion. Olof, this is exactly what I was looking for: your opinion, with reasoning, I can get behind. I do have a question about the collections in the standard library, though. Is there any reason I should worry about performance when using a vector over a basic array?

This topic is closed to new replies.
