design question, meshes and buffers


Hi all,

I'm struggling a bit with a design-related issue in my new D3D11 engine.

Here's the situation:

- I have a low level renderer (d3d11)

- meshes are stored in a mesh class

- there's an IBuffer class (interface), which basically links the API-independent mesh class to the D3D11 low-level renderer

(IBuffer contains the D3D11 buffer, used for either vertices or indices)

So far I've thought of two options for how to bring this together:

1. Give the mesh class two IBuffers as members, one for the vertex buffer and one for the index buffer.

2. Let the low-level renderer have a mesh buffer manager, which stores all the IBuffers.

In this case, the manager returns an ID/handle for each buffer, which is stored in the mesh object (mVtxBufferId and mIndexBufferId).

Both solutions come with advantages and disadvantages.

I'm leaning towards option 1, but there I run into a practical issue:

- the IBuffer class has a templated 'Create' function, which takes a const pointer to the data (either vertices or indices, uints or some struct)

(because of the template, I cannot create the buffer through the constructor)

- the Create function also takes the D3D11 device pointer (needed for creation)
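
For reference, the Create function currently looks roughly like this (simplified from memory; the enum and parameter names are placeholders, not the exact code):

#include <vector>
#include <d3d11.h>
#include <atlbase.h>	// CComPtr

enum eBufferType { VERTEX_BUFFER, INDEX_BUFFER };	// placeholder for the buffer type enum

// templated on the element type, so the same Create works for vertex structs and for index uints
class IBuffer
{
public:
	template<typename T>
	bool Create(ID3D11Device *pDevice, eBufferType pType,
	            const std::vector<T> &pData, bool pDynamic, bool pCpuWritable);

private:
	CComPtr<ID3D11Buffer>	mBuffer;
};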

I could solve this by one of these options:

1. Make the IBuffers in the Mesh public, so I can call the Create function from outside the mesh class

(keeping the mesh API-independent; otherwise I would need a CreateBuffers function in the API-independent mesh class that takes a D3D11 device ptr)

2. Accept that an API-independent class (the mesh) needs a D3D11 device ptr to create buffers. I really don't like this option, btw.

I'm curious how you would solve this and/or if you could give me some 'pointers' on how to continue.

Btw, option 2 works for sure (I've already implemented it), but it doesn't feel clean, because I cannot create the buffer through the constructor (due to the templated function). Option 2 also has the disadvantage that a mesh has to know the ID of its buffer(s) in the manager, meaning that when you 'kill' a buffer in the manager, you cannot reshuffle the std::vector, otherwise the buffer IDs stored in the mesh objects won't match up anymore. But I already figured out that I can use a 'freelist' for that, to fill up the 'gaps' in the vector left by released IBuffers.
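
For the freelist I'm thinking of something along these lines (rough sketch only, not actual engine code):

#include <memory>
#include <vector>
#include "IBuffer.h"	// wherever the IBuffer class lives

// rough sketch: buffer IDs stay valid because released slots are reused instead of erased
class CMeshBufferMgr
{
public:
	// takes ownership of a created buffer and returns the ID the mesh stores (mVtxBufferId / mIndexBufferId)
	int AddBuffer(std::unique_ptr<IBuffer> pBuffer)
	{
		if(!mFreeIds.empty())						// reuse a gap left by a released buffer
		{
			const int id = mFreeIds.back();
			mFreeIds.pop_back();
			mBuffers[id] = std::move(pBuffer);
			return id;
		}
		mBuffers.push_back(std::move(pBuffer));		// no gaps: grow the vector
		return static_cast<int>(mBuffers.size()) - 1;
	}

	void ReleaseBuffer(int pId)
	{
		mBuffers[pId].reset();						// keep the slot, so other IDs stay valid
		mFreeIds.push_back(pId);					// remember the gap (the 'freelist')
	}

	IBuffer* GetBuffer(int pId) const { return mBuffers[pId].get(); }

private:
	std::vector<std::unique_ptr<IBuffer>>	mBuffers;
	std::vector<int>						mFreeIds;
};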

Last but not least, I could also consider some sort of 'factory' pattern, where IBuffers are returned. This would be in line with option 2 above.

Any input is appreciated.



Typically this is solved with an abstract device interface.

Create an IDevice class that has virtual functions to create/destroy ITexture/IBuffer/IShader/... objects.

Each rendering API can implement them as it sees fit, while the high-level code works with abstract handles.

The renderer will also work on those abstract handles and only translate them to API-specific resources when actually submitting the commands to the specific API.

Note that the IDevice implementation can still use some sort of buffer manager internally (for example to provide suballocations into larger buffers), as long as it returns an abstract interface.

Thanks. Would that mean that I pass the abstract device to the mesh class in the renderer system? And that the abstract device interface internally points to the API's device (D3D11 in this case)?

I'm still in doubt whether this isn't a bridge too far for me at the current stage I'm in (learning, playing around).

I cannot imagine how this would look in (pseudo) code.

If this is the case, which of the two options above do you think would be best? (Or, with some tweaking, an option '3'?)


So, this is what I do, and what most game engines / AAA engines that I've used or seen do. Like I said in your previous post, abstract the lower-end device. It shouldn't be that difficult: just google "ID3D11Device" and create a function for each of the CreateXXX functions you find. Then create an "IGFXContext" and copy all of the functions in "ID3D11DeviceContext" that have to do with rendering, i.e. create a "SetVertexBuffer" function, and so on. There should only be one "IDevice" class in the game, so make it global if you have to, but get it done.

The mesh class will handle the creation of its buffers, so it will most likely have a function called "CreateResources" where it calls the "IGFXDevice" class to create all of the buffers it needs. Abstract. Abstract. Abstract. An ID3D11Buffer does not have a function to create itself, so your API should not have one either. There is no point in trying to separate responsibility in your resource classes when D3D already has a perfect level of separation: the Device class is used for resource creation, the DeviceContext is used for rendering.
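
As a rough sketch of what I mean (the names and signatures are just examples, not a complete API):

class IBuffer;
class ITexture;

// one abstract function per ID3D11Device::CreateXXX you actually need
class IGFXDevice
{
public:
	virtual ~IGFXDevice() {}
	virtual IBuffer*  CreateVertexBuffer(const void* pData, unsigned pByteSize) = 0;
	virtual IBuffer*  CreateIndexBuffer(const void* pData, unsigned pByteSize) = 0;
	virtual ITexture* CreateTexture2D(unsigned pWidth, unsigned pHeight) = 0;
};

// one abstract function per rendering-related call on ID3D11DeviceContext
class IGFXContext
{
public:
	virtual ~IGFXContext() {}
	virtual void SetVertexBuffer(IBuffer* pBuffer, unsigned pStride, unsigned pOffset) = 0;
	virtual void SetIndexBuffer(IBuffer* pBuffer) = 0;
	virtual void DrawIndexed(unsigned pIndexCount, unsigned pStartIndex, int pBaseVertex) = 0;
};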

OK, I think I'm getting the point.

- in this case I would have a mesh and a device class, both API independent

- it's no problem to pass this device to the mesh creation function

- in the underlying implementation of the API-independent / abstract device class, the real ID3D11Device is used

-- CDevice has a member ptr to the D3D11 device

-- CDevice has a member function CreateBuffer, which underneath uses the D3D11 device ptr to create the D3D11 buffer (within the IBuffer)

-- In the constructor of the CDevice I can pass the D3D device ptr (and hold the CDevice in a std::unique_ptr, so I can 'manually' create the object through its constructor)

Basically, when you need another API, you only have to change the implementation behind the abstract classes: in this case the device class and the IBuffer class.
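
In rough code I picture the device wrapper like this (just a sketch, not my actual implementation):

#include <d3d11.h>
#include <atlbase.h>	// CComPtr

class IBuffer;

// sketch: API-independent-facing device class that wraps the ID3D11Device
class CDevice
{
public:
	explicit CDevice(ID3D11Device *pDevice) : mD3DDevice(pDevice) {}

	// internally uses the D3D11 device to create the D3D11 buffer inside the IBuffer
	bool CreateBuffer(const void *pData, unsigned pByteSize, IBuffer &pBuffer);

private:
	CComPtr<ID3D11Device>	mD3DDevice;		// member ptr to the D3D11 device
};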

Is this a good 'monkeyproof' explanation of what you mean?

When I do this, I could easily go for option 2 from the initial question, with the addition that the mesh class would need a public function to retrieve the two IBuffers (their pointers), because I don't think I want to make the two IBuffer members of the mesh public.

One last question: where in the design would you place the abstract classes for the device and the device context?

(I would say within the renderer system, blue part of the system overview)

Am I thinking in the right direction?

[Attached image: crealysm11_design_v0.6.jpg - engine system overview]


Yeah, that's the basics of the implementation class. If it exists in the D3D API and you need it, create a class for it. This extends to textures, render target views, constant buffers, structured buffers, and so on. The D3D11/D3D12 API is a perfect example of how things should be abstracted, in my opinion, and all you have to do is create a wrapper class over each piece you need. The Device and DeviceContext should exist in the "RenderSystem", as should all of your resources used for rendering.

As for your mesh class "receiving buffers" through functions, I would persuade you to instead let your mesh make the requests for buffers itself. This abstracts your mesh even more and makes it future-proof: as you advance your engine you may add multiple types of vertex and index buffers, e.g. some for depth-only passes, and you would then have to change all of the functions that hand the mesh its buffers. So keep the mesh's buffers private, but give it a public "CreateResources" function that handles the creation of its buffers.
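
In shorthand, something like this (sketch only; IGFXDevice/IBuffer are the abstract types from before):

#include <vector>

class IBuffer;
class IGFXDevice;

// buffers stay private; the only public entry point for GPU resources is CreateResources
class CMesh
{
public:
	// the mesh asks the (abstract) device for its own buffers, nobody hands them in from outside
	bool CreateResources(IGFXDevice* pDevice);

private:
	std::vector<float>		mVertices;		// whatever vertex layout the mesh uses
	std::vector<unsigned>	mIndices;
	IBuffer*				mVertexBuffer = nullptr;
	IBuffer*				mIndexBuffer  = nullptr;
};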

Thanks.

Regarding this point: "instead allow your mesh to make the requests for buffers".

I can imagine doing this the following way:

- when creating a mesh (constructor), I pass a pointer to the abstract buffer manager

- this pointer is stored in the mesh object

- when a mesh is created (vertices, indices), I can directly call the buffer manager to create the buffers for this mesh

(meaning I won't forget it, compared to doing it manually from the caller)

Is this what you mean?

If so, this would mean I keep/have a MeshBufferMgr in my low-level renderer system (option 2 in the first post of the thread).

This would mean, though, that every mesh object stores a pointer to the buffer manager.
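
In rough pseudo-code I picture it like this (the manager interface and names are just placeholders):

// hypothetical manager interface that hands out buffer IDs/handles
class IMeshBufferMgr
{
public:
	virtual ~IMeshBufferMgr() {}
	virtual int CreateVertexBuffer(const void *pData, unsigned pByteSize) = 0;
	virtual int CreateIndexBuffer(const void *pData, unsigned pByteSize) = 0;
};

// every mesh stores a pointer to the manager and asks it for buffers itself
class C3dMesh
{
public:
	explicit C3dMesh(IMeshBufferMgr *pBufferMgr) : mBufferMgr(pBufferMgr) {}

	bool Create(const void *pVertices, unsigned pVtxBytes, const void *pIndices, unsigned pIdxBytes)
	{
		// done here, so the caller cannot forget to create the buffers
		mVtxBufferId   = mBufferMgr->CreateVertexBuffer(pVertices, pVtxBytes);
		mIndexBufferId = mBufferMgr->CreateIndexBuffer(pIndices, pIdxBytes);
		return mVtxBufferId >= 0 && mIndexBufferId >= 0;
	}

private:
	IMeshBufferMgr	*mBufferMgr		= nullptr;	// stored in every mesh object
	int				mVtxBufferId	= -1;
	int				mIndexBufferId	= -1;
};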


Well, first I would like to ask you two questions: what is the purpose of your buffer manager class, and why would your mesh knowing about the buffer manager mean that you have to create the mesh buffer manager in the low-level renderer system?

All your mesh wants is vertex and index buffers; it doesn't matter whether the buffers are allocated immediately or handed out by a manager, it just wants them. Nothing in a mesh says it has to know whether the vertex and index buffers have an overarching manager class. It makes a call to GDevice->CreateVertexBuffer(...) and expects a result. That is the extent of its knowledge. If you do want a buffer manager class (I personally don't see the point of it), then when you call CreateVertexBuffer, have your CD3D11Device create the vertex buffer, register it with the manager, and return it.
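
Roughly like this (sketch only; it assumes the CD3D11Buffer wrapper from below, and m_BufferManager is just a hypothetical member):

IBuffer* CD3D11Device::CreateVertexBuffer(const void* Data, uint32 Size, uint32 Flags)
{
	// create the actual D3D11 resource (dynamic/staging handling omitted in this sketch)
	D3D11_BUFFER_DESC desc = {};
	desc.ByteWidth = Size;
	desc.Usage     = D3D11_USAGE_DEFAULT;
	desc.BindFlags = D3D11_BIND_VERTEX_BUFFER;

	D3D11_SUBRESOURCE_DATA init = {};
	init.pSysMem = Data;

	ID3D11Buffer* Resource = nullptr;
	if(FAILED(m_Device->CreateBuffer(&desc, Data ? &init : nullptr, &Resource)))
		return nullptr;

	// wrap it, register it with the manager for bookkeeping, hand back the abstract interface
	CD3D11Buffer* Buffer = new CD3D11Buffer(Resource, Size, Flags);
	m_BufferManager.Register(Buffer);	// the manager stays internal to the D3D11 module
	return Buffer;						// the mesh only ever sees an IBuffer*
}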

It works like a charm :)

I've implemented the IGPUDevice already, next step is the context.

My aim is to not have to include any D3D11 or D3D11 low-level renderer headers in my main application (on top of / in front of the engine).

Abstraction is cool but I have to draw a line somewhere I guess.

- ConstantBuffer -> for sure, same as MeshBuffer

- RenderTarget -> perhaps, although DXGI_FORMAT might be D3D11-only, which makes the abstraction a bit different (I would need to pass a DXGI_FORMAT to the abstracted RT class)

- RenderStates and SamplerStates: I won't abstract them for now, perhaps later when I manage them differently (aka better). For now they're just globals within the main application (not that far yet).

Passing the D3D samplerDesc to an abstracted samplerstate doesn't sound useful.
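
If I do abstract the render target later, I'd probably translate an engine-side format enum to DXGI_FORMAT inside the D3D11 code only, something like this (sketch only):

#include <dxgiformat.h>

// API-independent format enum; only the D3D11 layer knows about DXGI_FORMAT
enum class EPixelFormat
{
	RGBA8_Unorm,
	RGBA16_Float,
	D24_Unorm_S8_Uint
};

DXGI_FORMAT ToDXGIFormat(EPixelFormat pFormat)
{
	switch(pFormat)
	{
		case EPixelFormat::RGBA8_Unorm:			return DXGI_FORMAT_R8G8B8A8_UNORM;
		case EPixelFormat::RGBA16_Float:		return DXGI_FORMAT_R16G16B16A16_FLOAT;
		case EPixelFormat::D24_Unorm_S8_Uint:	return DXGI_FORMAT_D24_UNORM_S8_UINT;
	}
	return DXGI_FORMAT_UNKNOWN;
}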

Here are some code snippets.

Thanks again for the advice and helping me in the right direction.


// IRenderer, will also have the IGPUContext when it's done

class IRenderer
{
public:
	IRenderer(ID3D11Device *pDevice);
	~IRenderer();

	IGPUDevice* GetIDevice()		const;

private:
	std::unique_ptr<IGPUDevice>		mDevice;
};

IRenderer::IRenderer(ID3D11Device *pDevice)
{	
	mDevice = std::make_unique<IGPUDevice>(pDevice);
}

IGPUDevice* IRenderer::GetIDevice() const
{
	return mDevice.get();
}

// IGPUDevice

class IGPUDevice
{
public:
	IGPUDevice(ID3D11Device *pDevice);
	~IGPUDevice();

	ID3D11Device*	GetDevicePtr()	const;

	bool CreateBuffer(const D3D11_BUFFER_DESC *pDesc, const D3D11_SUBRESOURCE_DATA *pInitialData, ID3D11Buffer **ppBuffer);
	bool CreateRasterizerState(const D3D11_RASTERIZER_DESC *pRasterizerDesc, ID3D11RasterizerState **ppRasterizerState);
	bool CreateSamplerState(const D3D11_SAMPLER_DESC *pSamplerDesc, ID3D11SamplerState **ppSamplerState);

private:
	CComPtr<ID3D11Device>	mD3DDevice;
};

IGPUDevice::IGPUDevice(ID3D11Device *pDevice) : mD3DDevice(pDevice)
{
	// nothing else to do here; the CComPtr member takes its reference in the initializer list
}

// and just one of the abstracted functions

bool IGPUDevice::CreateBuffer(const D3D11_BUFFER_DESC *pDesc, const D3D11_SUBRESOURCE_DATA *pInitialData, ID3D11Buffer **ppBuffer)
{
	return SUCCEEDED(mD3DDevice->CreateBuffer(pDesc, pInitialData, ppBuffer));
}

For now, the mesh class of the renderer system simply has two IBuffers, and creating a simple box is done like this:


bool C3dMesh::CreateBox(IGPUDevice *pDevice, const float pWidth, const float pHeight, const float pDepth)
{
	CGeometryGenerator::MeshData box;

	CGeometryGenerator geoGen;
	geoGen.CreateBox(pWidth, pHeight, pDepth, box);

	std::vector<Vertex::PosNormTex> tVertices(box.Vertices.size());
	mVertexByteSize = sizeof(Vertex::PosNormTex);

	for(size_t i=0;i<box.Vertices.size();++i)
	{
		tVertices[i].Position	= box.Vertices[i].Position;
		tVertices[i].Normal		= box.Vertices[i].Normal;
		tVertices[i].Texcoord	= box.Vertices[i].Texcoord;
	}

	if(!mVtxBuffer.Create(pDevice->GetDevicePtr(), VERTEX_BUFFER, tVertices, false, false)) return false;
	if(!mIndexBuffer.Create(pDevice->GetDevicePtr(), INDEX_BUFFER, box.Indices, false, false)) return false;

	mNrVertices = box.Vertices.size();
	mNrIndices = box.Indices.size();
	mNrFaces = mNrIndices / 3;

	return true;
}

I might want to change the IBuffer creation, maybe doing it through the constructor, or having some function return the created IBuffer.

I don't know how to do that yet.

Of course, any additional input is appreciated :cool:


You're definitely on the right track, but you haven't quite gotten the abstraction right yet: the IGPUDevice should not know about the D3D11 device, and neither should the IRenderer. I'll give you an example:



enum EBufferFlags
{
	BUF_Dynamic      = 1 << 0,
	BUF_VertexBuffer = 1 << 1,
	BUF_IndexBuffer  = 1 << 2,
	...
};

class IBuffer
{
 public : 
  virtual ~IBuffer(){}
  virtual uint32 GetSize() const = 0;  
  virtual uint32 GetFlags() const = 0;
};

class IGPUDevice
{
public:
	virtual ~IGPUDevice(){}

	virtual IBuffer* CreateVertexBuffer(const void* Data, uint32 Size, uint32 Flags) = 0;
	virtual IBuffer* CreateIndexBuffer(const void* Data, uint32 Size, uint32 Flags) = 0;
	// ... plus CreateTexture, CreateShader, etc. as needed
};

// ... now somewhere in your d3d11 module
class CD3D11Buffer : public IBuffer
{
public:
   CD3D11Buffer(ID3D11Buffer* Resource, uint32 Size, uint32 Flags) :
      m_Resource(Resource),
      m_Size(Size),
      m_Flags(Flags)
   {
   }

  uint32 GetSize() const override { return m_Size;}
  uint32 GetFlags() const override { return m_Flags;}
  ID3D11Buffer* GetResource() const { return m_Resource;}

private:
  uint32 m_Size;
  uint32 m_Flags;
  ComPtr<ID3D11Buffer> m_Resource;


};


class CD3D11Device : public IGPUDevice
{
public:
	IBuffer* CreateVertexBuffer(const void* Data, uint32 Size, uint32 Flags) override
	{
		// ..create the d3d11 buffer using m_Device
		ID3D11Buffer* Buffer = ...;

		// now return the concrete instance
		return new CD3D11Buffer(Buffer, Size, Flags);
	}

	// ... CreateIndexBuffer etc. are implemented the same way

private:
	// the d3d11 device
	ID3D11Device* m_Device;
};



//global instance 
IGPUDevice* g_pDevice;

void InitGraphics()
{
	// ... this is where you choose the proper api during engine start up
	if(GUseD3D11)
	{
		g_pDevice = new CD3D11Device();
	}

	// initialize the device
	g_pDevice->Init();
}


//example of mesh class
class CMesh
{
public:

	void CreateResources()
	{
		m_VertexBuffer = g_pDevice->CreateVertexBuffer(&Vertices[0], Vertices.size() * sizeof(CVector3), BUF_VertexBuffer);
		m_IndexBuffer  = g_pDevice->CreateIndexBuffer(&Indices[0], Indices.size() * sizeof(uint32), BUF_IndexBuffer);
	}

	void SetVertices(...); // this is where you'll set internal data
	void SetIndices(...);

private:

	std::vector<CVector3> Vertices;
	std::vector<uint32> Indices;
	IBuffer* m_VertexBuffer;
	IBuffer* m_IndexBuffer;
};


//now lets say you have multiple global geometry functions
std::vector<CVector3> Verts;
std::vector<uint32> Indices;
GeometryUtil::CreateBox(Verts,Indices,...);

//now create the mesh 
CMesh* pMesh = new CMesh();

//set the data 
pMesh->SetVertices(Verts);
pMesh->SetIndices(Indices);

//create the graphics resources 
pMesh->CreateResources();



