Abstracting my renderer

Started by
28 comments, last by cozzie 7 years, 4 months ago

I prefer to abstract my render API because OpenGL runs on x86/x64 operating systems but not on mobile; OpenGL ES is similar but a different flavor whose minor differences need special handling; and supporting the Xbox, Nintendo Wii or other devices means dealing with yet more render APIs. An abstraction also lets you switch between different versions of one API.

The code stays pretty clean and short, but on the other hand you enter support hell and have to take some care with the compiled virtual functions.

Virtual functions can be a problem, but most CPUs can use indirect pointer resolution and branch prediction to find the right method in the vtable even before the call is executed, thanks to look-ahead (speculative) execution.

You will run into this problem if the execution of the preceding instructions doesn't take enough time to hide the lookup, if the branch predictor gets confused by too many different branches of the same priority, or if the vtable is not in the cache (because too many other memory blocks were requested).
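As a minimal sketch of the difference being discussed (all type names here are illustrative, not from any real engine): the virtual call is dispatched through the vtable at runtime, while the non-virtual call is bound at compile time, so there is no target for the branch predictor to guess.

```cpp
#include <cassert>

// A virtual call goes through a function pointer stored in the
// object's vtable; the CPU must predict the call target.
struct RendererBase {
    virtual int Draw() { return 0; }   // resolved via vtable slot
    virtual ~RendererBase() {}
};

struct GlRenderer : RendererBase {
    int Draw() override { return 1; }  // replaces the vtable slot
};

// Non-virtual equivalent: the compiler emits a direct call, no lookup.
struct DirectRenderer {
    int Draw() { return 1; }
};

// Indirect dispatch: the actual target depends on the runtime type.
int DrawThrough(RendererBase& r) {
    return r.Draw();
}
```

Both calls produce the same result; the difference is purely in how the call target is resolved.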

As phantom suggested, you can also put the different implementations in separate libraries and load the function pointers into the class/namespace at runtime.

I do this, for example, with core functions (CPU/GPU/OS dispatching), and it is also pretty clean.

On top of that, you can change your render code, recompile, and reload the function pointers at runtime if your asset management system is designed for this.

If you plan a headless mode, you have already done most of the work for it, since you have to handle assets without the GPU resources anyway.


using VSyncCallback = void(*)(bool);

class Renderer
{
public:
  Renderer() : Lib(nullptr), VSync(nullptr) {}
  ~Renderer() { if (Lib != nullptr) { SharedLibrary::Free(Lib); } }
  bool IsPresent() const { return VSync != nullptr; }
  bool Init(const char* Driver) {
    Lib = SharedLibrary::Load(Driver);
    if (Lib != nullptr)
    {
      VSync = (VSyncCallback)Lib->GetFunction("vsync");
    }
    // The library must stay loaded for as long as VSync is in use;
    // freeing it here would leave VSync dangling.
    return IsPresent();
  }
  SharedLibrary* Lib;
  VSyncCallback VSync;
};

A couple of years ago I was working on a project where we wrote smoke tests that captured a frame and compared it against a reference picture.

This way we could ensure that our render code worked correctly on different systems.

From time to time you have to specialize the reference pictures for different GPUs, as AMD, Intel and NVIDIA produce slightly different results for texture filtering and alpha blending.

Back then we used R8G8B8 PNG, but today I would use RGB float32 because of HDR.
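A smoke test like that boils down to a per-channel comparison against the reference image with a small tolerance, to absorb the per-GPU filtering and blending differences mentioned above. A minimal sketch over interleaved RGB float32 buffers; the function name and default tolerance are made up for illustration, not taken from that project:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Compare a captured frame against a reference frame, both stored as
// interleaved RGB float32 (3 floats per pixel). Returns true if every
// channel is within 'tolerance' of the reference value.
bool FramesMatch(const std::vector<float>& captured,
                 const std::vector<float>& reference,
                 float tolerance = 1.0f / 255.0f) {
    if (captured.size() != reference.size()) { return false; }
    for (std::size_t i = 0; i < captured.size(); ++i) {
        if (std::fabs(captured[i] - reference[i]) > tolerance) {
            return false;
        }
    }
    return true;
}
```

A real test would load the reference from disk and report which pixel diverged; this only shows the core comparison.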


Do you have an actual valid reason for wanting to abstract your renderer? Do you really need different D3D, Vulkan, OpenGL, etc. renderers? What is having those different renderers going to get you? If it is to allow cross-platform support, then just go OpenGL from the start.

FWIW, OpenGL gets you one platform with three OS's (two of which have no real market share for gaming) -- PC :wink: So it's not really cross platform at all :D

I'm not sure if I understand what you mean with the Virtual functions. Are you saying that the usage of some abstracted class, where through Virtual functions the inherited classes need to implement them, is technically unsafe/ unpredictable?

If so, this sounds strange, because I believe it's nothing different from any form of class inheritance with Virtual functions (and polymorphism).

Crealysm game & engine development: http://www.crealysm.com

Looking for a passionate, disciplined and structured producer? PM me

I'm not sure if I understand what you mean with the Virtual functions. Are you saying that the usage of some abstracted class, where through Virtual functions the inherited classes need to implement them, is technically unsafe/ unpredictable?

If so, this sounds strange, because I believe it's nothing different from any form of class inheritance with Virtual functions (and polymorphism).

It's safe and predictable, but a virtual function forces the compiler to create a vtable and use it to look up the right method pointer.

This is slower than a class without a vtable, and that in turn is slower than a global function or a function pointer.

As phantom mentioned, this can become a problem if you call the slower virtual functions in a hot loop.

PC CPUs have handled this problem with specialized hardware for a long time, but on ARM (newer cores like the A52/A57 can do the same) and PowerPC (the only one left on the market is IBM with Cell2, as far as I know) you can still encounter performance issues.

Another problem with vtables is cache pollution, but that is far outside the scope of this topic.

You want to build a renderer API, not waste time on optimizations that yield a <1% performance increase.

As phantom mentioned, this can become a problem if you call the slower virtual functions in a hot loop.
PC CPUs have handled this problem with specialized hardware for a long time, but on ARM (newer cores like the A52/A57 can do the same) and PowerPC (the only one left on the market is IBM with Cell2, as far as I know) you can still encounter performance issues.


It's not so much that x86/x64 CPUs have specialised hardware for this situation; more that out-of-order execution can cover a multitude of programmer sins. Heavy vtable lookup and 'Typical C++ Bullshit' (to borrow Mike Acton's phrase) wastes resources, but the CPUs kind of cover for you (older ARM and PowerPC architectures, such as those found in the last generation of consoles, cry in that situation) - but as hinted at, it is a waste.

The basic problem is that runtime polymorphism is useful for when you don't know in advance what a type will be, just that they are related (you want to deal with 'animals' but also have specific 'cat', 'dog' and 'cow' types which can be treated the same).

When it comes to something like a renderer this is bullshit.

Let's take the classic 'D3DTexture' and 'OGLTexture' which inherit from an 'ITexture'.

You can write all your code in terms of ITexture, which seems great, but because either one can be passed in at runtime, the compiler has to create code which does the virtual lookup to find out what type of texture you are using.

The problem is this is a waste; your application will never have a D3DTexture and an OpenGLTexture in use at the same time. So you have code which wastes time routing functions to the correct place for a type that will never change during a run.

This is why some engines have either dropped this or are moving away from it.
Instead 'Texture' becomes your type and you have 'common' code and 'per api' code which are split into separate implementation files which are selected between at compile time.
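Sketched out, that split looks something like this. The file names and members are hypothetical; in a real build the backend .cpp is chosen by the build system, and the API-specific implementation is shown inline here only so the sketch is self-contained:

```cpp
#include <cassert>
#include <cstdint>

// Texture.h -- one concrete type, no ITexture, no virtuals.
struct TextureApiData;  // defined differently per backend

class Texture {
public:
    bool Create(uint32_t width, uint32_t height);  // body is per-API
    uint32_t Width() const { return m_width; }
private:
    uint32_t m_width = 0;
    uint32_t m_height = 0;
    // TextureApiData* m_apiData;  // GL handle, D3D pointer, etc.
};

// Texture_GL.cpp or Texture_D3D11.cpp -- exactly one of these files is
// compiled into a given build; the call is resolved at link time.
bool Texture::Create(uint32_t width, uint32_t height) {
    m_width = width;
    m_height = height;
    // ... glGenTextures / ID3D11Device::CreateTexture2D would go here ...
    return width != 0 && height != 0;
}
```

Application code only ever sees `Texture`; there is no runtime dispatch, because the choice was made when the project was built.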

This works particularly well on consoles where your texture type is specialised for Xbox, PlayStation or other APIs. Same with phones; your android target only needs GL|ES backends.

It gets a bit more complicated in the PC world because OSes can deal with multiple APIs. At that point you either pick your per-platform API and be done with it, or you split your backends into either separate exes or separate dynamic libs and group like that.

So on Windows you might only have D3D11 and D3D12 (or even just have all 3).
Linux you have OpenGL and Vulkan.
OSX you have OpenGL and Metal.

In some cases you'll get overlap; Vulkan and OpenGL, for example. In other cases it will be self-contained. (D3D would also semi-overlap with any Xbox code, so it's a bit of a Venn diagram setup.)

Then you ship the correct binaries for the correct platform; be it a separate per-api executable for the target platform or a single API with your rendering code bundled in to a specific dynamic library - with the correct separation of interfaces this should cause you no problem to spin up either; your renderer's primary interface is all the application sees, internally everything is just linked together like normal.


handle lib = LoadLibrary("mycoolD3D12backend.dll");
functionptr cri = loadFunction(lib, "createRendererInstance");
Renderer * renderer = cri(parameters);
// use renderer as normal
As everything should be compiled with the same compiler and settings on the target platform you won't have to worry about odd C++ calling problems or memory issues; the only reason for the C-function to create the interface is just to make life easier.
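A slightly fleshed-out sketch of that pattern (all names here are hypothetical): the backend library exports one extern "C" factory, and the application resolves it with LoadLibrary/GetProcAddress on Windows or dlopen/dlsym elsewhere. The backend side is defined inline below so the sketch is self-contained; in practice it lives in its own dynamic library.

```cpp
#include <cassert>
#include <cstring>

// Primary renderer interface -- the only thing the application sees.
class Renderer {
public:
    virtual const char* BackendName() const = 0;
    virtual ~Renderer() {}
};

// --- Backend library side (e.g. the D3D12 .dll/.so) ---
class D3D12Renderer : public Renderer {
public:
    const char* BackendName() const override { return "d3d12"; }
};

// extern "C" disables C++ name mangling, so GetProcAddress/dlsym can
// find the symbol by the plain name "createRendererInstance".
extern "C" Renderer* createRendererInstance() {
    return new D3D12Renderer();
}

// --- Application side (shown as comments; needs the built library) ---
// using CreateFn = Renderer* (*)();
// void* lib = dlopen("libd3d12backend.so", RTLD_NOW);  // or LoadLibrary
// auto cri = (CreateFn)dlsym(lib, "createRendererInstance");
// Renderer* renderer = cri();
```

As the post notes, the C-style entry point exists only to make symbol lookup easy; past that boundary everything is ordinary C++.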
Why are you using virtual functions and abstract base classes? You aren’t going to compile both Metal and Direct3D 11 into the same executable and there will be no run-time instantiation of these types. You can-but-won’t compile Vulkan into the same executable since it would be PC-only—you would still have to build specific executables for Mac OS X/iOS and Linux without Direct3D 11, so you would already have each API able to compile-out and it would be senseless to add extra complexity via special cases.
If you are only building for one API then you have even less of a reason.

The purpose of polymorphism is entirely run-time. You will never change the renderer type at run-time, and never not know which classes are actually being used, so your design is purely a violation of polymorphism. Stop it.


None of the functions in your example need to be virtual.

In every executable you build there will be one possible set of classes each using a common interface. So build your class hierarchy and structure under the assumption that there will only be one API for rendering.

Here are examples I have posted before:
	/**
	 * Class CIndexBufferBase
	 * \brief The base class for index buffers.
	 *
	 * Description: The base class for index buffers.
	 */
	class CIndexBufferBase {
	public :
		// == Functions.
		/**
		 * Resets the base members back to their defaults.
		 */
		void 								Reset();

		/**
		 * Gets the ID of this vertex buffer segment.
		 *
		 * \return Returns the ID of this vertex buffer segment.
		 */
		__inline uint32_t 						GetId() const;

		/**
		 * Gets the topology.
		 *
		 * \return Returns the topology.
		 */
		__inline uint32_t 						Topology() const;

		/**
		 * Gets the element size.
		 *
		 * \return Returns the size in bytes of the indices.
		 */
		__inline uint32_t 						IndexSize() const;

		/**
		 * Gets the total number of indices in the index buffer.
		 *
		 * \return Returns the total number of indices in the index buffer.
		 */
		__inline uint32_t 						TotalIndices() const;

		/**
		 * Gets the number of primitives.
		 *
		 * \return Returns the number of primitives.
		 */
		__inline uint32_t 						Primitives() const;

	protected :
		// == Various constructors.
										CIndexBufferBase();
										~CIndexBufferBase();


		// == Members.
		/** Size of each element in the buffer. */
		uint32_t							m_ui32ElementSize;

		/** Usage type. */
		uint32_t							m_ui32Usage;

		/** Number of indices. */
		uint32_t							m_ui32TotalIndices;

		/** The total number of primitives in the vertex buffer. */
		uint32_t							m_ui32PrimitiveCount;

		/** Topology. */
		uint32_t							m_ui32Topology;

		/** The index buffer ID. */
		uint32_t							m_ui32Id;
	};
	/**
	 * Class CDirectX11IndexBuffer
	 * \brief A Direct3D 11 index buffer.
	 *
	 * Description: A Direct3D 11 index buffer.
	 */
	class CDirectX11IndexBuffer : public CIndexBufferBase {
	public :
		// == Functions.


	protected :
		// == Various constructors.
										CDirectX11IndexBuffer();
										~CDirectX11IndexBuffer();


		// == Members.
		/** Our index buffer used with Direct3D 11. */
		ID3D11Buffer *							m_pbDirectX11IndexBuffer;

		/** The format of the index buffer. */
		DXGI_FORMAT							m_fFormat;


		// == Functions.
		/**
		 * Create an index buffer for the graphics API.  This should use the data in this object to create
		 *	the buffer.
		 *
		 * \return Returns false if there are any errors during the creation of the API index buffer.  This
		 *	always indicates a lack of resources.
		 */
		bool 								CreateApiIndexBuffer();

		/**
		 * Destroy the index buffer that is compliant with the current graphics API and all related
		 *	resources.
		 */
		void 								ResetApi();

		/**
		 * Render.  This is performed on the first vertex buffer set.  All other vertex buffers must use the same primitive type
		 *	and have the same number of elements active.  If not, the system will throw an error.
		 *
		 * \param _ui32StartVertex Index of the first vertex to load.  Beginning at _ui32StartVertex the correct number of vertices will be read out of the vertex buffer.
		 * \param _ui32TotalPrimitives The total number of primitives to render.
		 */
		void 								RenderApi( LSUINT32 _ui32StartVertex, LSUINT32 _ui32TotalPrimitives ) const;
	};
Same protected API for CDirectX9IndexBuffer, CMetalIndexBuffer, CVulkanIndexBuffer, COpenGlIndexBuffer, and COpenGlEsIndexBuffer.
Each includes API-specific types for that rendering API. ID3D11Buffer and DXGI_FORMAT shown here.

	/**
	 * Class CIndexBuffer
	 * \brief An index buffer.
	 *
	 * Description: An index buffer.
	 */
	class CIndexBuffer : public 
#ifdef LSG_DX9
		CDirectX9IndexBuffer
#elif defined( LSG_DX11 )
		CDirectX11IndexBuffer
#elif defined( LSG_OGL )
		COpenGlIndexBuffer
#elif defined( LSG_OGLES )
		COpenGlEsIndexBuffer
#elif defined( LSG_METAL )
		CMetalIndexBuffer
#elif defined( LSG_VKN )
		CVulkanIndexBuffer
#endif	// #ifdef LSG_DX9
	{
	public :
		// == Various constructors.
										CIndexBuffer();
										~CIndexBuffer();


		// == Functions.
		/**
		 * Creates an index buffer.
		 *
		 * \param _pvIndices Pointer to the actual index data.
		 * \param _ui32TotalIndices Total number of indices to which _pvIndices points.
		 * \param _ui32IndexSize Size, in bytes, of each index in the buffer.
		 * \param _bTakeOwnership If true, the index buffer keeps a copy of _pvIndices for itself.  If false,
		 *	the contents of _pvIndices are used directly by the index buffer.
		 * \param _ibutUsage One of the LSG_INDEX_BUFFER_USAGE_TYPES flags.
		 * \param _ptTopology The primitive topology.
		 * \return Returns true if the index buffer was created.  False indicates a memory/resource failure.
		 */
		 bool 								CreateIndexBuffer( const void * _pvIndices,
			uint32_t _ui32TotalIndices, uint32_t _ui32IndexSize,
			bool _bTakeOwnership,
			LSG_INDEX_BUFFER_USAGE_TYPES _ibutUsage,
			LSG_PRIMITIVE_TOPOLOGY _ptTopology );

		/**
		 * Resets the vertex buffer back to scratch.
		 */
		void 								Reset();


	protected :
		// == Members.


	private :
#ifdef LSG_DX9
		typedef CDirectX9IndexBuffer					Parent;
#elif defined( LSG_DX11 )
		typedef CDirectX11IndexBuffer					Parent;
#elif defined( LSG_METAL )
		typedef CMetalIndexBuffer					Parent;
#elif defined( LSG_OGL )
		typedef COpenGlIndexBuffer					Parent;
#elif defined( LSG_OGLES )
		typedef COpenGlEsIndexBuffer					Parent;
#elif defined( LSG_VKN )
		typedef CVulkanIndexBuffer					Parent;
#endif	// #ifdef LSG_DX9
	};

This is a simple hierarchy. The “magic” happens because all API-specific classes (CDirectX11IndexBuffer, CDirectX9IndexBuffer, CMetalIndexBuffer, CVulkanIndexBuffer, COpenGlIndexBuffer, and COpenGlEsIndexBuffer) are guaranteed to provide the same set of protected functions.
So, here is the implementation for CIndexBuffer::CreateIndexBuffer().

	 bool CIndexBuffer::CreateIndexBuffer( const void * _pvIndices,
		uint32_t _ui32TotalIndices, uint32_t _ui32IndexSize,
		bool _bTakeOwnership,
		LSG_INDEX_BUFFER_USAGE_TYPES _ibutUsage,
		LSG_PRIMITIVE_TOPOLOGY _ptTopology ) {
		Reset();

		m_ui32TotalIndices = _ui32TotalIndices;
		m_ui32ElementSize = _ui32IndexSize;
		m_ui32Usage = _ibutUsage;
		m_ui32PrimitiveCount = CVertexBuffer::PrimitiveCount( _ptTopology, _ui32TotalIndices );
		m_ui32Topology = _ptTopology;

		uint32_t ui32Size = m_ui32ElementSize * m_ui32TotalIndices;
		if ( !ui32Size ) { Reset(); return false; }
		
		// Other verification etc.


		if ( !CreateApiIndexBuffer() ) {
			Reset();
			return false;
		}

		return true;
	 }
CIndexBuffer can call CreateApiIndexBuffer() because it is guaranteed to be implemented in the API classes. Parameters are verified and copied in CIndexBuffer because that logic is the same across all APIs.

No run-time overhead caused by virtual functions. No duplication of data or logic, etc.


As was mentioned earlier, eliminate your macros, and don’t define custom values unless you need to. Again, translating your values into API-specific values is needless run-time overhead.

For Direct3D 11:
	/** Usage types. */
	enum LSG_INDEX_BUFFER_USAGE_TYPES {
		LSG_IBUT_SETONLY						= D3D11_USAGE_IMMUTABLE,						/**< Index data is never read or written to. */
		LSG_IBUT_STANDARD						= D3D11_USAGE_DEFAULT,							/**< Index data is not read or written to often by the CPU. */
		LSG_IBUT_DYNAMIC						= D3D11_USAGE_DYNAMIC,							/**< Index data is updated frequently. */

		LSG_IBUT_FORCE_DWORD						= 0x7FFFFFFF
	};
For OpenGL:
	/** Usage types. */
	enum LSG_INDEX_BUFFER_USAGE_TYPES {
		LSG_IBUT_SETONLY						= GL_STATIC_DRAW,							/**< Index data is never read or written to. */
		LSG_IBUT_STANDARD						= GL_STREAM_DRAW,							/**< Index data is not read or written to often by the CPU. */
		LSG_IBUT_DYNAMIC						= GL_DYNAMIC_DRAW,							/**< Index data is updated frequently. */

		LSG_IBUT_FORCE_DWORD						= 0x7FFFFFFF
	};
For Vulkan:
	/** Usage types. */
	enum LSG_INDEX_BUFFER_USAGE_TYPES {
		LSG_IBUT_SETONLY						= (1 << 0),								/**< Index data is never read or written to. */
		LSG_IBUT_STANDARD						= (1 << 1),								/**< Index data is not read or written to often by the CPU. */
		LSG_IBUT_DYNAMIC						= (1 << 2),								/**< Index data is updated frequently. */

		LSG_IBUT_FORCE_DWORD						= 0x7FFFFFFF
	};
Note that forcing the size of an enumeration via 0x7FFFFFFF is deprecated; you should be using C++11 features to do this but I have not updated my code.




Another example.
Direct3D 11:
	/** The blend types we support.  These are for blending to the backbuffer only. */
	enum LSG_BLEND_MODE {
		LSG_BM_ZERO							= D3D11_BLEND_ZERO,						/**< Blend factor is (0, 0, 0, 0). */
		LSG_BM_ONE							= D3D11_BLEND_ONE,						/**< Blend factor is (1, 1, 1, 1). */
		LSG_BM_SRC_COLOR						= D3D11_BLEND_SRC_COLOR,					/**< Blend factor is (Rs, Gs, Bs, As). */
		LSG_BM_INV_SRC_COLOR						= D3D11_BLEND_INV_SRC_COLOR,					/**< Blend factor is (1 - Rs, 1 - Gs, 1 - Bs, 1 - As). */
		LSG_BM_DST_COLOR						= D3D11_BLEND_DEST_COLOR,					/**< Blend factor is (Rd, Gd, Bd, Ad). */
		LSG_BM_INV_DST_COLOR						= D3D11_BLEND_INV_DEST_COLOR,					/**< Blend factor is (1 - Rd, 1 - Gd, 1 - Bd, 1 - Ad). */
		LSG_BM_SRC_ALPHA						= D3D11_BLEND_SRC_ALPHA,					/**< Blend factor is (As, As, As, As). */
		LSG_BM_INV_SRC_ALPHA						= D3D11_BLEND_INV_SRC_ALPHA,					/**< Blend factor is (1 - As, 1 - As, 1 - As, 1 - As). */
		LSG_BM_DST_ALPHA						= D3D11_BLEND_DEST_ALPHA,					/**< Blend factor is (Ad Ad Ad Ad). */
		LSG_BM_INV_DEST_ALPHA						= D3D11_BLEND_INV_DEST_ALPHA,					/**< Blend factor is (1 - Ad 1 - Ad 1 - Ad 1 - Ad). */
		LSG_BM_SRC_ALPHA_SAT						= D3D11_BLEND_SRC_ALPHA_SAT,					/**< Blend factor is (f, f, f, 1), where f = min(As, 1 - Ad). */

		LSG_BM_FORCE_DWORD						= 0x7FFFFFFF
	};
Vulkan:
	/** The blend types we support.  These are for blending to the backbuffer only. */
	enum LSG_BLEND_MODE {
		LSG_BM_ZERO							= VK_BLEND_FACTOR_ZERO,						/**< Blend factor is (0, 0, 0, 0). */
		LSG_BM_ONE							= VK_BLEND_FACTOR_ONE,						/**< Blend factor is (1, 1, 1, 1). */
		LSG_BM_SRC_COLOR						= VK_BLEND_FACTOR_SRC_COLOR,					/**< Blend factor is (Rs, Gs, Bs, As). */
		LSG_BM_INV_SRC_COLOR						= VK_BLEND_FACTOR_ONE_MINUS_SRC_COLOR,				/**< Blend factor is (1 - Rs, 1 - Gs, 1 - Bs, 1 - As). */
		LSG_BM_DST_COLOR						= VK_BLEND_FACTOR_DST_COLOR,					/**< Blend factor is (Rd, Gd, Bd, Ad). */
		LSG_BM_INV_DST_COLOR						= VK_BLEND_FACTOR_ONE_MINUS_DST_COLOR,				/**< Blend factor is (1 - Rd, 1 - Gd, 1 - Bd, 1 - Ad). */
		LSG_BM_SRC_ALPHA						= VK_BLEND_FACTOR_SRC_ALPHA,					/**< Blend factor is (As, As, As, As). */
		LSG_BM_INV_SRC_ALPHA						= VK_BLEND_FACTOR_ONE_MINUS_SRC_ALPHA,				/**< Blend factor is (1 - As, 1 - As, 1 - As, 1 - As). */
		LSG_BM_DST_ALPHA						= VK_BLEND_FACTOR_DST_ALPHA,					/**< Blend factor is (Ad Ad Ad Ad). */
		LSG_BM_INV_DEST_ALPHA						= VK_BLEND_FACTOR_ONE_MINUS_DST_ALPHA,				/**< Blend factor is (1 - Ad 1 - Ad 1 - Ad 1 - Ad). */
		LSG_BM_SRC_ALPHA_SAT						= VK_BLEND_FACTOR_SRC_ALPHA_SATURATE,				/**< Blend factor is (f, f, f, 1), where f = min(As, 1 - Ad). */

		LSG_BM_FORCE_DWORD						= 0x7FFFFFFF
	};
Now you can plug LSG_BLEND_MODE values directly into an API function (may need a static_cast<>()) without run-time conversions unless necessary (Vulkan and Metal LSG_INDEX_BUFFER_USAGE_TYPES, for example).


L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid

I think an important conclusion (also explicitly mentioned above) is that polymorphism, also in this case, is a runtime thing.
The potential (minimal) performance issue of needing to 'choose' is not applicable after compilation/at runtime.

@LSpiro: when it comes to 'why', I think that having abstract base classes and inherited API-specific classes enables me to keep the frontend code API-independent. This comes with virtual functions, so the end-user/frontend can make an API-independent call, which is directed to the correct API's class implementation (through the virtual functions). I think it's as simple as that.

Currently I'm only using D3D11, but I want to keep the door open for the future and it helps me in learning/ applying abstraction and polymorphism.
It feels like a lot more trouble/ waste to have completely independent/ different systems (per API) which have to be called differently from the frontend.

Even if I won't compile multiple API-supporting versions into one executable, I still want to have organized and reusable code.
So I can change the API in some parameter and then simply re-compile to the corresponding executable.

I have no experience with/knowledge of common interfaces at this moment, but I assume that's what you use to somehow 'know' how an IConstantBuffer has to interact with an IDirect11ConstantBuffer. In your code sample for CreateBuffer, I don't understand how CreateAPIBuffer() would know what to do. That's why for now I'm using abstract classes and virtual functions.

Edit: I did some googling; it seems there's no black-and-white definition/distinction between a (common) interface and abstract classes. An interface has at least one pure virtual function. Just an example: https://www.tutorialspoint.com/cplusplus/cpp_interfaces.htm
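For what it's worth, the distinction usually drawn in C++ is just this (type names here are illustrative, echoing the ones in this thread):

```cpp
#include <cassert>

// "Interface": only pure virtual functions, no data, no implementation.
class IConstantBuffer {
public:
    virtual bool Apply(int slot) = 0;
    virtual ~IConstantBuffer() {}
};

// "Abstract class": still has at least one pure virtual (inherited
// here), but may also carry data and implemented functions.
class ConstantBufferBase : public IConstantBuffer {
public:
    int Size() const { return m_size; }  // implemented, shared by all APIs
protected:
    int m_size = 0;
};

// Concrete, API-specific type (hypothetical name).
class D3D11ConstantBuffer : public ConstantBufferBase {
public:
    bool Apply(int slot) override { m_lastSlot = slot; return true; }
    int m_lastSlot = -1;
};
```

So an interface is simply the degenerate case of an abstract class; the terms blur exactly as the linked tutorial suggests.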


In your code sample for CreateBuffer, I don't understand how CreateAPIBuffer() would know what to do.

It’s implemented in a class literally called CDirectX11IndexBuffer in my example above.

Here is the full LSGDirectX11IndexBuffer.cpp file:
/**
 * Copyright L. Spiro 2014
 * All rights reserved.
 *
 * Written by:	Shawn (L. Spiro) Wilcoxen
 *
 * This code may not be sold or traded for any personal gain without express written consent.  You may use
 *	this code in your own projects and modify it to suit your needs as long as this disclaimer remains intact.
 *	You may not take credit for having written this code.
 *
 *
 * Description: A Direct3D 11 index buffer.
 */

#include "LSGDirectX11IndexBuffer.h"

#ifdef LSG_DX11

#include "../Gfx/LSGGfx.h"
#include "../VertexBuffer/LSGVertexBuffer.h"

namespace lsg {

	// == Various constructors.
	CDirectX11IndexBuffer::CDirectX11IndexBuffer() :
		m_pbDirectX11IndexBuffer( NULL ),
		m_fFormat( DXGI_FORMAT_UNKNOWN ) {
	}
	CDirectX11IndexBuffer::~CDirectX11IndexBuffer() {
		ResetApi();
	}

	// == Functions.
	/**
	 * Create an index buffer for the graphics API.  This should use the data in this object to create
	 *	the buffer.
	 *
	 * \return Returns false if there are any errors during the creation of the API index buffer.  This
	 *	always indicates a lack of resources.
	 */
	bool CDirectX11IndexBuffer::CreateApiIndexBuffer() {
		ResetApi();

		assert( m_ui32ElementSize == 2 || m_ui32ElementSize == 4 );

		D3D11_BUFFER_DESC bdDesc = { 0 };
		bdDesc.Usage = CStd::ImpCast( bdDesc.Usage, m_ui32Usage );
		bdDesc.ByteWidth = m_oobpBuffer.Length();
		bdDesc.BindFlags = D3D11_BIND_INDEX_BUFFER;
		bdDesc.CPUAccessFlags = 0;
		bdDesc.MiscFlags = 0;

		D3D11_SUBRESOURCE_DATA sdInitData;
		sdInitData.pSysMem = m_oobpBuffer.Buffer();
		sdInitData.SysMemPitch = 0;
		sdInitData.SysMemSlicePitch = 0;
		if ( FAILED( CDirectX11::GetDirectX11Device()->CreateBuffer( &bdDesc, &sdInitData, &m_pbDirectX11IndexBuffer ) ) ) { ResetApi(); return false; }

		m_fFormat = m_ui32ElementSize == 2 ? DXGI_FORMAT_R16_UINT : DXGI_FORMAT_R32_UINT;
		return true;
	}

	/**
	 * Destroy the index buffer that is compliant with the current graphics API and all related
	 *	resources.
	 */
	void CDirectX11IndexBuffer::ResetApi() {
		CDirectX11::SafeRelease( m_pbDirectX11IndexBuffer );
		m_fFormat = DXGI_FORMAT_UNKNOWN;
	}

	/**
	 * Render.  This is performed on the first vertex buffer set.  All other vertex buffers must use the same primitive type
	 *	and have the same number of elements active.  If not, the system will throw an error.
	 *
	 * \param _ui32StartVertex Index of the first vertex to load.  Beginning at _ui32StartVertex the correct number of vertices will be read out of the vertex buffer.
	 * \param _ui32TotalPrimitives The total number of primitives to render.
	 */
	void CDirectX11IndexBuffer::RenderApi( LSUINT32 _ui32StartVertex, LSUINT32 _ui32TotalPrimitives ) const {
		CDirectX11::GetDirectX11Context()->IASetPrimitiveTopology( static_cast<D3D11_PRIMITIVE_TOPOLOGY>(Topology()) );
		CDirectX11::GetDirectX11Context()->IASetIndexBuffer( m_pbDirectX11IndexBuffer, m_fFormat, 0 );
		CDirectX11::GetDirectX11Context()->DrawIndexed( CVertexBuffer::VertexCount( static_cast<LSG_PRIMITIVE_TOPOLOGY>(Topology()), _ui32TotalPrimitives ),
			0,
			_ui32StartVertex );
	}

}	// namespace lsg

#endif	// #ifdef LSG_DX11

Look at the macro LSG_DX11.
All files called LSGDirectX11* are wrapped between LSG_DX11 macros, which are defined in the project settings on a per-configuration basis.
If I want to change the graphics API I simply select a different configuration.
[Screenshot: the project's build configurations, one per graphics API, in the IDE's configuration selector]
This enables a macro to specify the platform. You have to do this anyway.

Look at what I already posted in LSGIndexBuffer.h:
	class CIndexBuffer : public 
#ifdef LSG_DX9
		CDirectX9IndexBuffer
#elif defined( LSG_DX11 )
		CDirectX11IndexBuffer
#elif defined( LSG_OGL )
		COpenGlIndexBuffer
#elif defined( LSG_OGLES )
		COpenGlEsIndexBuffer
#elif defined( LSG_METAL )
		CMetalIndexBuffer
#elif defined( LSG_VKN )
		CVulkanIndexBuffer
#endif	// #ifdef LSG_DX9
	{
It’s right there. CIndexBuffer inherits from a different class based on which of these rendering API macros is defined.
Since LSG_DX11 is defined, CIndexBuffer inherits from CDirectX11IndexBuffer, which implements CDirectX11IndexBuffer::CreateApiIndexBuffer() as shown above.
If LSG_OGL is defined instead, it would inherit from COpenGlIndexBuffer, and this function instead:
LSGOpenGlIndexBuffer.cpp:
	bool COpenGlIndexBuffer::CreateApiIndexBuffer() {
		ResetApi();

		assert( m_ui32ElementSize == 1 || m_ui32ElementSize == 2 || m_ui32ElementSize == 4 );
		switch ( m_ui32ElementSize ) {
			case 1 : { m_eIndexType = GL_UNSIGNED_BYTE; break; }
			case 2 : { m_eIndexType = GL_UNSIGNED_SHORT; break; }
			case 4 : { m_eIndexType = GL_UNSIGNED_INT; break; }
		}

		// Try to use an index buffer object.
		// We need the error flag.
		CCriticalSection::CLocker lGetError( COpenGl::m_csCrit );
		glWarnErrorAlways( "Uncaught" );	// Clear error flag.

		COpenGl::glGenBuffers( 1, &m_uiVboId );
		// Errors are actually non-critical here, since we can fall back on using regular index arrays.
		if ( glWarnErrorAlways( "Failed to generate IBO" ) ) {
			if ( m_uiVboId ) {
				COpenGl::glDeleteBuffers( 1, &m_uiVboId );
				m_uiVboId = 0;
			}
			return true;
		}

		CCriticalSection::CLockerS lsLockBindBuffer( m_csCrit );
		COpenGl::BindIndexBuffer( m_uiVboId );

		// Success.  Upload.
		COpenGl::glBufferData( GL_ELEMENT_ARRAY_BUFFER,
			m_oobpBuffer.Length(),
			m_oobpBuffer.Buffer(),
			m_ui32Usage );
		COpenGl::BindIndexBuffer( 0 );
		if ( ::glGetError() != GL_NO_ERROR ) {
			// Again, fall back on vertex arrays.
			COpenGl::glDeleteBuffers( 1, &m_uiVboId );
			m_uiVboId = 0;
			return true;
		}

		return true;
	}
So there should not be any mysteries behind why one gets called and not the other, especially since it’s determined through inheritance, which is what you already plan to use anyway just by adding virtual to each of the API functions.
It’s a waste of run-time, it’s an abuse of polymorphism, and it is not clearer than what I have suggested. The primary difference is that you would have made CreateApiIndexBuffer(), ResetApi(), and RenderApi() virtual unnecessarily.

CreateApiIndexBuffer(), ResetApi(), and RenderApi() are the “interface” you want to implement that changes per platform. The code organization here is the same, just faster and without the unnecessary complexity.
And there is no argument regarding having the compiler enforce implementations as it would via pure virtuals; if you have failed to implement one of the interface functions then compilation will fail.

You have all the same compile-based guarantees, all the same organization (implement interfaces on classes), but faster, cleaner, and correct…er.


L. Spiro


Ah, now I get it.
You use the defines and a macro in the project settings to choose the API.
And depending on that macro you pick the needed classes/objects.
I'll think about that solution. Does that mean that in CIndexBuffer you have to include the headers for all API-specific index buffers? (to be able to create the needed object)


Hey Spiro, how are you implementing your call to "CDirectX11::GetDirectX11Context()"

Is that a... singleton?

