
Neilo

Member Since 01 Dec 2009
Offline Last Active Jul 17 2013 09:00 AM

Topics I've Started

C++ : 'is a' vs 'has a' when extending simple data types

16 June 2010 - 11:43 PM

While researching other people's D3D frameworks and engines, I've noticed that when people encapsulate common D3D structures like D3D11_BUFFER_DESC, there's roughly a 50/50 split between two approaches.

The first one is what I do:

struct BufferDesc : public D3D11_BUFFER_DESC
{
	BufferDesc()
	{
		this->Usage = D3D11_USAGE_DEFAULT;
		this->CPUAccessFlags = 0;
		this->MiscFlags = 0;
		this->StructureByteStride = 0;
	}
};


And the second is, I suppose, a more traditional encapsulation approach:

class AnotherBufferDesc
{
public:
	AnotherBufferDesc()
	{
		desc.Usage = D3D11_USAGE_DEFAULT;
		desc.CPUAccessFlags = 0;
		desc.MiscFlags = 0;
		desc.StructureByteStride = 0;
	}
	// Accessors/mutators...
protected:
	D3D11_BUFFER_DESC desc;
	friend class MyRenderer;
};


I guess I pick the first method because structures like these are glorified parameter blocks for creating the actually useful objects. That said, I often collect objects like render targets and shader resources in plain structs too, because accessors/mutators and protected members seem pointless there as well - there's no logic to encapsulate...

So I guess I'm wondering: is my approach bad practice? I picked it up from abstracting Win32 structures at my day job.

The only reason I can think of to use the second approach is if you wanted to completely encapsulate the structure with a view to supporting different APIs.

Exception from d3d11 dll on app exit

18 May 2010 - 02:42 AM

I am having a problem where an exception gets thrown from D3D11.dll once the last D3D11 object is released in my application... but only on my home PC! My machine at work and my laptop don't exhibit the same issue. My desktop and work PC both use ATI video cards and the Feb 2010 SDK.

I don't really know where to start with this, beyond setting up the D3D debug symbols so I can identify which function is causing the issue, and perhaps reinstalling the SDK - but why would it stop working in the first place?

Below is what I think is the relevant code. Excuse the somewhat crude HRESULT checking; this framework is just a quick and dirty means for me to relearn graphics stuff in terms of D3D11 and HLSL. It's based almost exactly on the D3D10 SDK Tutorial 04. All advice welcome!

Renderer creation and destruction
Renderer::Renderer() :
	_pDevice(NULL),
	_pImmediateContext(NULL),
	_featureLevel(D3D_FEATURE_LEVEL_9_1),
	_pDXGIFactory(NULL),
	_pDXGIDevice(NULL)
{
	Create();
}

Renderer::~Renderer()
{
	Destroy();
}

void Renderer::Create() 
{
	HRESULT hr = D3D11CreateDevice(
		NULL,
		D3D_DRIVER_TYPE_HARDWARE, 
		NULL, 
		D3D11_CREATE_DEVICE_DEBUG,	/* create flags */
		NULL,						/* feature levels */ 
		0, 
		D3D11_SDK_VERSION, 
		&_pDevice,
		&_featureLevel, 
		&_pImmediateContext);
	
	if(SUCCEEDED(hr)) hr = _pDevice->QueryInterface(__uuidof(IDXGIDevice), reinterpret_cast<void**>(&_pDXGIDevice));

	IDXGIAdapter* pDXGIAdapter = NULL;
	if(SUCCEEDED(hr)) hr =  _pDXGIDevice->GetParent(__uuidof(IDXGIAdapter), reinterpret_cast<void**>(&pDXGIAdapter));

	if(SUCCEEDED(hr)) hr = pDXGIAdapter->GetParent(__uuidof(IDXGIFactory), reinterpret_cast<void**>(&_pDXGIFactory));
	//if(SUCCEEDED(hr)) hr = CreateDXGIFactory(__uuidof(IDXGIFactory), reinterpret_cast<void**>(&_pDXGIFactory));

	SAFE_RELEASE(pDXGIAdapter);

	if(hr != S_OK) 
	{
		Destroy();
		throw RenderException(hr);
	}
}

void Renderer::Destroy() 
{ 
	SAFE_RELEASE(_pDevice); 
	SAFE_RELEASE(_pImmediateContext); 
	SAFE_RELEASE(_pDXGIDevice);
	SAFE_RELEASE(_pDXGIFactory);
}
App startup / shutdown
TestApp::TestApp(HINSTANCE hInstance) : 
	Application(hInstance), 
	_pRenderer(NULL), 
	_pWindow(NULL), 
	_pInputLayout(NULL), 
	_pVertexShader(NULL), 
	_pPixelShader(NULL), 
	_pVertexBuffer(NULL),
	_pIndexBuffer(NULL),
	_pVSConstantBuffer(NULL)
{
	Startup();
}

TestApp::~TestApp()
{
	Shutdown();
}

void TestApp::Startup()
{
	URHERE();

	_pRenderer = new Renderer();
	_pWindow = _pRenderer->CreateRenderWindow(_hInstance, L"Test Window", 800, 600, true);

	CreateD3DResources();
	
	_pWindow->Show();

}

void TestApp::Shutdown()
{
	SAFE_DELETE(_pWindow);
	SAFE_DELETE(_pRenderer);
	SAFE_RELEASE(_pInputLayout);
	SAFE_RELEASE(_pVertexShader);
	SAFE_RELEASE(_pPixelShader);
	SAFE_RELEASE(_pVertexBuffer);
	SAFE_RELEASE(_pIndexBuffer);
	SAFE_RELEASE(_pVSConstantBuffer);
}
The CreateD3DResources function
void TestApp::CreateD3DResources()
{
	// Set up the D3D stuff I need for this demo!
	HRESULT hr = S_OK;

	ID3D11Device* pDevice = _pRenderer->GetDevice();
	ID3D11DeviceContext* pImmediate = _pRenderer->GetImmediateContext();

	// Shaders
	ID3D10Blob* pShaderBlob = NULL;

	// Vertex Shader
	if(SUCCEEDED(hr)) hr = D3DX11CompileFromFile(L"..\\WindowTest\\SimpleShader.vsh", NULL, NULL, "VS", "vs_4_0", D3D10_SHADER_DEBUG, NULL, NULL, &pShaderBlob, NULL, NULL);
	if(SUCCEEDED(hr)) hr = pDevice->CreateVertexShader(pShaderBlob->GetBufferPointer(), pShaderBlob->GetBufferSize(), NULL, &_pVertexShader);

	// Input Layout
	D3D11_INPUT_ELEMENT_DESC layout[] =
	{
		{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
		{ "COLOR",  0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 }
	};
	UINT numElements = sizeof(layout) / sizeof(layout[0]);

	if(SUCCEEDED(hr)) hr = pDevice->CreateInputLayout(layout, numElements, pShaderBlob->GetBufferPointer(), pShaderBlob->GetBufferSize(), &_pInputLayout);

	SAFE_RELEASE(pShaderBlob);

	// Pixel Shader
	if(SUCCEEDED(hr)) hr = D3DX11CompileFromFile(L"..\\WindowTest\\SimpleShader.psh", NULL, NULL, "PS", "ps_4_0", D3D10_SHADER_DEBUG, NULL, NULL, &pShaderBlob, NULL, NULL);
	if(SUCCEEDED(hr)) hr = pDevice->CreatePixelShader(pShaderBlob->GetBufferPointer(), pShaderBlob->GetBufferSize(), NULL, &_pPixelShader);

	SAFE_RELEASE(pShaderBlob);

	// Vertex Buffer
	MyVertex vertices[] =
    {
		{ { -1.0f, 1.0f, -1.0f }, { 0.0f, 0.0f, 1.0f, 1.0f } },
		{ { 1.0f, 1.0f, -1.0f }, { 0.0f, 1.0f, 0.0f, 1.0f } },
		{ { 1.0f, 1.0f, 1.0f }, { 0.0f, 1.0f, 1.0f, 1.0f } },
		{ { -1.0f, 1.0f, 1.0f }, { 1.0f, 0.0f, 0.0f, 1.0f } },
		{ { -1.0f, -1.0f, -1.0f }, { 1.0f, 0.0f, 1.0f, 1.0f } },
		{ { 1.0f, -1.0f, -1.0f }, { 1.0f, 1.0f, 0.0f, 1.0f } },
		{ { 1.0f, -1.0f, 1.0f }, { 1.0f, 1.0f, 1.0f, 1.0f } },
		{ { -1.0f, -1.0f, 1.0f }, { 0.0f, 0.0f, 0.0f, 1.0f } },
    };

	D3D11_BUFFER_DESC vertexBufferDesc;
	ZeroMemory(&vertexBufferDesc, sizeof(D3D11_BUFFER_DESC));
	vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
	vertexBufferDesc.Usage = D3D11_USAGE_DEFAULT;
	vertexBufferDesc.ByteWidth = sizeof(vertices); //sizeof(MyVertex) * 8;

	D3D11_SUBRESOURCE_DATA initialData;
	ZeroMemory(&initialData, sizeof(D3D11_SUBRESOURCE_DATA));
	initialData.pSysMem = static_cast<void*>(vertices);

	if(SUCCEEDED(hr)) hr = pDevice->CreateBuffer(&vertexBufferDesc, &initialData, &_pVertexBuffer);

	// Index Buffer
    DWORD indices[] =
    {
        3,1,0,
        2,1,3,

        0,5,4,
        1,5,0,

        3,4,7,
        0,4,3,

        1,6,5,
        2,6,1,

        2,7,6,
        3,7,2,

        6,4,5,
        7,4,6,
    };

	D3D11_BUFFER_DESC indexBufferDesc;
	ZeroMemory(&indexBufferDesc, sizeof(D3D11_BUFFER_DESC));
	indexBufferDesc.BindFlags = D3D11_BIND_INDEX_BUFFER;
    indexBufferDesc.Usage = D3D11_USAGE_DEFAULT;
	indexBufferDesc.ByteWidth = sizeof(indices); //sizeof(DWORD) * 36;

	ZeroMemory(&initialData, sizeof(D3D11_SUBRESOURCE_DATA));
	initialData.pSysMem = static_cast<void*>(indices);

	if(SUCCEEDED(hr)) hr = pDevice->CreateBuffer(&indexBufferDesc, &initialData, &_pIndexBuffer);

	// From SDK Docs Tutorial 4
	// Initialize the world matrix
	_World = XMMatrixMultiply(XMMatrixIdentity(), XMMatrixRotationY(XM_PI * 0.667f));

    // Initialize the view matrix
	XMVECTOR eye = { 0.0f, 1.0f, -5.0f, 1.0f };
	XMVECTOR at = { 0.0f, 1.0f, 0.0f, 1.0f};
	XMVECTOR up = { 0.0f, 1.0f, 0.0f, 1.0f };
	_View = XMMatrixLookAtLH(eye, at, up);

    // Initialize the projection matrix
	_Projection = XMMatrixPerspectiveFovLH((FLOAT)XM_PI * 0.5f, (FLOAT) 800 / (FLOAT) 600, 0.1f, 100.0f);

	// Shader constant buffer
	D3D11_BUFFER_DESC constantBufferDesc;
	ZeroMemory(&constantBufferDesc, sizeof(D3D11_BUFFER_DESC));
	constantBufferDesc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
	constantBufferDesc.Usage = D3D11_USAGE_DYNAMIC;
	constantBufferDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
	constantBufferDesc.ByteWidth = sizeof(VSShaderConstants);

	//XMMATRIX WorldView = XMMatrixMultiply(this->_World, this->_View);
	//VSShaderConstants shaderConstants;
	//shaderConstants.WorldViewProjection = XMMatrixMultiply(WorldView, _Projection);

	//D3D11_SUBRESOURCE_DATA subResource;
	//subResource.pSysMem = static_cast<void*>(&shaderConstants);
	//subResource.SysMemPitch = 0;
	//subResource.SysMemSlicePitch = 0;

	if(SUCCEEDED(hr)) hr = pDevice->CreateBuffer(&constantBufferDesc, NULL/*&subResource*/, &_pVSConstantBuffer);

	D3D11_MAPPED_SUBRESOURCE cBuffer;
	VSShaderConstants* pShaderConstants;

	if(SUCCEEDED(hr)) hr = pImmediate->Map(_pVSConstantBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &cBuffer);
	if(SUCCEEDED(hr))
	{
		pShaderConstants = reinterpret_cast<VSShaderConstants*>(cBuffer.pData);
		pShaderConstants->WorldViewProjection = _World * _View * _Projection;
		pImmediate->Unmap(_pVSConstantBuffer, 0);
	}

	if(hr != S_OK) TRACEF(L"Problem creating D3D Resources! %s, %s\n", __WFUNCTION__, DXGetErrorDescription(hr));

	SAFE_RELEASE(pDevice);
	SAFE_RELEASE(pImmediate);
}
My render function
void TestApp::Update()
{
	// get input
	// update scene
	// draw scene
	ID3D11DeviceContext* pImmediate = _pRenderer->GetImmediateContext(); // This adds to the refcount of the Device Context it returns

	_pWindow->Clear();

	pImmediate->IASetInputLayout(_pInputLayout);
	ID3D11Buffer* vertexBuffers[1];
	vertexBuffers[0] = _pVertexBuffer;
	UINT stride = sizeof(MyVertex);
	UINT offset = 0;
	pImmediate->IASetVertexBuffers(0, 1, vertexBuffers, &stride, &offset);
	pImmediate->IASetIndexBuffer(_pIndexBuffer, DXGI_FORMAT_R32_UINT, 0);
	pImmediate->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

	pImmediate->VSSetShader(_pVertexShader,NULL, 0);
	pImmediate->VSSetConstantBuffers(0, 1, &_pVSConstantBuffer);
	pImmediate->PSSetShader(_pPixelShader, NULL, 0);

	pImmediate->DrawIndexed(36, 0, 0);
	SAFE_RELEASE(pImmediate);

	_pWindow->SwapBuffers();
}

Composing interfaces using other interfaces in C++

29 March 2010 - 09:31 PM

I'm toying with a multi-API (and maybe multi-platform) rendering abstraction, and I have a C++-related question about interfaces. Say I have the following abstract base classes:
class ISwapChain
{
public:
	virtual ~ISwapChain() {}
	virtual void Present() = 0;
};

class IRenderTarget
{
public:
	virtual ~IRenderTarget() {}
	virtual void SomeRenderTargetRelatedMethod() = 0;
};

class IRenderWindow : public ISwapChain, public IRenderTarget
{
public:
	virtual ~IRenderWindow() {}
	virtual void Resize(/* ... */) = 0;
	/* ... */
};
So I declare a D3D-specific render window like this...
class D3DRenderWindow : public MyWin32Lib::Window /* Win32-specific window wrapper */, public IRenderWindow
{
public:
	D3DRenderWindow() {}
	virtual ~D3DRenderWindow() {}

	// ISwapChain stuff
	void Present() {}

	// IRenderWindow stuff
	void Resize() {}
};
It all compiles and works, but I've never really needed to compose "interfaces" together like this in C++, and I was wondering if it's a bad idea. The reason I want to do the above is so I can do something like this:
IRenderer* pRenderer = new Renderer(/* params including D3D, OGL, Software enum */);
IRenderWindow* pWindow = pRenderer->CreateWindow(/* some params */);
Note: I know about the performance cost of runtime polymorphism. This is just an experiment; my main framework uses templates to achieve a similar effect at compile time.

D3D11 SwapChain confusion

11 March 2010 - 07:09 AM

I am trying to use D3D11CreateDevice and IDXGIFactory::CreateSwapChain instead of the usual D3D11CreateDeviceAndSwapChain, for a few reasons. First, I want to keep my rendering abstraction unaware of whether a render target is just a texture or in fact a window. Second, I want to be able to render to multiple windows, each of which requires its own swap chain.

The problem is, if I create an IDXGIFactory from my RenderWindow class and try to create a swap chain with it - even when I query the IDXGIDevice from the ID3D11Device - I get the following warning in the output window:

DXGI Warning: IDXGIFactory::CreateSwapChain: This function is being called with a device from a different IDXGIFactory.

What I have done as an alternative is something like this:
IDXGIDevice* pDXGIDevice = NULL;
if(SUCCEEDED(hr)) hr = pD3D11Device->QueryInterface(__uuidof(IDXGIDevice), reinterpret_cast<void**>(&pDXGIDevice));

IDXGIAdapter* pDXGIAdapter = NULL;
if(SUCCEEDED(hr)) hr = pDXGIDevice->GetParent(__uuidof(IDXGIAdapter), reinterpret_cast<void**>(&pDXGIAdapter));

IDXGIFactory* pDXGIFactory = NULL;
if(SUCCEEDED(hr)) hr = pDXGIAdapter->GetParent(__uuidof(IDXGIFactory), reinterpret_cast<void**>(&pDXGIFactory));
Which seems to work, but I seem to be jumping through a lot of hoops to get where I want. Is there an alternative, or should I just hide that code away in a function somewhere and move on?
