Irlan

Member Since 08 May 2012
Online Last Active Today, 10:01 PM

Topics I've Started

Octree Implementation

22 December 2014 - 10:50 PM

Hello everyone. Just sharing that I've implemented my version of a pre-allocated octree with insertion, update, and removal operations - here. I've seen almost no articles about octrees on this site (just one or two), so the example is for anyone who wants to learn a bit about them. Make good use of it.
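
The rough shape of the idea, as a minimal sketch (the names here are only illustrative, not the code linked above):

#include <cstdint>
#include <vector>

struct SAabb { float vMin[3]; float vMax[3]; };

// All nodes live in one pre-allocated pool; children are indices, not pointers.
struct SOctreeNode {
	SAabb aabbBounds;               // Region of space this node covers.
	int32_t i32Children[8];         // Indices into the node pool, -1 if unused.
	std::vector<uint32_t> vObjects; // Objects stored at this node.
};

class COctree {
public:
	explicit COctree(size_t _uiMaxNodes) { m_vNodes.reserve(_uiMaxNodes); } // Pre-allocate the pool once.
	// Insert, Update, and Remove then walk the tree by index instead of allocating nodes.
private:
	std::vector<SOctreeNode> m_vNodes;
};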
 
Cheers,
Irlan

Shadow Mapping Depth Buffer

18 November 2014 - 08:21 AM

I am trying to implement shadow mapping using DirectX 11.

The light depth buffer is rendered in the first pass and the camera view in the second. In the first pass I write only the depth information with this vertex shader:

cbuffer CB_PER_INSTANCE : register(b0) {
	float4x4 mWorld;
	float4x4 mCamView;
	float4x4 mCamProj;
	float4x4 mNormal;
	float4x4 mLightView;
	float4x4 mLightProj;
};

void main(in float3 _vLocalPos : POSITION0, out float4 _vOutPos : SV_POSITION) {
	_vOutPos = mul(mWorld, float4(_vLocalPos, 1.0f));
	_vOutPos = mul(mLightView, _vOutPos);
	_vOutPos = mul(mLightProj, _vOutPos);
}
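
On the C++ side, the depth-only pass is bound roughly like this (just a sketch with placeholder names, not my exact code):

#include <d3d11.h>

// Bind only the shadow map's depth-stencil view; the first pass writes depth only,
// so no color render target and no pixel shader are needed.
void BindDepthOnlyPass(ID3D11DeviceContext* _pContext, ID3D11DepthStencilView* _pShadowDSV, const D3D11_VIEWPORT& _vpShadow) {
	_pContext->ClearDepthStencilView(_pShadowDSV, D3D11_CLEAR_DEPTH, 1.0f, 0);
	_pContext->OMSetRenderTargets(0, NULL, _pShadowDSV); // Depth-stencil view only, no render targets.
	_pContext->RSSetViewports(1, &_vpShadow);            // Viewport matching the shadow map size.
	_pContext->PSSetShader(NULL, NULL, 0);               // No pixel shader for the depth pass.
	// ...then draw the scene with the depth-only vertex shader above.
}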

In the second pass, I render my scene normally with these two shaders (the vertex shader first, then the pixel shader):

 

struct VS_IN {
	float3 vLocalPos : POSITION0;
	//There are other members here.
};

struct PER_INSTANCE_DATA {
	float4x4 mWorld;
	float4x4 mCamView;
	float4x4 mCamProj;
	float4x4 mNormal;
	float4x4 mLightView;
	float4x4 mLightProj;
};

cbuffer CB_PER_INSTANCE : register(cb0) {
	PER_INSTANCE_DATA pidPerInstance;
};

struct VS_OUT {
	float4 vViewPos : TEXCOORD0;
	float3 vViewNormal : TEXCOORD1;
	float2 vTexCoord : TEXCOORD2;
	float4 vLightProjPos : TEXCOORD3;
};

void main(in VS_IN _viIn, out VS_OUT _voOut, out float4 _vOutPos : SV_POSITION) {
	float4 vWorldPos = mul( pidPerInstance.mWorld, float4(_viIn.vLocalPos, 1.0f) );

	//Camera Space.
	_voOut.vViewPos = mul(pidPerInstance.mCamView, vWorldPos);
	_vOutPos = mul(pidPerInstance.mCamProj, _voOut.vViewPos);

	_voOut.vViewNormal = mul( (float3x3)(pidPerInstance.mNormal), _viIn.vLocalNormal );
	_voOut.vTexCoord = _viIn.vLocalTexCoord;

	//Light Space.
	_voOut.vLightProjPos = mul(pidPerInstance.mLightView, vWorldPos);
	_voOut.vLightProjPos = mul(pidPerInstance.mLightProj, _voOut.vLightProjPos);
}

// Pixel shader.
struct VS_OUT {
	float4 vViewPos : TEXCOORD0;
	float3 vViewNormal : TEXCOORD1;
	float2 vTexCoord : TEXCOORD2;
	float4 vLightProjPos : TEXCOORD3;
};

Texture2D t2dDepthMap : register(t0);
SamplerState ssShadowSampler : register(s0) {
	Filter = MIN_MAG_MIP_LINEAR;
	AddressU = Wrap;
	AddressV = Wrap;
};

void main(in VS_OUT _voOut, out float4 _vOutColor : SV_TARGET) {
	// Perspective divide, then remap to projective texture coordinates.
	float2 vLightProjTexCoord = _voOut.vLightProjPos.xy / _voOut.vLightProjPos.w;
	vLightProjTexCoord.x = ( vLightProjTexCoord.x / 2.0f ) + 0.5f;
	vLightProjTexCoord.y = ( vLightProjTexCoord.y / 2.0f ) - 0.5f;

	// Depth of this pixel as seen from the light.
	float fThisDepthFromLight = _voOut.vLightProjPos.z / _voOut.vLightProjPos.w;

	// Depth stored in the shadow map at that position.
	float fShadowMapPixelDepth = t2dDepthMap.Sample(ssShadowSampler, vLightProjTexCoord).r;

	// Black if the pixel is farther from the light than the stored depth (in shadow), red otherwise.
	if ( fThisDepthFromLight > fShadowMapPixelDepth ) {
		_vOutColor = float4(0.0f, 0.0f, 0.0f, 1.0f);
	}
	else {
		_vOutColor = float4(1.0f, 0.0f, 0.0f, 1.0f);
	}
}

I use Intel GPA to debug a single frame capture. Here are the results:

Camera Z-Buffer (this doesn't actually matter, since the technique uses only the light projection for its computations):

camz.jpg

Light Z-Buffer:

lightz.jpg

What I got:

camcolors.jpg

The light is at (0.0, 0.0, 0.0) looking towards (0.0, 0.0, 1.0) and uses a perspective projection. The last image was just an example of the final colors; when I navigate to another point of view nothing changes, and all I get is black.

I'm also using the Debug Interface, and it reports nothing wrong.
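
For reference, the light view and projection described above would be built roughly like this with DirectXMath (just a sketch; the field of view, aspect ratio, and near/far values are placeholders, not my actual ones):

#include <DirectXMath.h>
using namespace DirectX;

// Light at the origin looking down +Z, with a perspective projection.
// Transpose as needed for the mul(matrix, vector) convention used in the shaders above.
XMMATRIX mLightView = XMMatrixLookAtLH(
	XMVectorSet(0.0f, 0.0f, 0.0f, 1.0f),  // Light position.
	XMVectorSet(0.0f, 0.0f, 1.0f, 1.0f),  // Point the light looks at.
	XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f)); // Up vector.

XMMATRIX mLightProj = XMMatrixPerspectiveFovLH(
	XM_PIDIV2,      // Vertical field of view (placeholder).
	1.0f,           // Aspect ratio (placeholder, square shadow map).
	0.1f, 100.0f);  // Near and far planes (placeholders).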


Win32 Window Creation

27 October 2014 - 12:30 PM

I'm creating a Win32 window to serve as the primary surface for DirectX and OpenGL calls. The problem is that it feels more intuitive to create the window first, initialize the API and load the graphics resources (shaders, textures, vertex streams), and only after that start the simulation loop, but I don't know whether there is some internal problem - maybe on the Windows side - with this create-load-show interval. Another thing is that I'm using the main thread to buffer keyboard events. E.g.:

/*
* Initialize a Win32 window to be shown later.
*/

BOOL CWindow::Init(const std::wstring& _sTitle, unsigned int _ui32Width, unsigned int _ui32Height, bool /*_bFullscreen*/) {
	WNDCLASSEX wcex;
	wcex.cbSize = sizeof(WNDCLASSEX);
	wcex.style = CS_OWNDC;
	wcex.lpfnWndProc = CWindow::WndProc;
	wcex.cbClsExtra = 0;
	wcex.cbWndExtra = 0;
	wcex.hInstance = ::GetModuleHandle(NULL);
	wcex.hIcon = NULL;
	wcex.hCursor = ::LoadCursor(NULL, IDC_ARROW);
	wcex.hbrBackground = reinterpret_cast<HBRUSH>(COLOR_WINDOW + 1);
	wcex.lpszMenuName = NULL;
	wcex.lpszClassName = _sTitle.c_str();
	wcex.hIconSm = ::LoadIcon(NULL, IDI_APPLICATION);

	if ( !::RegisterClassEx(&wcex) ) { return false; }

	m_hWnd = ::CreateWindowEx(WS_EX_OVERLAPPEDWINDOW, wcex.lpszClassName, wcex.lpszClassName,
		WS_OVERLAPPEDWINDOW | WS_CLIPCHILDREN | WS_CLIPSIBLINGS, 
		CW_USEDEFAULT, CW_USEDEFAULT, _ui32Width, _ui32Height, 
		NULL, NULL, wcex.hInstance, reinterpret_cast<LPVOID>(this) );

	return m_hWnd ? true : false;
}

This looks more intuitive than the create-show-load approach, but I don't know whether I'll lose window events in the interval implied by the implementation above. After initialization I can then call:

 

int CWindow::Run() {
	::SetThreadPriority(::GetCurrentThread(), THREAD_PRIORITY_HIGHEST);

	MSG msg = { 0 };
	while (m_bIsOpen) {
		::WaitMessage();
		while (::PeekMessage(&msg, m_hWnd, 0, 0, PM_REMOVE)) {
			::DispatchMessage(&msg);
		}
	}
	return static_cast<int>(msg.wParam);
}

 

At the highest level:

m_wWindow.Init(...);
CGraphics::InitApi( m_wWindow.Hwnd() );
m_wWindow.Show(...);
m_wWindow.Run();
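
For completeness, the this pointer passed as the lpParam of ::CreateWindowEx() gets back into the window procedure in the usual way; a simplified sketch (my actual WndProc differs in the details):

LRESULT CALLBACK CWindow::WndProc(HWND _hWnd, UINT _uiMsg, WPARAM _wParam, LPARAM _lParam) {
	// Store the CWindow* from CREATESTRUCT::lpCreateParams on creation.
	if (_uiMsg == WM_NCCREATE) {
		CREATESTRUCT* pcsCreate = reinterpret_cast<CREATESTRUCT*>(_lParam);
		::SetWindowLongPtr(_hWnd, GWLP_USERDATA, reinterpret_cast<LONG_PTR>(pcsCreate->lpCreateParams));
	}
	CWindow* pwThis = reinterpret_cast<CWindow*>(::GetWindowLongPtr(_hWnd, GWLP_USERDATA));

	switch (_uiMsg) {
		case WM_CLOSE: {
			if (pwThis) { pwThis->m_bIsOpen = false; } // Ends the loop in Run().
			return 0;
		}
		case WM_KEYDOWN:
		case WM_KEYUP: {
			// Buffer the keyboard event here (on the main thread).
			break;
		}
	}
	return ::DefWindowProc(_hWnd, _uiMsg, _wParam, _lParam);
}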

Problem with DirectX 11 Vertex Layout

30 September 2014 - 02:49 PM

I'm having some trouble getting the correct result from my computations, so I'll post my current setup.

I'll omit the other parts of the code, such as the matrices and device creation, because I've already checked them and they are 100% working.

I'm loading models from .fbx files, and I have tried both CCW and CW drawing order. I also tried inverting the Z coordinate of the normals and positions, and flipping the UV's Y coordinate as 1.0f - UV.y.

For the vertex buffers I'm using one per attribute - in my case, five vertex buffers, one for each attribute.

I have DirectX 11's Debug Layer activated (no warnings or errors). I can't get PIX or RenderDoc working, and I don't have the money to buy a version of Visual Studio with the integrated graphics debugger.

 

Vertex Shader:

cbuffer Model : register(b0) {
	float4x4 mWorldView;
	float4x4 mProjection;
	float4x4 mNormal;
};

struct VS_INPUT {
	float4 vPos : POSITION0;
	float2 vTexCoord : TEXCOORD0;
	float3 vNormal : NORMAL0;
	float3 vBinormal : NORMAL1;
	float3 vTangent : TANGENT0;
};

struct VS_OUTPUT {
	float4 vPos : SV_POSITION;
	float4 vViewSpacePos : POSITION0;
	float2 vTexCoord : TEXCOORD0;
	float3 vNormal : NORMAL0;
	float3 vToLight : NORMAL1;
	float3 vToViewer : NORMAL2;
};

void main( in VS_INPUT _viIn, out VS_OUTPUT _voOut ) {
	_voOut.vViewSpacePos = mul( mWorldView, _viIn.vPos );
	_voOut.vTexCoord = _viIn.vTexCoord;
	_voOut.vNormal = _viIn.vNormal;
	_voOut.vToLight = float3(0.0, 10.0, 0.0) - _voOut.vViewSpacePos.xyz;
	_voOut.vToViewer = -_voOut.vViewSpacePos.xyz;
	_voOut.vPos = mul( mProjection, _voOut.vViewSpacePos );
}

Vertex Layout:

D3D11_INPUT_ELEMENT_DESC CDirectX11Shader::m_pcDefaultVertexDesc[VA_MAX] = {
		{ "POSITION", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
		{ "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 1, 16, D3D11_INPUT_PER_VERTEX_DATA, 0 },
		{ "NORMAL", 0, DXGI_FORMAT_R32G32B32_FLOAT, 2, 24, D3D11_INPUT_PER_VERTEX_DATA, 0 },
		{ "NORMAL", 1, DXGI_FORMAT_R32G32B32_FLOAT, 3, 40, D3D11_INPUT_PER_VERTEX_DATA, 0 },
		{ "TANGENT", 0, DXGI_FORMAT_R32G32B32_FLOAT, 4, 56, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
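
The input layout object itself is created from this description and the compiled vertex shader bytecode in the usual way (a sketch; pd3dDevice, pvsBlob, and m_pilInputLayout are placeholder names, and VA_MAX is assumed to be 5):

	// Create the input layout against the vertex shader's input signature.
	HRESULT hResult = pd3dDevice->CreateInputLayout(
		m_pcDefaultVertexDesc, VA_MAX,
		pvsBlob->GetBufferPointer(), pvsBlob->GetBufferSize(),
		&m_pilInputLayout);
	if ( FAILED(hResult) ) { /* Handle the error. */ }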

Here's my simple .fbx loading (which I'll convert to my own file format later).

	std::vector<glm::vec4> vPos(iFaceVertexCount);
	std::vector<glm::vec2> vTexCoord(iFaceVertexCount);
	std::vector<glm::vec3> vNormal(iFaceVertexCount);
	std::vector<glm::vec3> vBinormal(iFaceVertexCount);
	std::vector<glm::vec3> vTangent(iFaceVertexCount);
	std::vector<unsigned int> vIndex(iFaceCount * 3);

	CMesh* pMesh = new CMesh();
	for (int iFace = 0; iFace < iFaceCount; ++iFace) {
		unsigned int ui32RenderPart = static_cast<unsigned int>( vMaterialIndex.GetAt(iFace) );
		if ( (ui32RenderPart + 1) > pMesh->m_vRenderPart.size() ) {
			pMesh->m_vRenderPart.resize(ui32RenderPart + 1);
			CMaterialManager::CreateMaterial(pfbxnMeshNode->GetMaterial(ui32RenderPart), pMesh->m_vRenderPart[ui32RenderPart].miMaterial);
		}
		++pMesh->m_vRenderPart[ui32RenderPart].ui32FaceCount;
	}

	unsigned int ui32LastFaceStart = 0;
	for (size_t ui32RenderPart = 0; ui32RenderPart < pMesh->m_vRenderPart.size(); ++ui32RenderPart) {
		pMesh->m_vRenderPart[ui32RenderPart].ui32FaceStart = ui32LastFaceStart;
		ui32LastFaceStart += pMesh->m_vRenderPart[ui32RenderPart].ui32FaceCount;
		pMesh->m_vRenderPart[ui32RenderPart].ui32FaceCount = 0;
	}
	
	int iVertex = 0;
	for (int iFace = 0; iFace < iFaceCount; ++iFace) {
		int iFaceSize = _pfbxmMesh->GetPolygonSize(iFace);
		int iRenderPart = vMaterialIndex.GetAt(iFace);

		unsigned int ui32IndexStart = pMesh->m_vRenderPart[iRenderPart].ui32FaceStart * iFaceSize; 
		unsigned int ui32IndexCount = pMesh->m_vRenderPart[iRenderPart].ui32FaceCount * iFaceSize;
		unsigned int ui32IndexOffset = ui32IndexStart + ui32IndexCount;

		for (int iFaceVertex = 0; iFaceVertex < iFaceSize; ++iFaceVertex) {
			int iControlPoint = _pfbxmMesh->GetPolygonVertex(iFace, iFaceVertex);

			FbxVector4 vFbxPos = _pfbxmMesh->GetControlPointAt(iControlPoint);
			vPos[iVertex].x = static_cast<float>(vFbxPos[0]);
			vPos[iVertex].y = static_cast<float>(vFbxPos[1]);
			vPos[iVertex].z = static_cast<float>(vFbxPos[2]);
			vPos[iVertex].w = 1.0f;

			if ( lUVNames.GetCount() ) {
				FbxVector2 vFbxUv;
				bool bUnmappedUv;
				_pfbxmMesh->GetPolygonVertexUV(iFace, iFaceVertex, lUVNames[0], vFbxUv, bUnmappedUv);
				vTexCoord[iVertex].x = static_cast<float>(vFbxUv[0]);
				vTexCoord[iVertex].y = static_cast<float>(1.0f - vFbxUv[1]);
			}

			FbxVector4 vFbxNormal;
			_pfbxmMesh->GetPolygonVertexNormal(iFace, iFaceVertex, vFbxNormal);
			vNormal[iVertex].x = static_cast<float>(vFbxNormal[0]);
			vNormal[iVertex].y = static_cast<float>(vFbxNormal[1]);
			vNormal[iVertex].z = static_cast<float>(vFbxNormal[2]);
			
			/*
			FbxVector4 vFbxBinormal;
			_pfbxmMesh->GetPolygonVertex(iFace, iFaceVertex, vFbxNormal);
			vNormal[iVertex].x = static_cast<float>(vFbxNormal[0]);
			vNormal[iVertex].y = static_cast<float>(vFbxNormal[1]);
			vNormal[iVertex].z = static_cast<float>(vFbxNormal[2]);
			*/
			
			/*
			FbxVector4 vFbxTangent;
			_pfbxmMesh->GetPolygonVertex(iFace, iFaceVertex, vFbxNormal);
			vNormal[iVertex].x = static_cast<float>(vFbxNormal[0]);
			vNormal[iVertex].y = static_cast<float>(vFbxNormal[1]);
			vNormal[iVertex].z = static_cast<float>(vFbxNormal[2]);
			*/

			vIndex[ui32IndexOffset + iFaceVertex] = iVertex;

			++iVertex;
		}
		
		//0 1 2 -> 2 1 0 In the case I want to convert to CW.
		//unsigned int ui32TmpIndex = vIndex[ui32IndexOffset + 0];
		//vIndex[ui32IndexOffset + 0] = vIndex[ui32IndexOffset + 2];
		//vIndex[ui32IndexOffset + 2] = ui32TmpIndex;

		++pMesh->m_vRenderPart[iRenderPart].ui32FaceCount;
	}

	pMesh->m_vvbVertexBuffer.resize(5);
	
	pMesh->m_vvbVertexBuffer[0].CreateApi( sizeof(glm::vec4) * vPos.size(), &vPos[0], BU_STATIC);
	pMesh->m_vvbVertexBuffer[0].m_uiStrides = sizeof(glm::vec4);
	pMesh->m_vvbVertexBuffer[0].m_uiStartSlot = 0;

	pMesh->m_vvbVertexBuffer[1].CreateApi(sizeof(glm::vec2) * vTexCoord.size(), &vTexCoord[0], BU_STATIC);
	pMesh->m_vvbVertexBuffer[1].m_uiStrides = sizeof(glm::vec2);
	pMesh->m_vvbVertexBuffer[1].m_uiStartSlot = 1;

	pMesh->m_vvbVertexBuffer[2].CreateApi( sizeof(glm::vec3) * vNormal.size(), &vNormal[0], BU_STATIC);
	pMesh->m_vvbVertexBuffer[2].m_uiStrides = sizeof(glm::vec3);
	pMesh->m_vvbVertexBuffer[2].m_uiStartSlot = 2;
	
	pMesh->m_vvbVertexBuffer[3].CreateApi(sizeof(glm::vec3) * vBinormal.size(), &vBinormal[0], BU_STATIC);
	pMesh->m_vvbVertexBuffer[3].m_uiStrides = sizeof(glm::vec3);
	pMesh->m_vvbVertexBuffer[3].m_uiStartSlot = 3;
	
	pMesh->m_vvbVertexBuffer[4].CreateApi(sizeof(glm::vec3) * vTangent.size(), &vTangent[0], BU_STATIC);
	pMesh->m_vvbVertexBuffer[4].m_uiStrides = sizeof(glm::vec3);
	pMesh->m_vvbVertexBuffer[4].m_uiStartSlot = 4;

	pMesh->m_ibIndexBuffer.CreateApi( sizeof(unsigned int) * vIndex.size(), reinterpret_cast<const void*>(&vIndex[0]), BU_STATIC);

Here are my drawing calls. E.g.:

//For each attribute...
CDirectX11::GetDeviceContext()->IASetVertexBuffers(m_uiStartSlot, 1, &m_pbBuffer, &m_uiStrides, &m_uiOffsets);

Later:


	CDirectX11::GetDeviceContext()->IASetIndexBuffer(m_pd3dBuffer, DXGI_FORMAT_R32_UINT, 0);
	CDirectX11::GetDeviceContext()->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
	CDirectX11::GetDeviceContext()->DrawIndexed(_ui32IndexCount, _ui32StartIndex, 0);

Results:

results.png


Question about game events

18 June 2014 - 05:43 AM

I want to know whether game events should be generated by the Game itself, or whether, for instance, the game world can generate game events based on things that happen inside the world.

So my world can have an EventListener attached to it?

E.g.:

BeginContact -> GenerateGameEvent -> DispatchEventViaEventListener?
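
Something like this minimal sketch is what I have in mind (all the names here are just illustrative):

#include <functional>
#include <vector>

// Illustrative only: a game event raised by the world and forwarded to listeners.
struct SGameEvent {
	enum EType { CONTACT_BEGIN, CONTACT_END } eType;
	unsigned int uiEntityA;
	unsigned int uiEntityB;
};

class CWorld {
public:
	// The Game (or anything else) attaches a listener to the world.
	void AddEventListener(const std::function<void(const SGameEvent&)>& _fnListener) {
		m_vListeners.push_back(_fnListener);
	}

	// Called from the physics BeginContact callback: the world itself generates
	// the game event and dispatches it through the attached listeners.
	void OnBeginContact(unsigned int _uiA, unsigned int _uiB) {
		SGameEvent geEvent = { SGameEvent::CONTACT_BEGIN, _uiA, _uiB };
		for (size_t i = 0; i < m_vListeners.size(); ++i) { m_vListeners[i](geEvent); }
	}

private:
	std::vector<std::function<void(const SGameEvent&)>> m_vListeners;
};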

