
theScore

Member Since 18 May 2008
Offline Last Active Aug 07 2014 10:51 AM

Topics I've Started

Big memory problem (memory usage growing very fast)

05 August 2014 - 03:38 AM

Hi!

I am doing deferred rendering with Direct3D 11, and it works well... except that during execution the process's memory usage grows by about 25 MB every second (running at about 300 fps).

 

My rendering loop looks like this:

	while (exitThread == false)
	{
		GBuffer();        // record the G-buffer pass on the deferred context
		lightingPass();   // record the lighting pass on the deferred context

		// bake the recorded commands into a command list and play it back
		deferredContext->FinishCommandList(TRUE, &commandList);
		deviceContext->ExecuteCommandList(commandList, FALSE);

		swapChain->Present(0, 0);
	}

In the G-buffer pass I render to 3 textures: colors, normals, and positions (8 bytes per pixel each).

What could cause memory usage to grow like this during execution?
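For reference, this is the per-frame pattern I believe the API expects (a minimal sketch; I am assuming commandList is a plain ID3D11CommandList* that my code owns). Do I have to Release the list explicitly every frame like this?

// Sketch of one frame, assuming commandList is an ID3D11CommandList*.
// FinishCommandList creates a brand-new COM object on every call, so the
// returned pointer must be Released once the list has been executed.
ID3D11CommandList* commandList = nullptr;

deferredContext->FinishCommandList(TRUE, &commandList);
deviceContext->ExecuteCommandList(commandList, FALSE);

commandList->Release(); // drop the reference; otherwise one list leaks per frame
commandList = nullptr;

swapChain->Present(0, 0);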


Real-time 3D project collaboration

29 July 2014 - 04:31 PM

Hi!
I am a programmer and I am looking for someone who could collaborate with me [deleted by moderator]

Lighting problem in deferred rendering

18 July 2014 - 05:53 AM

Hi!

I have a lighting problem: the scene is lit, but part of it is unlit and I don't understand why. If I increase the light intensity, for example, the lit parts of the scene become brighter, but the part that looks shadowed also becomes better lit. There is no shadow mapping implemented in this program, and I can't even guess what kind of problem it could be.

This is deferred rendering, so in the image below the lighting is computed per pixel into a texture, which is then displayed on a fullscreen quad.

It doesn't seem to be the number of lights (there is just 1 in the scene); I tried with more lights and the problem is the same.

Would somebody have an idea what could cause this? If you have more questions about it, don't hesitate to ask!

 

Attached image: bugged DX11.png (539.5 KB)


Memory alignment problem (CPU and GPU)

13 July 2014 - 12:22 PM

Hi!

I work with DX11 and my memory alignment does not seem correct (I am on 64-bit Windows). This is what I have in the CPU-side program (variable declarations):

__declspec(align(16))
struct VertexInfo
{
	XMFLOAT4A positions; // 16 bytes
	XMFLOAT4A normals;   // 16 bytes
	XMFLOAT4A texCoords; // 16 bytes
};

And my input layout, also on the CPU side:

int indexLayout = 0;
D3D11_INPUT_ELEMENT_DESC layout[6];
	layout[indexLayout].SemanticName = "POSITION";
	layout[indexLayout].SemanticIndex = 0;
	layout[indexLayout].Format = DXGI_FORMAT_R32G32B32A32_FLOAT;//16 bytes
	layout[indexLayout].InputSlot = 0;
	layout[indexLayout].AlignedByteOffset = 0;
	layout[indexLayout].InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA;
	layout[indexLayout].InstanceDataStepRate = 0;
	indexLayout++ ;

	layout[indexLayout].SemanticName = "NORMAL";
	layout[indexLayout].SemanticIndex = 0;
	layout[indexLayout].Format = DXGI_FORMAT_R16G16B16A16_FLOAT;//8 bytes
	layout[indexLayout].InputSlot = 0;
	layout[indexLayout].AlignedByteOffset = D3D11_APPEND_ALIGNED_ELEMENT;
	layout[indexLayout].InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA;
	layout[indexLayout].InstanceDataStepRate = 0;
	indexLayout++ ;

	layout[indexLayout].SemanticName = "COLOR";
	layout[indexLayout].SemanticIndex = 0;
	layout[indexLayout].Format = DXGI_FORMAT_R8G8B8A8_UNORM;//DXGI_FORMAT_R32G32B32A32_FLOAT instead?? //4 bytes
	layout[indexLayout].InputSlot = 0;
	layout[indexLayout].AlignedByteOffset = D3D11_APPEND_ALIGNED_ELEMENT;
	layout[indexLayout].InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA;
	layout[indexLayout].InstanceDataStepRate = 0;
	indexLayout++ ;

	layout[indexLayout].SemanticName = "TEXCOORD";
	layout[indexLayout].SemanticIndex = 0;
	layout[indexLayout].Format = DXGI_FORMAT_R8G8B8A8_UNORM;//4 bytes
	layout[indexLayout].InputSlot = 0;
	layout[indexLayout].AlignedByteOffset = D3D11_APPEND_ALIGNED_ELEMENT;
	layout[indexLayout].InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA;
	layout[indexLayout].InstanceDataStepRate = 0;
	indexLayout++ ;

	layout[indexLayout].SemanticName = "SV_POSITION";
	layout[indexLayout].SemanticIndex = 0;
	layout[indexLayout].Format = DXGI_FORMAT_R32G32B32A32_FLOAT;//keep it at 32-bit? //16 bytes
	layout[indexLayout].InputSlot = 0;
	layout[indexLayout].AlignedByteOffset = D3D11_APPEND_ALIGNED_ELEMENT;
	layout[indexLayout].InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA;
	layout[indexLayout].InstanceDataStepRate = 0;
	indexLayout++ ;

	layout[indexLayout].SemanticName = "POSITION";
	layout[indexLayout].SemanticIndex = 1;
	layout[indexLayout].Format = DXGI_FORMAT_R32G32B32A32_FLOAT; //16 bytes
	layout[indexLayout].InputSlot = 0;
	layout[indexLayout].AlignedByteOffset = D3D11_APPEND_ALIGNED_ELEMENT;
	layout[indexLayout].InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA;
	layout[indexLayout].InstanceDataStepRate = 0;
	indexLayout++ ;

And this is my vertex shader input struct (the input parameters) in HLSL:

struct vsIn
{
	float4 position : POSITION;
	float4 normal   : NORMAL;
	float4 color    : COLOR0;
	float4 texCoord : TEXCOORD0;
};

Can somebody help me with this data alignment? It seems to be wrong... If you need more info/code, don't hesitate to ask!
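To make the size mismatch easier to see, here is how I count the bytes on each side (just my own arithmetic, written as a sketch; please correct me if the counting is wrong):

// CPU-side vertex: three 16-byte XMFLOAT4A fields = 48 bytes.
static_assert(sizeof(VertexInfo) == 48, "CPU vertex should be 48 bytes");

// GPU-side input layout, element by element (all in input slot 0,
// offsets via D3D11_APPEND_ALIGNED_ELEMENT):
//   POSITION0    DXGI_FORMAT_R32G32B32A32_FLOAT  16 bytes
//   NORMAL0      DXGI_FORMAT_R16G16B16A16_FLOAT   8 bytes
//   COLOR0       DXGI_FORMAT_R8G8B8A8_UNORM       4 bytes
//   TEXCOORD0    DXGI_FORMAT_R8G8B8A8_UNORM       4 bytes
//   SV_POSITION  DXGI_FORMAT_R32G32B32A32_FLOAT  16 bytes
//   POSITION1    DXGI_FORMAT_R32G32B32A32_FLOAT  16 bytes
//                                        total = 64 bytes per vertex
// So the layout describes 64 bytes per vertex while the CPU struct
// provides 48, and the HLSL vsIn struct only declares 4 of the 6 elements.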


Basic question about rendering to texture(s)

01 June 2013 - 03:14 AM

Hi!

Coming from DirectX 9, the API has changed, and I have had difficulty finding which D3D11 methods must be used to bind a render target as an input, and likewise as an output. I don't use .fx files at the moment.

 

PS: I render to 3 textures in my pixel shader. I'd like to find out how to bind render targets to slots 1 and 2, for example.
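From the documentation, I think the pair of calls is OMSetRenderTargets for outputs and PSSetShaderResources for inputs. Here is a sketch of what I mean (colorRTV, normalRTV, positionRTV, dsv and colorSRV are hypothetical views I would create for my three textures). Is this the right approach?

// Bind three render targets as outputs (slots 0, 1 and 2).
ID3D11RenderTargetView* rtvs[3] = { colorRTV, normalRTV, positionRTV };
deviceContext->OMSetRenderTargets(3, rtvs, dsv);

// ... draw the G-buffer pass here ...

// Unbind the targets before sampling them: a resource cannot be bound
// as an output and an input at the same time.
ID3D11RenderTargetView* nullRTVs[3] = { nullptr, nullptr, nullptr };
deviceContext->OMSetRenderTargets(3, nullRTVs, nullptr);

// Bind one of the textures as a pixel shader input (register t0).
deviceContext->PSSetShaderResources(0, 1, &colorSRV);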

