
korvax

Member Since 20 Jun 2012
Offline Last Active Apr 16 2014 11:04 AM

Topics I've Started

How is a system with render-dependent resources structured?

11 April 2014 - 06:03 AM

Hi, this question is more structural and code-design related.

Let's say you have some sort of engine (for lack of a better name) that has a render system. The render system could be OpenGL- or DX-based.

The engine has knowledge of some of the common render objects, like Shader, ShaderView, meshes and so on, but it does not know what an OpenGL or DX shader is. How do the more professional systems handle this?

 

I can think of two solutions. One is that the render system keeps a list of all DX shaders and each engine Shader holds an ID into that list (alt 1).

The other is that OpenGLShader would be a subclass of the engine Shader and gets cast to an OpenGLShader every time it is used in the render system (alt 2).

 
(alt 1)

void OpenGL::ShaderInput(EngineShader* pShader)
{
	// Look up the backend shader that matches the engine-side handle.
	OpenGLShader* pOpenGLShader = GetOpenGLShader(pShader->ID);
}

 

(alt 2)

void OpenGL::ShaderInput(EngineShader* pShader)
{
	// Downcast the engine object to the backend subclass.
	OpenGLShader* pOpenGLShader = static_cast<OpenGLShader*>(pShader);
	DoSomething(pOpenGLShader);
}

 

I just think both solutions seem slow and maybe a bit wasteful.

Any thoughts, please?
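For what it's worth, alt 2 can be kept safe when the backend that created the object is the only code that downcasts it. A minimal sketch; all names here (EngineShader, OpenGLShader, OpenGLBackend, the handle value) are illustrative, not from any real engine:

```cpp
#include <cassert>
#include <memory>

// Engine-level interface: the engine only ever sees this type.
struct EngineShader {
    virtual ~EngineShader() = default;
};

// Backend-specific subclass (alt 2). Only the OpenGL backend sees this.
struct OpenGLShader : EngineShader {
    unsigned glHandle = 0;   // would hold a glCreateProgram() id
};

struct OpenGLBackend {
    // The backend created the object, so the downcast below is safe
    // by construction: every EngineShader it receives back is one of its own.
    std::unique_ptr<EngineShader> CreateShader() {
        auto s = std::make_unique<OpenGLShader>();
        s->glHandle = 42;    // placeholder for real GL calls
        return s;
    }
    unsigned BindShader(EngineShader* shader) {
        auto* gl = static_cast<OpenGLShader*>(shader);
        return gl->glHandle; // would call glUseProgram(gl->glHandle)
    }
};
```

The cast itself is essentially free at runtime; the cost people worry about in alt 1 is the table lookup, which a plain array index also makes cheap.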


Deferred render structure

30 October 2013 - 03:34 PM

Hi,

 

I render each object to the G-buffer; after that I should render each light in the scene to the light buffer, and the results should be blended so the light buffer is a blended combination of all lights in the scene, correct?

 

So how do I blend the light-buffer render passes? Is this something that I should program on the shader side or on the CPU side? And what would a typical, very simple light buffer look like? Should that just be a pixel shader?

 

Also, my setup currently works fine for one light (:-P), and it goes something like this:

 

1) Render all objects to the G-buffer one by one; each object has the G-buffer shader as its primary shader.

2) Render a quad with the light-buffer shader and a light as input. If I started rendering 1000 lights I can't render the quad 1000 times, so I should somehow just run the "light shader" 1000 times without any geometry, and then as a third, final step render a quad and apply the blended result from the light shader to that quad. How would I do that?
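One common answer is that the blending is configured once on the CPU side (in D3D11, a blend state with SrcBlend = D3D11_BLEND_ONE, DestBlend = D3D11_BLEND_ONE, BlendOp = D3D11_BLEND_OP_ADD) and each light is then drawn as its own full-screen pass into the light buffer. A small CPU-side sketch of what that additive accumulation computes per pixel; the Color type and function name are illustrative, not from the post:

```cpp
#include <cassert>
#include <vector>

// Additive light accumulation, simulated on the CPU for clarity.
// On the GPU the same sum comes from rendering one full-screen pass per
// light with additive blending enabled: each pass writes
// lightColor * irradiance * diffuse, and the blend unit adds it to what
// is already in the light buffer.
struct Color { float r, g, b; };

Color AccumulateLights(const std::vector<Color>& perLightContributions) {
    Color total{0.0f, 0.0f, 0.0f};                  // light buffer cleared to black
    for (const Color& c : perLightContributions) {  // one render pass per light
        total.r += c.r;                             // blend unit: dest = dest + src
        total.g += c.g;
        total.b += c.b;
    }
    return total;
}
```

With that in place, rendering 1000 lights really is drawing the quad 1000 times (or better, bounding volumes per light); the blend state does the combining, so no extra "third pass" is needed to merge results.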

// Lightbuffer
//--------------------------------------------------------------------------------
Texture2D txDiffuse : register(t0);
Texture2D txNormal : register(t1);
Texture2D txDepth : register(t2);
Texture2D txSpecular : register(t3);
//--------------------------------------------------------------------------------
SamplerState samLinear0 : register( s0 );
SamplerState samLinear1 : register( s1 );
SamplerState samLinear2 : register( s2 );
SamplerState samLinear3 : register( s3 );
//--------------------------------------------------------------------------------
cbuffer cbLight : register(b0)
{
	float4 lightDirection;
	float4 lightColor;
	float4 lightRange;	
}
//--------------------------------------------------------------------------------
struct PS_INPUT
{
	float4 position : SV_POSITION;
	float2 texcoord : TEXCOORD0;
};
//--------------------------------------------------------------------------------
float4 main(PS_INPUT input) : SV_Target
{
	float4 diffuse = txDiffuse.Sample(samLinear0, input.texcoord);
	float4 normal = txNormal.Sample(samLinear1, input.texcoord);
	float4 depth = txDepth.Sample(samLinear2, input.texcoord);       // not used yet
	float4 specular = txSpecular.Sample(samLinear3, input.texcoord); // not used yet
	float irradiance = saturate(dot(normal.xyz, -lightDirection.xyz));
	return lightColor * irradiance * diffuse;
}


// G-BUFFER
Texture2D txDiffuse : register( t0 );
SamplerState samLinear : register( s0 );

cbuffer cbPerObject : register(b0)
{
	float4 diffuse;
	float4 specular;
	bool isTextured;	
}

//--------------------------------------------------------------------------------------
struct PS_INPUT
{
	float4 position : SV_POSITION;
	float2 texcoord : TEXCOORD0;
	float4 normal   : NORMAL;
};
//--------------------------------------------------------------------------------------
struct PSOutput
{
	float4 Color    : SV_Target0;
	float4 Normal   : SV_Target1;
	float4 Depth    : SV_Target2;
	float4 Specular : SV_Target3;
};
//--------------------------------------------------------------------------------------
// Pixel Shader
//--------------------------------------------------------------------------------------
PSOutput main(PS_INPUT input)  
{
	PSOutput output;
	output.Color = diffuse*txDiffuse.Sample(samLinear, input.texcoord);
	output.Normal = normalize(input.normal);
	output.Specular = specular;
	output.Depth =  input.position.z / input.position.w;
	return output;
}

Any help is much appreciated.


Deferred render output, repeated

29 October 2013 - 02:30 PM

Hi,

I have the basics of a simple deferred renderer up and running. I render the end result on a "full screen quad" with the output as a ShaderView; not sure if this is the best way to do it, and if not, what is? My problem, though, as you can see in the picture, is that the result is repeated and not stretched out. I think this has something to do with the texture or the ShaderView, but I can't find the problem.

 

This is my Quad.

UINT uFullWidth = 1024;
UINT uFullHeight = 768;
Vertex vertices_fullquad[] = {
{ XMFLOAT3(-1.0f*uFullWidth, 1.0f*uFullHeight, 0.0f), XMFLOAT2(0.0f, 0.0f), XMFLOAT3(0.0f, 1.0f, 0.0f) },
{ XMFLOAT3(1.0f*uFullWidth, 1.0f*uFullHeight, 0.0f), XMFLOAT2(1.0f*uFullWidth, 0.0f), XMFLOAT3(0.0f, 1.0f, 0.0f) },
{ XMFLOAT3(-1.0f*uFullWidth, -1.0f*uFullHeight, 0.0f), XMFLOAT2(0.0f, 1.0f*uFullHeight), XMFLOAT3(0.0f, 1.0f, 0.0f) },
{ XMFLOAT3(1.0f*uFullWidth, -1.0f*uFullHeight, 0.0f), XMFLOAT2(1.0f*uFullWidth, 1.0f*uFullHeight), XMFLOAT3(0.0f, 1.0f, 0.0f) },
};

TArray<WORD> quad_indices;
quad_indices.add(0); quad_indices.add(1); quad_indices.add(3); quad_indices.add(0); quad_indices.add(2); quad_indices.add(3);
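As an assumption about the cause: a repeated image is exactly what texture coordinates larger than 1 produce under a WRAP addressing mode, and the quad above scales both positions and texcoords by the back-buffer size. A conventional full-screen quad keeps clip-space positions in [-1, 1] and texcoords in [0, 1] with no width/height scaling; a sketch, where Vec3/Vec2/QuadVertex are stand-ins for XMFLOAT3/XMFLOAT2 and the original Vertex type:

```cpp
#include <cassert>

// Conventional full-screen quad: clip-space positions in [-1, 1],
// texture coordinates in [0, 1]. No width/height scaling is needed;
// with coordinates above 1 a WRAP-mode sampler repeats the texture,
// producing a tiled instead of stretched result.
struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };
struct QuadVertex { Vec3 pos; Vec2 tex; };

static const QuadVertex kFullScreenQuad[4] = {
    { {-1.0f,  1.0f, 0.0f}, {0.0f, 0.0f} },  // top-left
    { { 1.0f,  1.0f, 0.0f}, {1.0f, 0.0f} },  // top-right
    { {-1.0f, -1.0f, 0.0f}, {0.0f, 1.0f} },  // bottom-left
    { { 1.0f, -1.0f, 0.0f}, {1.0f, 1.0f} },  // bottom-right
};
```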

RenderTarget Texture

	ID3D11Texture2D* pTexture = nullptr;
	// Create a render target view
	D3D11_TEXTURE2D_DESC descTarget;
	ZeroMemory(&descTarget, sizeof(descTarget));
	descTarget.Width = uWidth;
	descTarget.Height = uHeight;
	descTarget.MipLevels = 1;
	descTarget.ArraySize = 1;
	descTarget.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
	descTarget.SampleDesc.Count = 1;
	descTarget.Usage = D3D11_USAGE_DEFAULT;
	descTarget.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
	descTarget.CPUAccessFlags = 0;
	descTarget.MiscFlags = 0;
	HRESULT hr = m_pDevice->CreateTexture2D(&descTarget, nullptr, &pTexture);

ShaderView

	pTexture->GetDesc(&descTarget);
	D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
	ZeroMemory(&srvDesc, sizeof(srvDesc));
	srvDesc.Format = descTarget.Format;
	srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
	srvDesc.Texture2D.MostDetailedMip = 0;
	srvDesc.Texture2D.MipLevels = 1;

Light-Shader

//----------------------------------------------------------------------------------------------
Texture2D txDiffuse : register(t0);
Texture2D txNormal : register(t1);
Texture2D txDepth : register(t2);
Texture2D txSpecular : register(t3);
//----------------------------------------------------------------------------------------------
SamplerState samLinear0 : register( s0 );
SamplerState samLinear1 : register( s1 );
SamplerState samLinear2 : register( s2 );
SamplerState samLinear3 : register( s3 );
//----------------------------------------------------------------------------------------------
cbuffer cbLight : register(b0)
{
	float4 lightDirection;
	float4 lightColor;
	float4 lightRange;	
}
//---------------------------------------------------------------------------------------------------------------------
struct PS_INPUT
{
	float4 position : SV_POSITION;
	float2 texcoord : TEXCOORD0;
};
//---------------------------------------------------------------------------------------------------------------------
float4 main(PS_INPUT input) : SV_Target
{
	float4 diffuse = txDiffuse.Sample(samLinear0, input.texcoord);
	float4 normal = txNormal.Sample(samLinear1, input.texcoord);
	float4 depth = txDepth.Sample(samLinear2, input.texcoord);       // Not used currently
	float4 specular = txSpecular.Sample(samLinear3, input.texcoord); // Not used currently
	float irradiance = saturate(dot(normal.xyz, -lightDirection.xyz));
	return lightColor * irradiance * diffuse;
}

G-Buffer

Texture2D txDiffuse : register( t0 );
SamplerState samLinear : register( s0 );
cbuffer cbPerObject : register(b0)
{
	float4 diffuse;
	float4 specular;
	bool isTextured;	
}

//--------------------------------------------------------------------------------------
struct PS_INPUT
{
	float4 position : SV_POSITION;
	float2 texcoord : TEXCOORD0;
	float4 normal   : NORMAL;
};
//--------------------------------------------------------------------------------------
struct PSOutput
{
	float4 Color    : SV_Target0;
	float4 Normal   : SV_Target1;
	float4 Depth    : SV_Target2;
	float4 Specular : SV_Target3;
};
//--------------------------------------------------------------------------------------
// Pixel Shader
//--------------------------------------------------------------------------------------
PSOutput main(PS_INPUT input)  
{
	PSOutput output;
	output.Color = diffuse*txDiffuse.Sample(samLinear, input.texcoord);
	output.Normal = normalize(input.normal);
	output.Specular = specular;
	output.Depth =  input.position.z / input.position.w;
	return output;
}

How to structure Sampler States within the system

24 October 2013 - 12:55 AM

Hi,

this is more a question regarding architecture, or how I should structure my system. In a more realistic production render system, is it common to use multiple sampler states at the same time, or is it just the same one being used multiple times?

 

As it is now, I have a Renderable3D class that has a Material. A Material can have both a Texture class (ShaderView) and a Shader class.

As of now I have some different shaders (all of this is for testing and my own learning), and they can have multiple sampler states bound to them.

So should I structure it so that each Shader class has one or more sampler states, or should sampler states be a more global setting in the system? With the second, global option it will be hard to specify which sampler state each shader should be using; not sure if that's a common problem?

 

All thoughts, feedback, or real-life examples are much appreciated.
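One common middle ground is a global sampler cache: sampler states are immutable once created, so identical descriptions can share one object, and each shader simply asks the cache for the sampler it needs. A minimal sketch with stand-in types; SamplerDesc, SamplerState, and SamplerCache are hypothetical simplifications of D3D11_SAMPLER_DESC / ID3D11SamplerState:

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <tuple>

// Hypothetical stand-ins for D3D11_SAMPLER_DESC / ID3D11SamplerState.
struct SamplerDesc {
    int filter = 0;
    int addressMode = 0;
    bool operator<(const SamplerDesc& o) const {
        return std::tie(filter, addressMode) < std::tie(o.filter, o.addressMode);
    }
};
struct SamplerState { SamplerDesc desc; };

// Global-ish cache: shaders request a sampler by description, and
// identical descriptions share one state object instead of duplicating it.
class SamplerCache {
public:
    SamplerState* Get(const SamplerDesc& desc) {
        auto it = cache_.find(desc);
        if (it == cache_.end())
            it = cache_.emplace(desc,
                    std::make_unique<SamplerState>(SamplerState{desc})).first;
        return it->second.get();
    }
private:
    std::map<SamplerDesc, std::unique_ptr<SamplerState>> cache_;
};
```

This keeps the "which sampler does this shader use" decision local to the shader (it owns a description) while the actual state objects stay global and deduplicated.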


Strange heap problem when deleting an array

08 October 2013 - 06:47 AM

Hello

I have an array

int * i = new int[1000];

 

but when I'm trying to delete it, I get a failed assertion saying that the heap is corrupted:

 

delete[] i;

 

 

I realize that this is probably not the best way to use an array, but I want to know how to fix this particular problem.

I read somewhere that you can just set i = 0; and then delete it, but that wouldn't free any memory; it would just be deleting a null pointer.

 

Anyone, please?
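For context, a heap-corruption assertion at delete[] usually means the corruption happened earlier and is only detected at deallocation. A small self-contained sketch (the SumFirstN function is hypothetical) showing the correct new[]/delete[] pairing, with the usual cause noted in comments:

```cpp
#include <cassert>

// delete[] itself is fine for memory from new[]; "heap corrupted" at
// delete[] almost always means something earlier wrote outside the
// allocation, e.g.:
//   int* i = new int[1000];
//   i[1000] = 7;      // one past the end -- corrupts heap bookkeeping
//   delete[] i;       // corruption is *detected* here, not caused here
// Setting i = 0 before delete makes the delete a no-op and leaks the memory.

int SumFirstN(int n) {
    int* a = new int[n];              // matching new[] ...
    for (int k = 0; k < n; ++k)       // valid indices are 0 .. n-1 only
        a[k] = k;
    int sum = 0;
    for (int k = 0; k < n; ++k)
        sum += a[k];
    delete[] a;                       // ... and delete[]
    return sum;                       // std::vector<int> avoids all of this
}
```

So the fix is to find the out-of-bounds write (debug-heap tools or bounds-checked containers help), not to change how the delete is done.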

 

