

Corefanatic

Member Since 09 Apr 2010
Offline Last Active Feb 18 2014 08:06 AM

Topics I've Started

Normal Interpolation issues on my generated terrain

22 January 2013 - 05:10 PM

Hi all,
I am having a weird problem with normals on my generated terrain. I am not sure whether it is a shader issue or a mesh issue, but here is how it looks:

[Screenshot: Dx11FW 2013-01-22 22-51-45-78.png]

As you can see, I get this pattern along the edges of the triangles. It reminds me of per-vertex shading; however, I am aiming for per-pixel shading.

Here are my vertex and pixel shaders:

VS:

cbuffer cbToProjection
{
	float4x4 matToProj;
}

struct VS_IN
{
	float4 Position : POSITION;
	float3 Normal: NORMAL;
};

struct VS_OUT
{
	float4 Position : SV_POSITION;
	float3 NormalWS: TEXCOORD1;
};

VS_OUT main(VS_IN IN)
{
	VS_OUT OUT;
	OUT.Position = mul(IN.Position,matToProj);
	OUT.NormalWS = normalize(IN.Normal);
	return OUT;
}

PS:

struct PS_IN
{
	float4 Position : SV_POSITION;
	float3 NormalWS: TEXCOORD1;
};

float4 main(PS_IN IN): SV_TARGET0
{
	float3 normal = normalize(IN.NormalWS);
	float3 toLight = normalize(float3(1,3,-2));
	float NDotL = saturate(dot(toLight,normal));
	float4 color = float4(1.0f,1.0f,1.0f,1.0f);
	color.rgb *= NDotL;
	return color;
}

So what am I doing wrong?
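
Edit: for reference, the faceting along triangle edges looks to me like what flat (per-face) normals would produce, so below is the standard smooth-normal generation for an indexed triangle list that I am comparing my generator against. This is a simplified sketch; Vec3, the helpers and ComputeSmoothNormals are illustrative names, not my actual terrain code.

#include <cmath>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 Sub(const Vec3& a, const Vec3& b)
{
	return Vec3{ a.x - b.x, a.y - b.y, a.z - b.z };
}

static Vec3 Cross(const Vec3& a, const Vec3& b)
{
	return Vec3{ a.y * b.z - a.z * b.y,
	             a.z * b.x - a.x * b.z,
	             a.x * b.y - a.y * b.x };
}

// Smooth per-vertex normals for an indexed triangle list: accumulate each
// face's (unnormalized) normal into its three vertices, then normalize.
// Shared vertices end up with the average of the surrounding face normals,
// which is what per-pixel lighting needs to avoid faceting at the edges.
std::vector<Vec3> ComputeSmoothNormals(const std::vector<Vec3>& positions,
                                       const std::vector<std::uint32_t>& indices)
{
	std::vector<Vec3> normals(positions.size(), Vec3{ 0.0f, 0.0f, 0.0f });
	for (std::size_t i = 0; i + 2 < indices.size(); i += 3)
	{
		const Vec3& p0 = positions[indices[i]];
		const Vec3& p1 = positions[indices[i + 1]];
		const Vec3& p2 = positions[indices[i + 2]];
		const Vec3 n = Cross(Sub(p1, p0), Sub(p2, p0)); // face normal
		for (int k = 0; k < 3; ++k)
		{
			Vec3& vn = normals[indices[i + k]];
			vn.x += n.x; vn.y += n.y; vn.z += n.z;
		}
	}
	for (Vec3& n : normals) // normalize the accumulated sums
	{
		const float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
		if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
	}
	return normals;
}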


cbuffer lifetime in the pipeline

17 March 2012 - 09:34 AM

Hi there,
A quick question: how long does a constant buffer binding last within a frame? Let's assume all my vertex shaders have the same set of constant buffers:

cbuffer cbPerObject
{
	float4x4 matWorld;
}

cbuffer cbPerFrame
{
	float4x4 matView;
	float4x4 matProj;
}

At the moment I naturally set the perObject cb with every object I draw, and the perFrame cb with each new vertex shader I set to the pipeline. What I am wondering is: if my shaders all have the same layout, will the perFrame cb stay bound from one vertex shader to the next?
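
Edit: to make the question concrete, here is roughly the frame structure I have in mind. This is a simplified sketch, not my actual code; the function signature and variable names are illustrative, and the b0/b1 register assignments are an assumption (worth verifying via shader reflection when the HLSL has no explicit register() bindings).

#include <d3d11.h>

void DrawFrame(ID3D11DeviceContext* ctx,
               ID3D11Buffer* cbPerFrame, ID3D11Buffer* cbPerObject,
               ID3D11VertexShader* vsA, ID3D11VertexShader* vsB,
               UINT indexCountA, UINT indexCountB)
{
	// Set once per frame: view/projection matrices in slot b1.
	ctx->VSSetConstantBuffers(1, 1, &cbPerFrame);
	// Slot b0: world matrix, whose contents are updated per draw via Map/Unmap.
	ctx->VSSetConstantBuffers(0, 1, &cbPerObject);

	ctx->VSSetShader(vsA, NULL, 0);
	// ... Map/Unmap cbPerObject with the first object's matWorld ...
	ctx->DrawIndexed(indexCountA, 0, 0);

	// The behaviour I am asking about: if the bindings are device-context
	// state rather than shader state, cbPerFrame should still be bound here
	// after switching shaders, with no second VSSetConstantBuffers call.
	ctx->VSSetShader(vsB, NULL, 0);
	// ... Map/Unmap cbPerObject with the second object's matWorld ...
	ctx->DrawIndexed(indexCountB, 0, 0);
}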

Need advice on mesh format

04 March 2012 - 12:26 PM

Hi all,
for some time now I have been exporting meshes from Maya into my own binary format using my own plugin. The format is simply a list of vertices and an index buffer.

Now I am thinking of upgrading and I ran into a few obstacles.

The first is that internally, Maya stores meshes in a format similar to the OBJ file format. Until now, when exporting, I have iterated over each face's vertices and their indices into the arrays of vertex positions, UV coordinates and normals. I save each unique combination of these indices and give it a unique integer for building the index buffer. If my exporter encounters the same combination again, it simply adds that combination's index to the index buffer and does not add a new vertex to the vertex buffer, saving space since it is a shared vertex.
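
In code, the deduplication is roughly this. A simplified sketch, not my actual exporter; FaceVertex and BuildBuffers are illustrative names.

#include <cstdint>
#include <map>
#include <tuple>
#include <vector>

// One face corner: indices into Maya's position/UV/normal arrays.
using FaceVertex = std::tuple<int, int, int>; // (posIdx, uvIdx, normalIdx)

// Each unique (pos, uv, normal) combination becomes one output vertex;
// repeated combinations only add an entry to the index buffer.
void BuildBuffers(const std::vector<FaceVertex>& faceCorners,
                  std::vector<FaceVertex>& outVertices,
                  std::vector<std::uint32_t>& outIndices)
{
	std::map<FaceVertex, std::uint32_t> seen;
	for (const FaceVertex& fv : faceCorners)
	{
		auto it = seen.find(fv);
		if (it == seen.end())
		{
			// First occurrence: assign the next vertex-buffer slot.
			it = seen.emplace(fv, static_cast<std::uint32_t>(outVertices.size())).first;
			outVertices.push_back(fv);
		}
		outIndices.push_back(it->second); // new or shared, always indexed
	}
}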

The problem I ran into there is that unless the two faces sharing a vertex are coplanar, the same vertex has multiple normals, one per face it belongs to. The best example is a cube, where each corner vertex has three different normals.

As you can imagine, in some cases this can result in a vertex buffer larger than the original number of vertices in the model, and an index buffer that never references a vertex twice.

I thought of calculating a single normal for each vertex by averaging the normals of the faces that share it; however, I can imagine this resulting in wrong normals, especially where hard edges are required.

Now I am thinking of using multiple buffers, i.e. one containing positions and UVs and a second containing normals. However, I am not sure how this would work, since I would need two index buffers: one indexing the unique positions and one indexing the unique normals.

Maybe I am overthinking this, but surely there must be a better way than what I am doing right now.

Thank you all for your help.

Depth Buffer questions

20 February 2012 - 07:35 PM

Hi all,
I have been programming graphics for some time now, and recently a few questions have been bugging me at the back of my mind:

1) When does the depth comparison for each fragment occur (in order for it to be potentially rejected)? Before or after its value is calculated?

2) I know different vendors have different tech for early-z rejection, but I can't find any recent documents on how to trigger it, or any general good-practice guides. Does anyone know a good place to start?

3) Is it still beneficial to render opaque geometry from front to back?

I think that's all for now; hopefully your answers will fill a few holes in my knowledge :)

DX11: fail on Create Shader Resource View From Memory

20 February 2012 - 03:42 PM

Hi there,
I have a small problem. I have an array of floats and I need to create a dynamic, CPU-writable 2D texture from it. For some reason I am getting E_FAIL.

Here's the code:


m_renderBuffer = new float[m_width*m_height*4];
memset((void*)m_renderBuffer, 0, sizeof(float)*4*m_height*m_width);

D3DX11_IMAGE_LOAD_INFO info;

info.Width = m_width;
info.Height = m_height;
info.Depth = 1;
info.FirstMipLevel = 0;
info.MipLevels = 1;
info.BindFlags = D3D11_BIND_SHADER_RESOURCE;
info.CpuAccessFlags = D3D11_CPU_ACCESS_WRITE;
info.MipFilter = D3DX11_FILTER_NONE;
info.MiscFlags = 0;
info.Usage = D3D11_USAGE_DYNAMIC;
info.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;

HRESULT result = D3DX11CreateShaderResourceViewFromMemory(g_Renderer.GetD3D11Device(),
                                                          (void*)m_renderBuffer,
                                                          sizeof(float)*4*m_height*m_width,
                                                          &info,
                                                          NULL,
                                                          &m_d3d11Texture,
                                                          NULL);



Thx for any help ;)
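
Edit: for comparison, below is the direct CreateTexture2D path for raw texel data. I believe the D3DX "FromMemory" loaders expect an image file in memory (e.g. a DDS or PNG) rather than raw texels, which could explain the E_FAIL, but I would welcome confirmation. A simplified sketch; CreateFloatTexture is an illustrative name and error handling is trimmed.

#include <d3d11.h>

HRESULT CreateFloatTexture(ID3D11Device* device, UINT width, UINT height,
                           const float* texels,
                           ID3D11Texture2D** outTex,
                           ID3D11ShaderResourceView** outSRV)
{
	// Dynamic, CPU-writable RGBA32F texture, single mip level.
	D3D11_TEXTURE2D_DESC desc = {};
	desc.Width = width;
	desc.Height = height;
	desc.MipLevels = 1;
	desc.ArraySize = 1;
	desc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
	desc.SampleDesc.Count = 1;
	desc.Usage = D3D11_USAGE_DYNAMIC;
	desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
	desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;

	// Initial contents: the raw float array, four floats per texel.
	D3D11_SUBRESOURCE_DATA init = {};
	init.pSysMem = texels;
	init.SysMemPitch = static_cast<UINT>(width * 4 * sizeof(float));

	HRESULT hr = device->CreateTexture2D(&desc, &init, outTex);
	if (FAILED(hr))
		return hr;
	return device->CreateShaderResourceView(*outTex, NULL, outSRV);
}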
