

AdeptStrain

Member Since 19 Mar 2012
Offline Last Active Apr 10 2015 08:47 PM

Posts I've Made

In Topic: Compute/DrawIndirect question

19 January 2015 - 01:12 PM

One other question, MJP. If you have a buffer bound as an AppendStructuredBuffer in one shader, can it be a simple read-only StructuredBuffer in another shader, or do you have to bind it as an RWStructuredBuffer even if you have no intention of writing to it?
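For concreteness, the binding pattern being asked about looks something like this (the struct and register slots are hypothetical, not taken from any posted shader): the producing shader writes through an append UAV, while a consuming shader that only reads can declare the same underlying resource as a plain StructuredBuffer SRV, as long as the two views aren't bound at the same time.

// Producer (e.g. a compute shader): writes through an append UAV.
struct GrassInstance
{
	float3 Position;
	float  Rotation;
};

AppendStructuredBuffer<GrassInstance> InstancesOut : register(u0);

// Consumer (a later shader that only reads): a plain SRV over the same buffer.
StructuredBuffer<GrassInstance> InstancesIn : register(t0);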


In Topic: Compute/DrawIndirect question

18 January 2015 - 12:40 PM

Ah, I see. So you use the SV_VertexID (which I assume comes from the draw indirect params) as an index into the structured buffer, grab the data, and pass it along the pipeline from there. That makes sense. Thanks MJP.
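A minimal sketch of that approach (struct, buffer, and matrix names are hypothetical): the vertex shader has no vertex-buffer input at all; SV_VertexID, driven by the vertex count written into the indirect args, indexes straight into the structured buffer.

struct BladeData
{
	float3 Position;
	float3 Colour;
};

StructuredBuffer<BladeData> Blades : register(t0);
float4x4 ViewProjection;

struct VSOut
{
	float4 Position : SV_POSITION;
	float3 Colour   : TEXCOORD0;
};

VSOut IndirectVS(uint vertexId : SV_VertexID)
{
	BladeData data = Blades[vertexId]; // SV_VertexID drives the fetch
	VSOut output;
	output.Position = mul(ViewProjection, float4(data.Position, 1.0f));
	output.Colour   = data.Colour;
	return output;
}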


In Topic: Terrain tesselation and collision

16 December 2014 - 11:27 AM

As Kuana said, keep a fully detailed mesh (heightmap or whatever) as your physics mesh, separate from your visual mesh, and keep it loaded for as long as anything can interact with it.

 

As for lighting and shadows: as long as they work off the depth buffer/lighting buffer (or whatever your pipeline uses), they should be fine.


In Topic: SV_IsFrontFace and geometry created from points.

29 October 2014 - 07:47 PM


Edit: Looking more closely at your code, I'm concerned about one thing. You're not using consistent transformations: you use both mul(matrix, vector) and mul(vector, matrix). That isn't a problem per se, since HLSL allows both; it just makes it confusing to know when to transpose on the app side for the cbuffer matrices (or when to tag them with row_major/column_major in HLSL). Keep it consistent, ideally throughout all your shaders.

 

 

Yeah, I've been trying to be stricter about using mul(matrix, vector); I agree that jumping between them is bad form. I should point out I haven't done any cleanup in this code, so I'm sure some things are a bit rough.
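As a small illustration of the convention issue (the cbuffer and function names here are hypothetical, not from the shader below): the two call orders are related by a transpose, which is why mixing them makes it hard to know when the app must transpose matrices before upload, or when to tag a field with row_major.

cbuffer PerObject
{
	// Default cbuffer packing is column_major; tag the field row_major instead
	// if the app uploads row-major matrices without transposing them.
	float4x4 ObjectToClip;
};

float4 TransformColumnVector(float3 positionOS)
{
	// Treats the position as a column vector: p' = M * p.
	return mul(ObjectToClip, float4(positionOS, 1.0f));
}

float4 TransformRowVector(float3 positionOS)
{
	// Row-vector form; equals the column-vector form only via a transpose:
	// mul(v, M) == mul(transpose(M), v).
	return mul(float4(positionOS, 1.0f), ObjectToClip);
}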

 

Here's the image showing the behavior in question:

 

Attached file: gLighting.png (643.36KB)

 

You can see there are some very dark fins parallel to lit ones. I agree this is most likely a normal problem; I've attached all the shader code below, but the normal calculation is pretty simple (unless I have my vert indices wrong...). I'd love a simple way to draw the normals, but since this geometry is created in the shader I haven't thought of a great way to do that yet.
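One low-effort option (a sketch with made-up names, assuming the same Normal interpolant and SV_IsFrontFace input as the pixel shader below) is to skip extra debug geometry entirely and temporarily write the face-corrected normal into the colour target, so the normals can be read straight off the screen or in RenderDoc:

struct NormalDebugInput
{
	float4 Position : SV_POSITION;
	float3 Normal   : TEXCOORD2;
};

float4 NormalDebugPS(NormalDebugInput In, bool isFrontFace : SV_IsFrontFace) : SV_Target0
{
	// Flip on back faces, then remap [-1, 1] to [0, 1] so the normal shows up as a colour.
	float3 normal = normalize(isFrontFace ? In.Normal : -In.Normal);
	return float4(normal * 0.5f + 0.5f, 1.0f);
}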

 

Shader:

// Un-tweakables
float4x4 World					: World;
float4x4 WorldView				: WorldView;
float4x4 WorldViewProjection	: WorldViewProjection;
float Time						: Time;
float3 EyePosition; // Camera position in world space (assumed declaration; referenced below)

// Textures
Texture2D<float4> HeightmapTexture;
Texture2D<float4> FoliageMapTexture;
Texture2D<float4> DiffuseMapTexture;
Texture2D<float4> NormalMapTexture;
Texture2D<float4> NoiseTexture;

//Tweakables
float MaxHeight;
uint HeightmapTextureWidth;
uint HeightmapTextureHeight;
float HeightmapVertexStride;
float4x4 TerrainWorldMatrix;
float GrassHeight;
float GrassWidth;


// Static Values
static const float  MaxDistance = 512.0f;
static const uint   TotalLODSelections = 10;
static const uint   LODSelection[TotalLODSelections] = { 0, 1, 1, 2, 2, 3, 3, 3, 3, 3 }; // probably should replace this with some inverse log N value.
static const uint   TotalLODs = 4;
static const int    NumSegs[TotalLODs] = { 5, 3, 2, 1 };


// Structures
struct VSInput
{
	float3 Position		: POSITION;
	float4 Uv : TEXCOORD0;
};

struct VSOutput
{
	float4 Position			  : SV_POSITION;
	float2 Uv				  : TEXCOORD0;
	float4 WorldPositionDepth : TEXCOORD1;
	float3 Normal  : TEXCOORD2;
};

// G-buffer output of the deferred pixel shader (layout assumed from its usage below).
struct PSDeferredOutput
{
	float4 Colour		: SV_Target0; // Albedo.xyz, Emissiveness
	float4 NormalDepth	: SV_Target1; // Normal.xyz, Gloss
};

sampler LinearClampSampler
{
	Filter = Min_Mag_Mip_Linear;
	AddressU = Clamp;
	AddressV = Clamp;
};

sampler LinearWrapSampler
{
	Filter = Min_Mag_Mip_Linear;
	AddressU = Wrap;
	AddressV = Wrap;
};

sampler PointClampSampler
{
	Filter = Min_Mag_Mip_Point;
	AddressU = Clamp;
	AddressV = Clamp;
};

// Helper Methods
float GetHeightmapValue(float2 uv)
{
	float4 rawValue = HeightmapTexture.SampleLevel(PointClampSampler, uv, 0);
	return rawValue.r * MaxHeight;
}


float3 CalculateNormal(float3 v0, float3 v1, float3 v2)
{
	float3 edgeOne = normalize(v1 - v0);
	float3 edgeTwo = normalize(v2 - v0);
	
	return cross(edgeOne, edgeTwo);
}

float4 GetCornerHeights(float2 uv)
{
	float texelWidth = 1.0f / (float)(HeightmapTextureWidth + 1);
	float texelHeight = 1.0f / (float)(HeightmapTextureHeight + 1);

	float2 topRight = uv + float2(texelWidth, 0.0f);
	float2 bottomLeft = uv + float2(0.0f, texelHeight);
	float2 bottomRight = uv + float2(texelWidth, texelHeight);

	float x = GetHeightmapValue(uv);
	float y = GetHeightmapValue(topRight);
	float z = GetHeightmapValue(bottomLeft);
	float w = GetHeightmapValue(bottomRight); 
	
	float4 returnValue = float4(x,y,z,w);
	
	return returnValue;
}

float3 GetGrassCellOffset(VSInput input)
{
	float4 cornerHeights = GetCornerHeights(input.Uv.xy);

	// TODO: FIX ME
	// Bilinear interpolation between heightmap cells.
	float yPos = lerp(lerp(cornerHeights.x, cornerHeights.y, input.Uv.z),
					  lerp(cornerHeights.z, cornerHeights.w, input.Uv.z), input.Uv.w);

	float3 returnPos = input.Position.xyz;
	returnPos.y = yPos;

	return returnPos;
}

uint CalculateLOD(float3 distanceToCamera)
{
	uint LODIndex = (uint)floor(distanceToCamera * 10.0f);
	LODIndex = clamp(LODIndex, 0, TotalLODSelections - 1);
	uint totalSegments = NumSegs[LODSelection[LODIndex]];
	return totalSegments;
}

float2 CalculateGrassDirection(VSInput IN)
{
	// Random seed is stored in the Y Component.
	float rotRadians = IN.Position.y;
	float2 buildDirection = { 1.0f, 0.0f };
	float cosV = cos(rotRadians);
	float sinV = sin(rotRadians);
	float2x2 rotMatrix = { cosV, -sinV,
						   sinV, cosV };
	buildDirection = mul(rotMatrix, buildDirection);
	return buildDirection;
}

VSInput DeferredRenderVS(VSInput IN)
{
	// TODO: Add an early out if our foliage map returns 0.0f.
	VSInput outVS = (VSInput)0;

	outVS.Position = IN.Position;
	outVS.Uv = IN.Uv;

	return outVS;
}


[maxvertexcount((NumSegs[0] + 1) * 2)] // Worst case: (max segments per blade + 1) rows of verts, 2 verts per row
void GrassGS(point VSInput input[1], inout TriangleStream<VSOutput> TriStream)
{
	const uint MaxSegments = NumSegs[0];
	const uint MaxVerts = (MaxSegments + 1) * 2;

	float2 grassDir = normalize(CalculateGrassDirection(input[0]));

	const float3 bladePositionInCellSpace = GetGrassCellOffset(input[0]);
	const float4 posInObjectSpace = float4(bladePositionInCellSpace, 1.0f);
	const float terrainHeight = bladePositionInCellSpace.y;
	const float3 bladePosInWorldSpace = mul(World, posInObjectSpace).xyz; // w = 1 so the world translation is applied

	float eyeDist = length(EyePosition.xz - bladePosInWorldSpace.xz) / MaxDistance;
	eyeDist = saturate(eyeDist); // Clamp to 0 - 1

	uint totalSegments = CalculateLOD(eyeDist);
	
	float scaledGrassHeight = GrassHeight * (1.0f - eyeDist);
	scaledGrassHeight = clamp(scaledGrassHeight, 0.1f, 1.0f);

	float segmentHeightStep = scaledGrassHeight / (float)totalSegments;
	float scaledGrassWidth = lerp(GrassWidth * 2.0f, GrassWidth, (1.0f - eyeDist));
	float grassHalfWidth = scaledGrassWidth * 0.5f;
	
	float vStep = 1.0f / (float)(totalSegments + 1);

	const uint totalVerts = (totalSegments + 1) * 2;
	VSOutput createdVerts[MaxVerts]; // Two verts per row, (MaxSegments + 1) rows

	for (uint i = 0; i < totalVerts; i +=2)
	{
		createdVerts[i].Position = posInObjectSpace;
		createdVerts[i].Position.xz -= grassDir * grassHalfWidth;
		createdVerts[i].Position.y = terrainHeight + ((float)i * segmentHeightStep);
		createdVerts[i].Uv = float2(0.0f, 1.0f - ((float)i * vStep));
		createdVerts[i].WorldPositionDepth = float4(mul(World, createdVerts[i].Position).xyz, -mul(WorldView, createdVerts[i].Position).z);
		createdVerts[i].Position = mul(WorldViewProjection, createdVerts[i].Position);

		createdVerts[i + 1].Position = posInObjectSpace;
		createdVerts[i + 1].Position.xz += grassDir * grassHalfWidth;
		createdVerts[i + 1].Position.y = terrainHeight + ((float)i * segmentHeightStep);
		createdVerts[i + 1].Uv = float2(1.0f, 1.0f - ((float)i * vStep)); // Same V as its partner vert at the same height
		createdVerts[i + 1].WorldPositionDepth = float4(mul(World, createdVerts[i + 1].Position).xyz, -mul(WorldView, createdVerts[i + 1].Position).z);
		createdVerts[i + 1].Position = mul(WorldViewProjection, createdVerts[i + 1].Position);
	}
	
	float3 segmentNormals[MaxSegments];
	
	for (uint j = 0; j < totalSegments; ++j)
	{
		int vIndex0 = j * 2;
		int vIndex1 = (j + 1) * 2;
		int vIndex2 = vIndex1 - 1;
		segmentNormals[j] = CalculateNormal(createdVerts[vIndex0].WorldPositionDepth.xyz, createdVerts[vIndex1].WorldPositionDepth.xyz, createdVerts[vIndex2].WorldPositionDepth.xyz);
	}

	for (uint k = 0; k < totalVerts; ++k)
	{
		// Use the normal of the segment above it.
		//uint normalIndex = clamp(k / 2, 0, totalSegments - 1);
		createdVerts[k].Normal = segmentNormals[0]; // HACK HACK HACK HACK
		TriStream.Append(createdVerts[k]);
	}

}

PSDeferredOutput DeferredRenderFP(VSOutput In, bool isFrontFace : SV_IsFrontFace)
{
	// Flip normals on backfaces.
	float3 normal = normalize(lerp(-In.Normal, In.Normal, float(isFrontFace)));

	// No gloss or specular.
	float specPower = 0;
	float gloss = 0;

	float4 colour = float4(0.0f, 1.0f, 0.0f, 1.0f); // Remove - this is just for debugging.
	//float4 colour = DiffuseMapTexture.SampleLevel(LinearClampSampler, In.Uv, 0);
	colour.w *= gloss;

	float3 viewSpaceNormal = normalize(mul(WorldView, float4(normal.xyz, 0)).xyz);

	PSDeferredOutput Out;

	// Albedo Colour.xyz, Emissiveness
	Out.Colour = float4(colour.xyz, 0.0f);
	// Normal.xyz, Gloss
	Out.NormalDepth = float4(viewSpaceNormal.xyz * 0.5f + 0.5f, colour.w);
	return Out;
}


In Topic: SV_IsFrontFace and geometry created from points.

29 October 2014 - 08:44 AM

 


I've been trying to debug an issue where SV_IsFrontFace doesn't seem to be returning anything but "true" ... any suggestions on ways around this problem?

 

Can you describe how you came to your conclusion? That is, what are the symptoms of the "problem"? Hopefully something more specific than "doesn't seem..." ;)

 

 

Apologies for the nondescript language in the original post. The fins are all lit on one side but completely black on the other; this is easiest to see when they're randomly rotated, since a black fin will occasionally sit next to a properly lit one. I'll try to attach a screenshot later tonight when I can get a few moments at my home PC, since I don't have my code synced at my current location.

 

@Unbird:

 

Using the reference device is a good idea as well; I can try that. I've been using RenderDoc to step through some of the improperly lit fins, and the normal never seemed to get flipped. The lerp is just me trying to avoid a branch in the pixel shader; glad to know it works. I'll try stepping through with the reference device and see what I find.

 

I'm using a deferred renderer that stores view-space normals, but I wouldn't think that would matter, as long as the normal is properly flipped before I move it into view space.
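That intuition holds: negating the normal commutes with the linear view transform, so flipping before or after moving into view space gives the same result. A minimal sketch (helper name is hypothetical), using the same WorldView matrix and mul convention as the shader above:

float4x4 WorldView;

float3 ViewSpaceNormal(float3 objectSpaceNormal, bool isFrontFace)
{
	// Flip on back faces first, then transform; w = 0 so no translation is applied,
	// and mul(WorldView, -n) == -mul(WorldView, n).
	float3 normal = isFrontFace ? objectSpaceNormal : -objectSpaceNormal;
	return normalize(mul(WorldView, float4(normal, 0.0f)).xyz);
}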

