
lukabratzi

Member Since 04 Feb 2011
Offline Last Active Mar 27 2014 12:55 PM

Topics I've Started

Matching Tessellation Factors Along Edges

10 January 2014 - 01:25 PM

Hey everyone,

 

For the past few weeks I've been working on setting up tessellation with displacement mapping in my deferred renderer.  Everything works and the displacement is correct, but the one thing that's been giving me terrible difficulty is calculating the tessellation factors for my triangles based on their distance from the camera.

 

In particular I'm seeing a lot of gaps between triangle edges whose tessellation factors are different.  So I know that, at its most basic level, the problem is that adjacent triangles that "share" an edge (share as in the triangles don't share vertices but just have edges at the same position) aren't being assigned the same tessellation factors.  The part that has me stumped is that, by my understanding, I should be handling this case correctly.

 

The two adjacent triangles are actually two separate sets of 3 vertices.  I know it's not efficient and that each quad could instead share 4 vertices with 6 indices, but the 4 vertices that make up a pair of adjacent edges should be identical in position, so I don't think that's the source of the problem.  In the interest of completeness, here's the code that creates the mesh (an indexed version is sketched after the block):

    int cellCount = 25;
    m_totalSize = 320.f;
    float cellSize = m_totalSize / cellCount;

    // six vertices per cell (two standalone triangles) -- no index buffer
    std::vector<Vertex> finalVertices(6 * cellCount * cellCount);
    float deltaX = cellSize, deltaZ = cellSize;
    float deltaU = 1.f / cellCount, deltaV = 1.f / cellCount;

    float minX = -m_totalSize * 0.5f, maxX = minX + deltaX;
    float minU = 0.f, maxU = deltaU;
    float minZ = -m_totalSize * 0.5f, maxZ = minZ + deltaZ;
    float minV = 0.f, maxV = deltaV;

    int startingIndex = 0;
    for(int row = 0; row < cellCount; ++row)
    {
        for(int col = 0; col < cellCount; ++col)
        {
            // first triangle of the cell
            finalVertices[startingIndex++].SetParameters( maxX, 0.f, maxZ, maxU, maxV);
            finalVertices[startingIndex++].SetParameters( minX, 0.f, minZ, minU, minV);
            finalVertices[startingIndex++].SetParameters( minX, 0.f, maxZ, minU, maxV);

            // second triangle of the cell -- repeats the (minX, minZ) and
            // (maxX, maxZ) corners, so the diagonal edge is duplicated
            finalVertices[startingIndex++].SetParameters( minX, 0.f, minZ, minU, minV);
            finalVertices[startingIndex++].SetParameters( maxX, 0.f, maxZ, maxU, maxV);
            finalVertices[startingIndex++].SetParameters( maxX, 0.f, minZ, maxU, minV);

            minX += deltaX;
            maxX += deltaX;
            minU += deltaU;
            maxU += deltaU;
        }

        minZ += deltaZ;
        maxZ += deltaZ;
        minV += deltaV;
        maxV += deltaV;

        minX = -m_totalSize * 0.5f;
        maxX = minX + deltaX;
        minU = 0.f;
        maxU = deltaU;
    }
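
As an aside, here's roughly what the indexed version mentioned above would look like: one shared vertex per grid corner and six indices per cell, reusing the same Vertex::SetParameters helper (a sketch of mine, not the project's code).  With shared vertices, two triangles can't even represent slightly different positions for a shared edge:

    // (cellCount + 1)^2 shared corner vertices instead of 6 per cell
    int vertsPerRow = cellCount + 1;
    std::vector<Vertex> vertices(vertsPerRow * vertsPerRow);
    std::vector<unsigned int> indices;
    indices.reserve(6 * cellCount * cellCount);

    for(int row = 0; row < vertsPerRow; ++row)
    {
        for(int col = 0; col < vertsPerRow; ++col)
        {
            float x = -m_totalSize * 0.5f + col * cellSize;
            float z = -m_totalSize * 0.5f + row * cellSize;
            vertices[row * vertsPerRow + col].SetParameters(x, 0.f, z, col * deltaU, row * deltaV);
        }
    }

    for(int row = 0; row < cellCount; ++row)
    {
        for(int col = 0; col < cellCount; ++col)
        {
            unsigned int i0 = row * vertsPerRow + col;  // (minX, minZ)
            unsigned int i1 = i0 + 1;                   // (maxX, minZ)
            unsigned int i2 = i0 + vertsPerRow;         // (minX, maxZ)
            unsigned int i3 = i2 + 1;                   // (maxX, maxZ)

            // same two triangles per cell as above, sharing the i0-i3 diagonal
            indices.push_back(i3); indices.push_back(i0); indices.push_back(i2);
            indices.push_back(i0); indices.push_back(i3); indices.push_back(i1);
        }
    }

Drawing then uses DrawIndexed with 6 * cellCount * cellCount indices.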

My hull constant function takes in a patch of 3 control points for each triangle and calculates the midpoint of each edge from those three points.  It then calculates the distance from the camera position to each midpoint and uses that distance to lerp between a minimum and a maximum tessellation factor (each of which has a corresponding distance range).  Since this is where the actual tessellation factors are calculated, I'm guessing there's a good chance it's the culprit.  Here is that portion of my shader:

float maxDistance = 150.0f;
float minDistance = 0.f;

HS_CONSTANT_FUNC_OUT HSConstFunc(InputPatch<VS_OUTPUT, 3> patch, uint PatchID : SV_PrimitiveID)
{
HS_CONSTANT_FUNC_OUT output = (HS_CONSTANT_FUNC_OUT)0;

float distanceRange = maxDistance - minDistance;
float minLOD = 1.0f;
float maxLOD = 32.0f;

float3 midpoint01 = patch[0].Position + 0.5 * (patch[1].Position - patch[0].Position);
float3 midpoint12 = patch[1].Position + 0.5 * (patch[2].Position - patch[1].Position);
float3 midpoint20 = patch[2].Position + 0.5 * (patch[0].Position - patch[2].Position);
float3 centerpoint = (patch[0].Position + patch[1].Position + patch[2].Position) / 3;

// calculate the distance from camera position to each edge midpoint and the center of the triangle
float e0Distance = distance(cameraPosition, midpoint01) - minDistance;
float e1Distance = distance(cameraPosition, midpoint12) - minDistance;
float e2Distance = distance(cameraPosition, midpoint20) - minDistance;
float eIDistance = distance(cameraPosition, centerpoint) - minDistance;

// each edge factor must depend only on that edge's own midpoint distance,
// otherwise two copies of the same edge can disagree
float tf0 = lerp(minLOD, maxLOD, 1.0f - saturate(e0Distance / distanceRange));
float tf1 = lerp(minLOD, maxLOD, 1.0f - saturate(e1Distance / distanceRange));
float tf2 = lerp(minLOD, maxLOD, 1.0f - saturate(e2Distance / distanceRange));
float tfInterior = lerp(minLOD, maxLOD, 1.0f - saturate(eIDistance / distanceRange));

output.edgeTesselation[0] = tf0;
output.edgeTesselation[1] = tf1;
output.edgeTesselation[2] = tf2;
output.insideTesselation  = tfInterior;

return output;
}
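
Side note on the matching argument: because each edge factor is a function only of that edge's midpoint (and the camera), two triangles that agree bit-for-bit on an edge's endpoints must compute the same factor for it.  Here's a tiny host-side C++ mirror of the shader math (my own sketch, not code from the project) that makes the symmetry explicit:

#include <cmath>

struct Float3 { float x, y, z; };

static float Distance(const Float3& a, const Float3& b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Mirrors the shader: the factor goes from maxLOD down to minLOD as the
// camera distance to the edge midpoint goes from minDistance to maxDistance.
static float EdgeTessFactor(const Float3& cam, const Float3& a, const Float3& b)
{
    const float minDist = 0.f, maxDist = 150.f, minLOD = 1.f, maxLOD = 32.f;
    Float3 mid = { 0.5f * (a.x + b.x), 0.5f * (a.y + b.y), 0.5f * (a.z + b.z) };
    float t = (Distance(cam, mid) - minDist) / (maxDist - minDist);
    t = std::fmin(std::fmax(t, 0.f), 1.f);          // saturate
    return minLOD + (1.f - t) * (maxLOD - minLOD);  // lerp(minLOD, maxLOD, 1 - t)
}

EdgeTessFactor(cam, a, b) == EdgeTessFactor(cam, b, a) for any camera, so a duplicated edge can only get mismatched factors if the endpoint positions aren't actually identical, or if an edge is fed another edge's distance.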

Assuming that the math is correct, that makes me think that the other problem area could be my hull shader's partitioning method.  Currently I'm using the integer method as it seems the simplest and easiest to debug, though I know that it will eventually lead to visual popping.  

[domain("tri")]
[partitioning("integer")]
[outputtopology("triangle_cw")]
[outputcontrolpoints(3)]
[patchconstantfunc("HSConstFunc")]
HS_OUTPUT HSMain(InputPatch<VS_OUTPUT, 3> patch, uint i : SV_OutputControlPointID)
{
HS_OUTPUT output = (HS_OUTPUT)0;

output.Position = patch[i].Position;
output.UVCoords = patch[i].UVCoords;
output.tc1 = patch[i].tc1;
output.tc2 = patch[i].tc2;
output.normal = patch[i].normal;

return output;
}

Could it be that the integer partitioning method is the cause of the gaps?  I know that if you specify a tessellation factor of 3.1 with the integer partitioning method, it gets rounded up to 4.***  But even in that case it confuses me that two edges between two sets of identical points could end up with differing tessellation factors.

 

Thanks to anyone who takes a look.  Let me know if I can provide any other code or explain anything.

 

*** I have a very basic and disgusting understanding of the various tessellation partitioning types.  If some kind, knowledgeable stranger happens to understand these and wants to throw out a bit of an explanation of the advantages/disadvantages of pow2 and fractional_even/odd, that would be amazing.


[Solved] Deferred Rendering - Point lights only half shaded?

09 December 2013 - 07:38 PM

Hey everyone,

 

Still working on my deferred renderer and I have everything working for my point lights, except it seems that only half of each point light's volume is being properly lit.  Hopefully the attached image shows the problem I'm facing.

 

Now the first thing I can think of is that maybe my normals are getting corrupted between when I draw my scene and when I do the lighting calculations.  The reason I suspect this is that I have to convert the normals from the [-1, 1] range to the [0, 1] range before I render them to my normal render target (in the "scene" shader), and then back into the [-1, 1] range when I perform my lighting calculations in the light shader.

 

I did some searching and it looks like I have the same issue as http://www.gamedev.net/topic/627587-solved-pointlight-renders-as-halfsphere/ but that thread was never updated with what the actual solution/problem was.

 

I've looked over my shader code and the render target format for my normals, both yesterday and this evening, and haven't been able to figure it out, so I was hoping someone could lend a second set of eyes to some of my code.

Normal Render Target Creation Parameters:

ID3D11Texture2D* text;
D3D11_TEXTURE2D_DESC desc;
desc.MipLevels = 1; 
desc.ArraySize = 1;
desc.SampleDesc.Count = 1;
desc.SampleDesc.Quality = 0;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
desc.CPUAccessFlags = 0;
desc.MiscFlags = 0;
// note: the shaders below pack normals into [0, 1], which only uses half of
// an SNORM format's [-1, 1] range; UNORM is the usual pairing for that encoding
desc.Format = DXGI_FORMAT_R8G8B8A8_SNORM;
desc.Width = width;
desc.Height = height;
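
For completeness, the snippet above fills in the description but doesn't show the creation calls; they would look roughly like this (a sketch only: the rtv/srv names are placeholders, and `device` is assumed to be the ID3D11Device):

ID3D11RenderTargetView* rtv = NULL;
ID3D11ShaderResourceView* srv = NULL;
device->CreateTexture2D(&desc, NULL, &text);        // the texture described above
device->CreateRenderTargetView(text, NULL, &rtv);   // written during the G-buffer pass
device->CreateShaderResourceView(text, NULL, &srv); // sampled during the light pass
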
"Scene" Shader:

// Vertex shader code that outputs the normal in world space
output.Normal = normalMap.SampleLevel(samplerStateLinear, output.UVCoords, 0.0f).xyz;
output.Normal = mul(output.Normal, (float3x3)World); // cast to float3x3: rotate the normal without picking up translation

// Pixel shader code that transforms the normals to [0, 1] domain and then stores in the render target
// transform the normal domain
output.Normal = input.Normal;
output.Normal = 0.5f * (normalize(output.Normal) + 1.0f);
Light Pixel Shader:

float4 PSMain(VS_OUTPUT input) : SV_Target
{
	input.ScreenPosition.xy /= input.ScreenPosition.w;
	float2 texCoord = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1);

        // read normal data from normal RT
	float4 normalData = normaltex.Sample(samplerStateLinear, texCoord);
        // map back to [-1, 1] range and normalize
	float3 normal = 2.0f * normalData.xyz - 1.0f;
	normal = normalize(normal);

	float depthVal = depthtex.Sample(samplerStateLinear, texCoord).r;

	float4 position;
	position.xy = input.ScreenPosition.xy;
	position.z = depthVal;
	position.w = 1.0f;

	position = mul(position, InvViewProjection);
	position /= position.w;

	float3 lightVector = lightPosition.xyz - position.xyz;
	float attenuation = saturate(1.0f - length(lightVector)/lightRadius.x);

        // calculate the dot product of surface normal and light vec,
        // clamped so geometry facing away from the light gets no negative light
	lightVector = normalize(lightVector);
	float dp = saturate(dot(lightVector, normal));

	float3 diffuseLighting = lightColor.xyz * dp;

	return float4(diffuseLighting, 1.0f) * attenuation * lightIntensity;
}

Thank you for any help/thoughts you might have.


DX11 - Disabling depth testing?

08 December 2013 - 05:55 PM

Hey everyone,

 

I'm working on changing my DX11 forward renderer to be a deferred renderer and have everything implemented but am having difficulty figuring out how to disable the depth testing when it comes time to render my lights (which I'm rendering as billboarded quads that always face the camera).

 

Here's how I create my depth stencil view:

virtual DepthStencilView* CreateDepthStencilView( int width, int height )
{
	DepthStencilViewD3D* dsv = new DepthStencilViewD3D( );
			
	D3D11_TEXTURE2D_DESC desc =
	{
		width, height,
		1, 1,
		DXGI_FORMAT_D32_FLOAT,
		{ 1, 0 },
		D3D11_USAGE_DEFAULT,
		D3D11_BIND_DEPTH_STENCIL,
		0, 0
	};
	ID3D11Texture2D *tex = NULL;
	HV(device->CreateTexture2D(&desc, NULL, &tex), "Creating texture for depth/stencil buffers" );
	
	HV(device->CreateDepthStencilView(tex, NULL, &dsv->dsv), "Creating depth stencil view." );

	CreateDepthStencilState(true);

	return dsv;
}
virtual void CreateDepthStencilState(bool depthTestEnabled)
{
	D3D11_DEPTH_STENCIL_DESC dsDesc;

	dsDesc.DepthEnable = depthTestEnabled;
	dsDesc.DepthWriteMask = depthTestEnabled ? D3D11_DEPTH_WRITE_MASK_ALL : D3D11_DEPTH_WRITE_MASK_ZERO;
	dsDesc.DepthFunc = depthTestEnabled ? D3D11_COMPARISON_LESS : D3D11_COMPARISON_ALWAYS;

	// Stencil test parameters
	dsDesc.StencilEnable = false;
	dsDesc.StencilReadMask = 0xFF;
	dsDesc.StencilWriteMask = 0xFF;

	// Stencil operations if pixel is front-facing
	dsDesc.FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
	dsDesc.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_INCR;
	dsDesc.FrontFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
	dsDesc.FrontFace.StencilFunc = D3D11_COMPARISON_ALWAYS;

	// Stencil operations if pixel is back-facing
	dsDesc.BackFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
	dsDesc.BackFace.StencilDepthFailOp = D3D11_STENCIL_OP_DECR;
	dsDesc.BackFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
	dsDesc.BackFace.StencilFunc = D3D11_COMPARISON_ALWAYS;

	// Create depth stencil state.
	// Note: pDSState is only created here -- it is never stored, released,
	// or bound to the pipeline (see the sketch below).
	ID3D11DepthStencilState * pDSState;
	device->CreateDepthStencilState(&dsDesc, &pDSState);
}

And before I render the lights I call:

renderer.CreateDepthStencilState(false);

With the expected result being that my light quads are drawn without any depth testing.  Unfortunately that's not the case: as far as I can tell, my changes are having no effect and depth testing is still being done.  Does anyone with experience here see what I'm doing wrong?
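
For reference, creating a depth-stencil state has no effect on its own; it has to be bound to the output-merger stage, and the code above never does that.  A minimal sketch of that step, assuming CreateDepthStencilState is changed to return the state it creates and that `context` is the ID3D11DeviceContext (both are assumptions, not code from the original):

// Hypothetical: CreateDepthStencilState now returns the state it creates.
ID3D11DepthStencilState* noDepthState = renderer.CreateDepthStencilState(false);

// Binding the state is what actually changes pipeline behavior;
// the second argument is the stencil reference value (unused here).
context->OMSetDepthStencilState(noDepthState, 0);

// ... draw the light quads ...

// Restore the default state (depth testing on) for subsequent passes.
context->OMSetDepthStencilState(NULL, 0);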

 

Thanks

 


AI Visibility test in 2D

21 August 2013 - 05:09 PM

I was a dingus and posted this in the gameplay section because I thought it would be more relevant to that forum, but then saw all the threads about visibility in here and wanted to cross-post it (the full text is in the original thread below).  Sorry if this is a foul; if it's a huge issue, feel free to delete this and close the thread in the gameplay forum:

http://www.gamedev.net/topic/646914-help-with-visibility-test-for-ai-in-2d/

 



Help with visibility test for AI in 2D

21 August 2013 - 04:38 PM

Hey everyone,

 

I'm working with some friends on a top-down, stealth-based, 2D tiled game.  One of the issues we're facing is how to have the AI detect whether they can see the player.

 

The reason this is difficult is that our levels have the concept of layers (e.g. climb a ladder in the scene to get from the lower layer (1) to the upper layer (2)).  This factors into gameplay because we want the AI to be able to see players that are beneath them, unless the players are against the wall, or close enough to it, that the AI shouldn't be able to see them.  See the attached image for an example: red is the AI, green is the player, and yellow is the AI's detectable range.  Note how the ground cuts off the AI's view frustum.

 

The issue is that half the team wants to push for using a 3D collision system in our 2D game, meaning everything in the game would have a 3D collision shape, with a height that is fixed except when you're on different layers.

 

This seems wasteful to me, and I'm trying to think of a way the system can be implemented using just 2D AABBs and basic trigonometry (a sketch of the idea follows below).  It's safe to assume that we know the height difference between the AI and the player, the distance of the AI from the edge, and the distance of the player from the bottom of the cliff.  The map is tiled and it would be fairly trivial to map the player/AI to the nearest tile (the sacrifice of accuracy isn't an issue), so it's possible there's some use in mapping the AI and player to tiles and figuring out visible tiles from that info.
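
To illustrate the trigonometry, here's a minimal C++ sketch.  The names are mine, and it assumes the known height difference can be split into the AI's eye height above its own floor (eyeHeight) and the layer height difference (floorHeight); that split is an assumption, not something from the game.  By similar triangles, the sight line that grazes the cliff lip drops eyeHeight over the AI's distance to the edge, so it hides a strip of the lower floor of depth distAIFromEdge * floorHeight / eyeHeight:

#include <cmath>

// All quantities are 2D-friendly scalars: heights plus horizontal distances.
struct VisibilityQuery
{
    float eyeHeight;           // AI eye height above its own floor (assumed split)
    float floorHeight;         // height difference between the two layers (assumed split)
    float distAIFromEdge;      // horizontal distance from the AI to the cliff edge
    float distPlayerFromCliff; // horizontal distance from the player to the cliff base
    float detectRange;         // maximum distance at which the AI can detect anything
};

// True if a lower-layer player is outside the blind strip cast by the cliff
// edge and within detection range.
bool CanSeePlayerBelow(const VisibilityQuery& q)
{
    // Similar triangles: past the cliff base, the grazing sight line needs a
    // further run of distAIFromEdge * floorHeight / eyeHeight to drop the
    // remaining floorHeight down to the lower floor.
    float hiddenDepth = q.distAIFromEdge * q.floorHeight / q.eyeHeight;
    if (q.distPlayerFromCliff <= hiddenDepth)
        return false; // player is tucked against the wall, inside the blind strip

    // Straight-line eye-to-player distance for the range check.
    float dx = q.distAIFromEdge + q.distPlayerFromCliff;
    float dy = q.eyeHeight + q.floorHeight;
    return std::sqrt(dx * dx + dy * dy) <= q.detectRange;
}

Since everything reduces to a per-tile distance threshold along the AI's facing, the hidden depth could even be precomputed per tile row, which fits the map-to-nearest-tile idea above.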

 

This may be the one problem that we can't solve in 2D and that requires us to go with a 3D system for checking, but I'd really like to avoid that if at all possible.  

 

If I can clear anything up or give more detail please feel free to let me know.

 

 

