
Husbjörn

Member Since 27 Jan 2014

#5294924 SampleLevel not honouring integer texel offset

Posted by Husbjörn on 04 June 2016 - 04:59 AM

This is a curious one.

I recently had to switch out some SampleCmpLevelZero calls on a Texture2DArray for use with shader model 4. The go-to solution seemed to be SampleLevel, called as such:

Tex2DArray.SampleLevel(LegacySampler, coord, 0, texelOffset);

The MSDN article on the function states that the offset argument should indeed be supported: https://msdn.microsoft.com/en-us/library/windows/desktop/bb509699(v=vs.85).aspx

Once compiled, however, it has absolutely no effect; indeed, it doesn't even raise a compilation error if the offset is set outside the valid range of (-8 .. +7).

Furthermore, the assembly instruction associated with SampleLevel, sample_l, doesn't seem to have any offset argument at all: https://msdn.microsoft.com/en-us/library/windows/desktop/hh447229(v=vs.85).aspx

It is of course possible that the offset gets merged into the input texcoord through separate instructions prior to the sample_l call, but I can't find any indication of this in my disassembled shader (granted, I'm far from experienced at reading disassembly).
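For anyone who wants to reproduce the check, the assembly listing can be dumped with fxc /T ps_4_0 /E main /Fc shader.asm shader.hlsl (the target profile, entry point and file names here are placeholders).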

 

So I guess my question is whether this function indeed does not support the integer texel offset argument, with the compiler (version 43) essentially just silently discarding it, or whether something is going wrong on my end. Perhaps this is a known bug in my particular HLSL compiler DLL?

 

I know I can go the extra mile and compute the corresponding texcoord offsets myself, and I suppose I will, but I still found this intriguing enough to post about, since I haven't been able to find any information regarding it.
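For reference, a minimal sketch of that workaround, assuming the dimensions of one array slice are known (texSize is a hypothetical parameter; Texture2DArray::GetDimensions could supply it):

// Emulate the integer texel offset by nudging the texcoord instead
float4 SampleLevelZeroOffset(Texture2DArray tex, SamplerState samp,
                             float3 coord, int2 texelOffset, float2 texSize) {
	float2 uvOffset = (float2)texelOffset / texSize;	// one texel = 1 / texSize in UV space
	return tex.SampleLevel(samp, float3(coord.xy + uvOffset, coord.z), 0);
}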




#5294282 False negative result in bounding frustum test

Posted by Husbjörn on 31 May 2016 - 03:01 AM

Maybe you can share your algorithm for frustum culling? Maybe I am missing something here (some space translations or something else)?

Certainly, though it is quite old and could probably do with a revision.

I'm using axis aligned bounding boxes by the way, storing the min and max extents along the X/Y/Z axes in object space:

void BoundingBox::Transform(const XMMATRIX& mat, XMFLOAT3& vecMin, XMFLOAT3& vecMax) const {
	XMFLOAT3 coord[8];
	// Front vertices
	XMStoreFloat3(&coord[0], XMVector3TransformCoord(XMVectorSet(vecBaseMin.x, vecBaseMin.y, vecBaseMin.z, 1.0f), mat));
	XMStoreFloat3(&coord[1], XMVector3TransformCoord(XMVectorSet(vecBaseMin.x, vecBaseMax.y, vecBaseMin.z, 1.0f), mat));
	XMStoreFloat3(&coord[2], XMVector3TransformCoord(XMVectorSet(vecBaseMax.x, vecBaseMax.y, vecBaseMin.z, 1.0f), mat));
	XMStoreFloat3(&coord[3], XMVector3TransformCoord(XMVectorSet(vecBaseMax.x, vecBaseMin.y, vecBaseMin.z, 1.0f), mat));
	// Back vertices
	XMStoreFloat3(&coord[4], XMVector3TransformCoord(XMVectorSet(vecBaseMin.x, vecBaseMin.y, vecBaseMax.z, 1.0f), mat));
	XMStoreFloat3(&coord[5], XMVector3TransformCoord(XMVectorSet(vecBaseMax.x, vecBaseMin.y, vecBaseMax.z, 1.0f), mat));
	XMStoreFloat3(&coord[6], XMVector3TransformCoord(XMVectorSet(vecBaseMax.x, vecBaseMax.y, vecBaseMax.z, 1.0f), mat));
	XMStoreFloat3(&coord[7], XMVector3TransformCoord(XMVectorSet(vecBaseMin.x, vecBaseMax.y, vecBaseMax.z, 1.0f), mat));
	// Reduce the transformed corners to the new axis-aligned min / max extents
	vecMin = vecMax = coord[0];
	for(unsigned int n = 1; n < 8; n++) {
		vecMin.x = std::min(vecMin.x, coord[n].x);	vecMax.x = std::max(vecMax.x, coord[n].x);
		vecMin.y = std::min(vecMin.y, coord[n].y);	vecMax.y = std::max(vecMax.y, coord[n].y);
		vecMin.z = std::min(vecMin.z, coord[n].z);	vecMax.z = std::max(vecMax.z, coord[n].z);
	}
}
void Camera::ReconstructFrustumPlanes() {
	// Left Frustum Plane
        // Add first column of the matrix to the fourth column
	frustumPlane[0].a = viewProj._14 + viewProj._11; 
	frustumPlane[0].b = viewProj._24 + viewProj._21;
	frustumPlane[0].c = viewProj._34 + viewProj._31;
	frustumPlane[0].d = viewProj._44 + viewProj._41;

	// Right frustum Plane
        // Subtract first column of matrix from the fourth column
	frustumPlane[1].a = viewProj._14 - viewProj._11; 
	frustumPlane[1].b = viewProj._24 - viewProj._21;
	frustumPlane[1].c = viewProj._34 - viewProj._31;
	frustumPlane[1].d = viewProj._44 - viewProj._41;

	// Top frustum Plane
        // Subtract second column of matrix from the fourth column
	frustumPlane[2].a = viewProj._14 - viewProj._12; 
	frustumPlane[2].b = viewProj._24 - viewProj._22;
	frustumPlane[2].c = viewProj._34 - viewProj._32;
	frustumPlane[2].d = viewProj._44 - viewProj._42;

	// Bottom frustum Plane
        // Add second column of the matrix to the fourth column
	frustumPlane[3].a = viewProj._14 + viewProj._12;
	frustumPlane[3].b = viewProj._24 + viewProj._22;
	frustumPlane[3].c = viewProj._34 + viewProj._32;
	frustumPlane[3].d = viewProj._44 + viewProj._42;

	// Near frustum Plane
        // We could add the third column to the fourth column to get the near plane,
        // but we don't have to do this because the third column IS the near plane
	frustumPlane[4].a = viewProj._13;
	frustumPlane[4].b = viewProj._23;
	frustumPlane[4].c = viewProj._33;
	frustumPlane[4].d = viewProj._43;

	// Far frustum Plane
        // Subtract third column of matrix from the fourth column
	frustumPlane[5].a = viewProj._14 - viewProj._13; 
	frustumPlane[5].b = viewProj._24 - viewProj._23;
	frustumPlane[5].c = viewProj._34 - viewProj._33;
	frustumPlane[5].d = viewProj._44 - viewProj._43;


	// Normalize planes
	for(unsigned int p = 0; p < 6; p++) {
		float length = sqrt(
                    (frustumPlane[p].a * frustumPlane[p].a) + 
                    (frustumPlane[p].b * frustumPlane[p].b) + 
                    (frustumPlane[p].c * frustumPlane[p].c)
                );
		frustumPlane[p].a /= length;
		frustumPlane[p].b /= length;
		frustumPlane[p].c /= length;
		frustumPlane[p].d /= length;
	}
}
bool Camera::FrustumCullBoundingBox(const XMFLOAT3& vecMin, const XMFLOAT3& vecMax) {
	for(unsigned int p = 0; p < 6; p++) {
		// Load the plane into an XMVECTOR for use with XMPlaneDotCoord
		XMVECTOR plane = XMVectorSet(frustumPlane[p].a, frustumPlane[p].b, frustumPlane[p].c, frustumPlane[p].d);
		// Move on to the next plane as soon as any corner lies in front of this one
		if(XMVectorGetX(XMPlaneDotCoord(plane, XMVectorSet(vecMin.x, vecMin.y, vecMin.z, 1))) >= 0.0f)
			continue;
		if(XMVectorGetX(XMPlaneDotCoord(plane, XMVectorSet(vecMax.x, vecMin.y, vecMin.z, 1))) >= 0.0f)
			continue;
		if(XMVectorGetX(XMPlaneDotCoord(plane, XMVectorSet(vecMin.x, vecMax.y, vecMin.z, 1))) >= 0.0f)
			continue;
		if(XMVectorGetX(XMPlaneDotCoord(plane, XMVectorSet(vecMin.x, vecMin.y, vecMax.z, 1))) >= 0.0f)
			continue;
		if(XMVectorGetX(XMPlaneDotCoord(plane, XMVectorSet(vecMax.x, vecMax.y, vecMin.z, 1))) >= 0.0f)
			continue;
		if(XMVectorGetX(XMPlaneDotCoord(plane, XMVectorSet(vecMax.x, vecMin.y, vecMax.z, 1))) >= 0.0f)
			continue;
		if(XMVectorGetX(XMPlaneDotCoord(plane, XMVectorSet(vecMin.x, vecMax.y, vecMax.z, 1))) >= 0.0f)
			continue;
		if(XMVectorGetX(XMPlaneDotCoord(plane, XMVectorSet(vecMax.x, vecMax.y, vecMax.z, 1))) >= 0.0f)
			continue;
		return false;	// All eight corners lie behind this plane -> outside the frustum
	}
	return true;		// Not rejected by any plane -> at least partially visible
}
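For reference, the two functions above fit together roughly like this (box, worldMatrix and camera are assumed names):

XMFLOAT3 vecMin, vecMax;
box.Transform(worldMatrix, vecMin, vecMax);		// world-space AABB of the transformed box
if(camera.FrustumCullBoundingBox(vecMin, vecMax)) {
	// At least partially inside the frustum -> draw the object
}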



#5293359 DX9 Doubling Memory Usage in x64

Posted by Husbjörn on 25 May 2016 - 08:33 AM

Adding to what vstrakh suggested, your VERTEX structure could be responsible if it contains a datatype that widens on x64, such as a pointer or size_t going from 32 to 64 bits (a plain float stays 32-bit on both), or if its alignment padding grows. If you have big vertex buffers, that could potentially double their total size.
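A quick sanity check is to print sizeof(VERTEX) from both builds and compare; the struct below is purely hypothetical:

#include <cstdio>

struct VERTEX {			// hypothetical layout: any pointer or size_t member would
	float pos[3];		// grow from 4 to 8 bytes on x64, and alignment padding
	float normal[3];	// can grow along with it
	float uv[2];
};

int main() {
	std::printf("sizeof(VERTEX) = %zu\n", sizeof(VERTEX));	// compare the x86 and x64 outputs
}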




#5290530 Is SetPrivateData only supposed to be called once?

Posted by Husbjörn on 07 May 2016 - 04:38 AM

OK, rubber ducking in all its glory: I found out that if you call SetPrivateData(<guid>, 0, null) prior to "overwriting" it, this apparently releases the previous data and circumvents the warning.

I still believe this must be what it does behind the scenes anyway, unless the new data happens to fit in the previous allocation...? So the warning still seems a bit superfluous.
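In code, the warning-free sequence boils down to this (a sketch; pResource, MyDataGuid and the new payload are assumed names):

pResource->SetPrivateData(MyDataGuid, 0, nullptr);		// release the previously stored block
pResource->SetPrivateData(MyDataGuid, newSize, pNewData);	// store the new data without triggering the warning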




#5272735 Rendering Multiple cubes

Posted by Husbjörn on 26 January 2016 - 11:02 AM

If the smaller cube is depth-tested against the values already written to the depth-stencil buffer by the larger one, its pixels will be found to be covered and discarded.

Try setting the DepthEnable member of your depth-stencil description to false to disable this and see if the cube then draws as intended, since it appears you specify the desired drawing order yourself.
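A minimal sketch of that, assuming a standard D3D11 setup (pDevice and pContext are assumed names):

CD3D11_DEPTH_STENCIL_DESC dsDesc(D3D11_DEFAULT);	// start from the API defaults
dsDesc.DepthEnable = FALSE;				// submission order alone now decides visibility

ID3D11DepthStencilState* pNoDepthState = nullptr;
pDevice->CreateDepthStencilState(&dsDesc, &pNoDepthState);
pContext->OMSetDepthStencilState(pNoDepthState, 0);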




#5271390 Shimmering / offset issues with shadow mapping

Posted by Husbjörn on 16 January 2016 - 05:47 AM

Yes, they should be; I only used those to correspond as closely as possible to the original code, which I assumed would work.

Changing to the XM functions actually makes the discrepancies slightly smaller when moving the camera forward / backward (no noticeable change when moving left-to-right), but it's still a far cry from acceptable, and the rotation shimmering is just as severe with either method, so something else must be at play.

Thanks for the suggestion though; I guess the XM transform versions are somewhat more accurate.

 

Update: I have noticed that the shimmering only occurs while the rendering camera's orientation / translation is changing. Moving the light source causes some edge shimmering, but much less than camera movement does.

The curious thing about these camera movements is that as soon as the camera stops moving / rotating, the shadows stabilize on the very next frame. In other words, if the shadows project a certain way in frame 1, the camera moves over frames 2 - 10 and is stationary again at frame 11, then the shadows project to the same texels in frames 1 and 11 while flickering over frames 2 - 10. I'm starting to wonder if this could somehow be related to double buffering, in that the shadow map is rendered with the "latest" view-projection matrix while the shader lags one frame behind, using the matrix from the previous frame?
But that doesn't seem to make much sense as automatic behaviour; shouldn't the shader use the latest cbuffer values, as well as the latest depth map resource, rather than some old buffered copies? Or could something else cause this kind of behaviour?




#5271339 Communication between shaders

Posted by Husbjörn on 15 January 2016 - 03:06 PM

You can (and probably have to) tell your geometry shader the vertex layout struct of your output stream; going by your example it would look like so:

void GS(triangle VS_OUT input[3], inout TriangleStream<GS_OUT> Tristream);

As for getting the system value, since this is an input to the shader stage itself rather than coming from the input vertices, I believe it should be declared as its own input to the geometry shader function:

void GS(triangle VS_OUT input[3], uint gsId : SV_GSInstanceId, inout TriangleStream<GS_OUT> Tristream);

This works for other shader types and system values as well:

hs_out HS(InputPatch<vs_out, 6> p, uint i : SV_OutputControlPointID, uint patchId : SV_PrimitiveID);

For a shader type that takes its input directly from the previous stage's output, for example a pixel shader fed by a vertex shader, you should be able to declare the system-value semantic as part of the vertex shader's output struct and simply never write it from the vertex shader. The above approach of adding it as a separate input argument should also work, however, and probably looks cleaner.
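As a minimal sketch of the vertex-to-pixel case (struct and function names are just for illustration):

struct PS_IN {
	float4 pos : SV_Position;
};

float4 PS(PS_IN input, uint primId : SV_PrimitiveID) : SV_Target {
	// primId is filled in by the pipeline, not by the vertex shader
	return float4(frac(primId * 0.01f).xxx, 1.0f);
}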




#5271276 Shimmering / offset issues with shadow mapping

Posted by Husbjörn on 15 January 2016 - 08:08 AM

I am in the somewhat early stages of adding shadow mapping for directional light sources to my engine and after having some issues with this I decided to follow the step-by-step implementation offered by Alex Tardif here: http://alextardif.com/ShadowMapping.html

Even so, I'm getting roughly the same shimmering edges, as well as what appear to be frame-to-frame offsets of the whole depth map when moving the viewing camera.

Here's a short video to show the issues in action: https://youtu.be/_XBx6UmUZdI

 

What strikes me as particularly odd is that there are significantly fewer artifacts when moving the camera left-to-right than when moving forward / backward. More understandably, the most severe artifacts occur when changing the orientation of the camera.

 

Here's my relevant code, in case anybody can spot any obvious issues or things I've missed that would be considered universally obvious.

The code intentionally matches the one presented in Tardif's article as closely as possible, even if this makes it a bit messier with the conversions between XMFLOATX and XMVECTOR etc. I have also left out the second part of his article (or rather, not gotten to it yet), which deals with downsampling and blurring, but I cannot see how that would have any relevance beyond making the shadows appear smoother.

I am also only using a single cascade split for now, which arbitrarily spans the 1..400 depth range of the rendering camera (this roughly corresponds to the size of my testing scene):

// Create a new projection matrix for the rendering camera (which is assumed to use perspective projection here) that 
// only stretches over the current cascade
XMMATRIX matCascadeProjection = XMMatrixPerspectiveFovLH(pRenderingCamera->GetFOV(), pRenderingCamera->GetAspect(), 1.0f, 300.0f);
XMVECTOR frustumCorners[8] = {
	XMVectorSet(-1.0f, 1.0f,  0.0f, 0.0f), 
	XMVectorSet(1.0f,  1.0f,  0.0f, 0.0f), 
	XMVectorSet(1.0f,  -1.0f, 0.0f, 0.0f), 
	XMVectorSet(-1.0f, -1.0f, 0.0f, 0.0f), 
	XMVectorSet(-1.0f, 1.0f,  1.0f, 0.0f), 
	XMVectorSet(1.0f,  1.0f,  1.0f, 0.0f), 
	XMVectorSet(1.0f,  -1.0f, 1.0f, 0.0f), 
	XMVectorSet(-1.0f, -1.0f, 1.0f, 0.0f)
};
// NOTE: The transpose part here seems rather useless; the tutorial mentions it is for being sent to the GPU, but this
//       particular matrix never is. Nevertheless, I'll do it like this to achieve the highest possible correspondence to the
//       article's code snippets. Furthermore, not using a transposed matrix (and obviously not using the TransformTransposed 
//       function) seems to give identical results. Try to remove the transpose part once everything seems to work as intended.
XMMATRIX matCamViewProj		= XMMatrixTranspose(pRenderingCamera->GetViewMatrix() * matCascadeProjection);
XMMATRIX matInvCamViewProj	= XMMatrixInverse(nullptr, matCamViewProj);
// Unproject frustum corners into world space
for(size_t n = 0; n < 8; n++) {
	XMFLOAT3 tmp;
	XMStoreFloat3(&tmp, frustumCorners[n]);
	tmp = util::TransformTransposedFloat3(tmp, matInvCamViewProj);
	frustumCorners[n] = XMLoadFloat3(&tmp);
}

// Find frustum center
XMFLOAT3 frustumCenter(0.0f, 0.0f, 0.0f);
{
	XMVECTOR v = XMLoadFloat3(&frustumCenter);
	for(size_t n = 0; n < 8; n++)
		v += frustumCorners[n];
	v *= (1.0f / 8.0f);
	XMStoreFloat3(&frustumCenter, v);
}

// Retrieve normalized light direction
XMVECTOR lightDirection = XMVector3Normalize(light->GetTransform().GetForwardVector());

// Determine the radius of the to-be orthographic projection as the distance between the farthest frustum corner points divided by two
float radius = XMVectorGetX(XMVector3Length((frustumCorners[0] - frustumCorners[6]))) / 2.0f;	// The length is copied into each element

// Figure out how many texels per world-unit will fit if we project a cube with the given "radius" (side length / 2)
float texelsPerUnit = (float)shadowMapWidth / (radius * 2);	// NOTE: The shadow map *must* be square!

// Build a scaling matrix to scale evenly in all directions to the number of texels per unit
XMMATRIX matScaling = XMMatrixScaling(texelsPerUnit, texelsPerUnit, texelsPerUnit);

// Create look-at vector and matrix by accounting for scaling (and later snapping) to the number of texels per unit
const XMVECTOR UpVector		= XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f);
const XMVECTOR ZeroVector	= XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f);
XMVECTOR vecBaseLookat		= XMVectorSet(-XMVectorGetX(lightDirection), -XMVectorGetY(lightDirection), -XMVectorGetZ(lightDirection), 0.0f);
XMMATRIX matLookat		= XMMatrixMultiply(XMMatrixLookAtLH(ZeroVector, vecBaseLookat, UpVector), matScaling);
XMMATRIX matInvLookat		= XMMatrixInverse(nullptr, matLookat);	// Take note that this will also undo the scaling effect imposed on «matLookat»!

// Now the above can be used to move the frustum center in texel-sized increments (when transformed by matLookat, and the 
// result can then be brought back into world-space by the inverse lookat matrix).
frustumCenter	= util::TransformFloat3(frustumCenter, matLookat);
frustumCenter.x	= (float)floor(frustumCenter.x);	// Clamp to texel increment (by rounding down)
frustumCenter.y = (float)floor(frustumCenter.y);	// Clamp to texel increment (by rounding down)
frustumCenter	= util::TransformFloat3(frustumCenter, matInvLookat);

// Calculate eye position by backtracking in the opposite light direction, ie. towards the light, by the cascade radius * 2
XMVECTOR eye = XMLoadFloat3(&frustumCenter) - (lightDirection * radius * 2.0f);

// Build the final light view matrix
XMMATRIX matLightView = XMMatrixLookAtLH(eye, XMLoadFloat3(&frustumCenter), UpVector);

// Build the light's projection matrix. This is intended to keep a consistent size and should therefore minimize
// shimmering edges due to per-frame matrix recalculations.
// The near- and far value multiplications are arbitrary and meant to catch shadow casters outside of the frustum, 
// whose shadows may extend into it. These should probably be better tweaked later on, but lets see if it at all works first.
const float zMod = 6.0f;
XMMATRIX matLightProj = XMMatrixOrthographicOffCenterLH(-radius, radius, -radius, radius, -radius * zMod, radius * zMod);


// Associate the current matrices with the light source (the shader side will need to know the "light matrix" to properly sample the shadow map(s))
light->SetCascadeViewProjectionMatrix(split, matLightView * matLightProj);

// Bind the corresponding shadow map for rendering by the special, global shadow mapping camera
gGlob.pShadowCamera->SetDepthStencilBuffer(shadowMap, (UINT)light->GetShadowMapIndex() + split);
// Set the shadow camera's matrices to those of the light source
gGlob.pShadowCamera->SetViewMatrix(matLightView);
gGlob.pShadowCamera->SetProjectionMatrix(matLightProj);

// Render the depth (shadow) map (note that this automatically clears the associated depth-stencil view)
gGlob.pShadowCamera->RenderDepthOnly(true);

The implementations of util::TransformFloat3 and util::TransformTransposedFloat3 are direct copies of the implementations given by Tardif here (with the change that they use the XMFLOAT3 struct instead of Vector3): http://alextardif.com/code/transformvector3.txt

For completeness, here's my code for those as well:

inline XMFLOAT3 TransformFloat3(const XMFLOAT3& point, const XMMATRIX& matrix) {
	XMFLOAT3 result;
	XMFLOAT4 temp(point.x, point.y, point.z, 1);	// Need a 4-part vector in order to multiply by a 4x4 matrix
	XMFLOAT4 temp2;

	temp2.x = temp.x * matrix._11 + temp.y * matrix._21 + temp.z * matrix._31 + temp.w * matrix._41;
	temp2.y = temp.x * matrix._12 + temp.y * matrix._22 + temp.z * matrix._32 + temp.w * matrix._42;
	temp2.z = temp.x * matrix._13 + temp.y * matrix._23 + temp.z * matrix._33 + temp.w * matrix._43;
	temp2.w = temp.x * matrix._14 + temp.y * matrix._24 + temp.z * matrix._34 + temp.w * matrix._44;

	result.x = temp2.x / temp2.w;			// View projection matrices make use of the W component
	result.y = temp2.y / temp2.w;
	result.z = temp2.z / temp2.w;

	return result;
}

inline XMFLOAT3 TransformTransposedFloat3(const XMFLOAT3& point, const XMMATRIX& matrix) {
	XMFLOAT3 result;
	XMFLOAT4 temp(point.x, point.y, point.z, 1);	// Need a 4-part vector in order to multiply by a 4x4 matrix
	XMFLOAT4 temp2;

	temp2.x = temp.x * matrix._11 + temp.y * matrix._12 + temp.z * matrix._13 + temp.w * matrix._14;
	temp2.y = temp.x * matrix._21 + temp.y * matrix._22 + temp.z * matrix._23 + temp.w * matrix._24;
	temp2.z = temp.x * matrix._31 + temp.y * matrix._32 + temp.z * matrix._33 + temp.w * matrix._34;
	temp2.w = temp.x * matrix._41 + temp.y * matrix._42 + temp.z * matrix._43 + temp.w * matrix._44;

	result.x = temp2.x / temp2.w;			// View projection matrices make use of the W component
	result.y = temp2.y / temp2.w;
	result.z = temp2.z / temp2.w;

	return result;
}

Any light-shedding on what may be at fault here would be most welcome.

I can also provide my HLSL code if requested, but I don't see how that can really be at fault, since it doesn't do any offsetting or such, and the shadows are, after all, projected where they should be, were it not for the jumping around between frames. So I believe the fault lies with the depth (shadow) map rendering as outlined above.

The depth map is a slice in a Texture2DArray, 2048x2048 pixels in size, using the DXGI_FORMAT_D32_FLOAT format. Naturally it has only a single mip slice.




#5256930 How to create a depth-stencil-only pixel shader?

Posted by Husbjörn on 12 October 2015 - 04:22 PM

Right, it turns out I had accidentally set a normal rendering shader on a billboard instead of this depth-only one, so that was indeed trying to write to SV_Target. How embarrassing. On the upside, no more warnings now that I have fixed that oversight!

Thanks a lot for pointing it out ajmiles and Matias Goldberg :)




#5227275 Inconsistent falloff for light sources

Posted by Husbjörn on 05 May 2015 - 04:30 AM

I've been trying to implement lighting in my project, essentially following the method that is described here: http://www.3dgep.com/texturing-lighting-directx-11

 

While this does seemingly work, I've noticed some discrepancies where certain meshes appear to receive more light than they should given their distance from the light source(s).

Here's an image showcasing this issue with a spot light:

[Image: SpotLightProblem.png]

The light source is situated a small distance above the floor and is aimed straight along the Z axis. As can be seen, the light falls off so that it has essentially no influence on the floor by the time it reaches the wall, yet the wall itself receives a lot more light than I feel it should at this distance.

 

Still, one might think this could be related to the imaginary light rays hitting the wall straight on, as opposed to grazing the floor, and therefore lighting it up more. However, the same problem is evident with omnidirectional light sources, which should have the same effect on every surface they hit (including the floor). As the following image shows, however, this is not the case.

By the way, the dark circle is a semi-transparent sphere indicating the position of the light source, not a shadow of the sphere above it; no shadow mapping has been implemented yet.

 

[Image: PointLightProblem.png]

 

 

I am wondering what might be the cause of this.

All walls / floor / ceiling are just instances of the same quad mesh, repositioned and rotated using world matrices, so I don't believe it relates to one (read: all) of them having incorrect normal data or such.

Also, when rendering only the calculated attenuation (see the link above), it does indeed fall off as it "should" from the light source, and opposite walls do not get a greater influence, so the problem shouldn't lie with the attenuation factor either.

 

From what I can gather, the problem would in fact seem to relate to this function, since returning only C from it also yields proper falloff:

/**
 * Calculates the diffuse contribution of a light source at the given pixel.
 * @param L		- Light direction in world space
 * @param N		- Normal direction of the pixel being rendered in world space
 * @param C		- Light colour
 **/
float4 CalculateDiffuseContrib(float3 L, float3 N, float4 C) {
	float NL = max(0, dot(N, L));
	return C * NL;
}

Since the only real factor here is the fragment normal (the light direction should be correct; it is calculated as normalize(lightPos - fragmentPos), with both in world space), I imagine the problem must somehow relate to that.

I am using normal mapping in the above screenshots, but the results are the same even when using plain interpolated vertex normals.

 

Is this a common occurrence, or could it even be that this is how it is supposed to look?

Thanks in advance for any illuminating ;) replies!




#5191574 Getting around non-connected vertex gaps in hardware tessellation displacemen...

Posted by Husbjörn on 06 November 2014 - 03:43 PM

Off the top of my head: each individual face of your cube is tessellated and then displaced independently.

You need to ensure that the edge vertices are shared between each (subdivided) side face, or else these seams will occur, since all vertices on the top face are displaced only along the up axis while all vertices on the front face are displaced only along the depth axis.

A simple solution is to displace along the vertex normals and ensure that wherever you have overlapping vertices (such as at the corners of a cube), you set the normal of all such vertices to the average of all "actual" vertex normals at that position. This will make the edges a bit more bulky but keeps the faces connected; a sketch of this averaging pass follows below.
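A sketch of that averaging step, assuming a simple vertex array (Vertex and PositionsEqual are hypothetical names; Vertex is assumed to have XMFLOAT3 position / normal members, and PositionsEqual compares positions with some epsilon):

// Average the normals of all vertices sharing a position, so displacement moves them identically
std::vector<XMFLOAT3> averaged(vertices.size());
for(size_t i = 0; i < vertices.size(); i++) {
	XMVECTOR sum = XMVectorZero();
	for(size_t j = 0; j < vertices.size(); j++) {
		if(PositionsEqual(vertices[i].position, vertices[j].position))
			sum = XMVectorAdd(sum, XMLoadFloat3(&vertices[j].normal));
	}
	XMStoreFloat3(&averaged[i], XMVector3Normalize(sum));
}
for(size_t i = 0; i < vertices.size(); i++)
	vertices[i].normal = averaged[i];	// write back after the pass so results don't feed each other

(The O(n²) scan is just for clarity; a position-keyed hash map would do this in a single pass.)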

 

My previous post in this thread (just above yours) describes how I solved this in a relatively simple way in more detail.




#5178123 Seemingly incorrect buffer data used with indirect draw calls

Posted by Husbjörn on 04 September 2014 - 01:13 PM


Like any other GPU-executed command, CopyStructureCount has implicit synchronization with any commands issued afterwards. So there shouldn't be any kind of manual waiting or synchronization required, the driver is supposed to handle it.

That's what I thought.

 

After a third rewrite (and a full rewrite of the rendering shaders as well), it turned out I had built my quads the wrong way in the geometry shader, so they weren't visible; the appropriate vertex count does indeed seem to be passed to the DrawInstancedIndirect call. However, RenderDoc is still reporting the call as having a zero vertex count argument, so I guess there's a quite sneaky bug in there too, which threw me off (naturally I expected it to show the correct value).

Thanks for your suggestions though :)

 

 

Edit: Didn't see your ninja post baldurk.

 

 

 

To clarify - the number that you see in the DrawInstancedIndirect(<X, Y>) in the event browser in RenderDoc is just retrieved the same way as you described by copying the given buffer to a staging buffer, and mapping that.

That is indeed weird, because now I do get the proper count read back when I map it to a staging buffer myself, as well as the correct draw results, yet RenderDoc claims the function is called with the arguments <0, 1>. I guess it clips away the last two offset integers, because in reality the buffer should contain 4 values (mine would be x, 1, 0, 0), right?

My byte offset is zero; there is nothing more in the indirect buffer than the 16 bytes representing the argument list.
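Mapping the argument buffer through a staging copy boils down to roughly this (a sketch; pDevice, pContext and pArgsBuffer are assumed names):

// Copy the 16-byte indirect argument buffer into a CPU-readable staging buffer
D3D11_BUFFER_DESC desc = {};
pArgsBuffer->GetDesc(&desc);
desc.Usage		= D3D11_USAGE_STAGING;
desc.BindFlags		= 0;
desc.MiscFlags		= 0;	// the staging copy must drop the DRAWINDIRECT_ARGS flag
desc.CPUAccessFlags	= D3D11_CPU_ACCESS_READ;

ID3D11Buffer* pStaging = nullptr;
pDevice->CreateBuffer(&desc, nullptr, &pStaging);
pContext->CopyResource(pStaging, pArgsBuffer);

D3D11_MAPPED_SUBRESOURCE mapped;
if(SUCCEEDED(pContext->Map(pStaging, 0, D3D11_MAP_READ, 0, &mapped))) {
	const UINT* args = static_cast<const UINT*>(mapped.pData);
	// args[0] = VertexCountPerInstance, args[1] = InstanceCount,
	// args[2] = StartVertexLocation,    args[3] = StartInstanceLocation
	pContext->Unmap(pStaging, 0);
}
pStaging->Release();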

 

I'll try adding complexity back to my currently working minimalistic program, to see whether it still renders correctly and whether RenderDoc keeps showing that 0 (or something else unreasonable), and will report back. Maybe the problems will resurface in a different way, though I hope not.




#5172779 Unordered access view woes with non-structured buffers

Posted by Husbjörn on 11 August 2014 - 08:05 AM

RWBuffer<float3> RWBuf : register(u0);

But it fails at the call to ID3D11Device::CreateUnorderedAccessView, so I don't think the shader declaration is of any relevance, since the two haven't been bound together at that point.
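One possible culprit, offered as an assumption: typed UAV support for three-component 32-bit formats such as DXGI_FORMAT_R32G32B32_FLOAT is optional and rarely available, so a four-component element may succeed where float3 fails. A sketch (pDevice, pBuffer and elementCount are assumed names; the shader side would then become RWBuffer<float4>):

D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc = {};
uavDesc.Format			= DXGI_FORMAT_R32G32B32A32_FLOAT;	// 4 components instead of 3
uavDesc.ViewDimension		= D3D11_UAV_DIMENSION_BUFFER;
uavDesc.Buffer.FirstElement	= 0;
uavDesc.Buffer.NumElements	= elementCount;

ID3D11UnorderedAccessView* pUAV = nullptr;
HRESULT hr = pDevice->CreateUnorderedAccessView(pBuffer, &uavDesc, &pUAV);	// check hr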




#5172584 Rendering blended stuff before skybox?

Posted by Husbjörn on 10 August 2014 - 05:50 AM

I would draw the skybox first, since it should always be furthest in the background anyway, and you should sort your transparent objects back-to-front.

If you draw the skybox last, your transparent objects will only blend with opaque and other, previously drawn transparent objects, but not with the skybox. This means they will get an edge around their transparent parts in whatever colour they were blended with, which will be the render target clear colour wherever only the skybox lies behind them. Of course that won't look pretty once the sky gets filled in around those blended areas ;)
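As pseudocode, the suggested frame order (all function names are placeholders):

DrawSkybox();				// fills every pixel first, so nothing blends against the clear colour
DrawOpaqueObjects();			// depth-tested as usual
SortTransparentObjectsBackToFront();
DrawTransparentObjects();		// each one blends against everything already behind it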




#5163027 Getting around non-connected vertex gaps in hardware tessellation displacemen...

Posted by Husbjörn on 26 June 2014 - 10:10 AM

Sorry for the long title; I couldn't figure out how to express it more briefly without being overly ambiguous as to what this post is about.

 

Anyway, I've been poking around with displacement mapping using the hardware tessellation features of DX11 (to get some more vertices to actually displace) over the last few days, for no particular reason other than to try it out, so I'm not really looking for ways around some specific problem.

Displacing a sphere or some other surface with completely connected faces works out as intended, but issues obviously occur where there are multiple vertices with the same position but different normals: these vertices get displaced in different directions and thus become disconnected, so gaps appear in the geometry.

I tried to mock up a simple solution by finding out which vertices share positions in my meshes and then setting a flag telling my domain shader not to displace those vertices at all; it wouldn't be overly pretty, I reasoned, but at least the mesh should be gapless and hopefully not too noticeable. Of course this didn't work out very well (whole subdivision patches generated from such overlapping vertices had their displacement factors set to 0, creating quite obvious, large frames around right angles and such).

What I'm wondering is basically whether this is a reasonable approach to refine further, or if there are better ways to go about it. The only article on the topic I've managed to find mostly went on about the exquisiteness of Bezier curves but didn't really seem to come to any conclusions (although maybe those would have been obvious to anyone with the required math skills).

Thankful for any pointers on this; the more I try to force it, the more it feels like I'm probably missing something.

 

As for my implementation of the tessellation, I've mostly based it on what is described in chapters 18.7 and 18.8 of Introduction to 3D Game Programming with DirectX 11 (http://www.amazon.com/Introduction-3D-Game-Programming-DirectX/dp/1936420228).





