
FriendlyFire

Member Since 18 Apr 2012
Offline Last Active Dec 04 2012 11:50 AM

Topics I've Started

D3D9 Position Reconstruction Shaking

01 December 2012 - 04:42 PM

So I'm using a fairly traditional method for reconstructing view-space position from depth in a vertex/pixel shader pair. First, here's the relevant code:

VS_FARCLIPVEC VS_RenderScreen( float4 vPos : POSITION,
                               float2 vTexCoord0 : TEXCOORD0)
{
    VS_FARCLIPVEC OUT;
    OUT.Position = vPos;

    // Map the texture coordinates to clip space ([-1, 1], Y flipped) on the far plane.
    float fTexX = vTexCoord0.x * 2 - 1;
    float fTexY = (1 - vTexCoord0.y) * 2 - 1;
    float4 fProjPoint = float4(fTexX, fTexY, 1, 1);

    // Unproject to get the view-space vector through this pixel to the far clip plane.
    float4 fFarClipVector = mul(fProjPoint, g_mProjectionInverse);
    fFarClipVector.xyz /= fFarClipVector.w;

    OUT.vTexCoord0 = vTexCoord0;
    OUT.fFarClipVector = fFarClipVector;

    // Light position, transformed from world space into view space.
    float4 fLightPosVS = mul(float4(g_singleLightPos, 1), g_mView);
    OUT.fLightPosVS = fLightPosVS.xyz / fLightPosVS.w;

    return OUT;
}

float4 PS_CreateShadowBufferPlanets( VS_FARCLIPVEC IN ) : COLOR0
{
    // A depth of 1.0 means nothing was rendered here, so skip it.
    float fDepth = tex2D(Tex0Sampler, IN.vTexCoord0).r;
    if (fDepth == 1.0)
        discard;

    // Reconstruct the view-space position by scaling the far-clip vector by the
    // depth (assumed here to be linear and normalized to the far plane).
    float3 vPosVS = IN.fFarClipVector.xyz * fDepth;

    float dist = length(vPosVS - g_vLightPosVS.xyz);

    // Ray/sphere intersection from the shaded point towards the light,
    // tested against every planet that could occlude it.
    for (int i = 0; i < g_iPlanetCount; i++)
    {
        float3 p1 = vPosVS;                    // ray origin (shaded point)
        float3 p2 = g_vLightPosVS.xyz;         // light position
        float3 p3 = g_vPlanetPositions[i].xyz; // planet center
        float r = g_fPlanetRadii[i];           // planet radius

        float3 d = normalize(p2 - p1);
        float a = dot(d, d);
        float b = 2.0 * dot(d, p1 - p3);
        float c = dot(p3, p3) + dot(p1, p1) - 2.0 * dot(p3, p1) - r * r;

        float discriminant = b * b - 4 * a * c;
        if (discriminant >= 0)
        {
            float smoothing = sqrt(discriminant) / (4 * a);
            float dist2 = -b / (4 * a) - smoothing;
            if (dist2 > 0 && dist2 < dist)
                return smoothing / r * 7;
        }
    }

    discard;
    return 0;
}

This code renders planet shadows at any distance without shadow maps. Performance isn't ideal, but I've chosen this approach because it needs a minimal number of inputs. It largely works as expected, except for one thing: the calculated shadows shake, badly. Whenever I rotate the camera, even just a little, the shadows slide slightly in one direction, then snap back to their original location, and this repeats as the camera keeps rotating. If the camera translates instead, the effect still appears, but it takes a very large amount of movement before a single "snap" happens.

If the camera doesn't move, the shadows are pixel-accurate with no visible defect. g_vPlanetPositions and g_vLightPosVS start as world-space XYZ coordinates and are transformed into the camera's view space on the CPU before the shader runs. I don't believe those are the problem, because another shadow algorithm I have (one using cascaded shadow maps) performs the same transform.
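For reference, the CPU-side setup is essentially this. It's a simplified sketch rather than my exact code; the helper name, parameter names and the 16-planet cap are just for illustration:

#include <d3dx9.h>

// Sketch: transform the world-space light/planet data into view space and
// upload it before the shadow pass runs (function and array names are illustrative).
void SetPlanetShadowConstants(ID3DXEffect* effect,
                              const D3DXMATRIX& matView,
                              const D3DXVECTOR3& lightPosWS,
                              const D3DXVECTOR3* planetPosWS,
                              const float* planetRadii,
                              int planetCount)
{
    // Light: world space -> view space.
    D3DXVECTOR3 lightPosVS;
    D3DXVec3TransformCoord(&lightPosVS, &lightPosWS, &matView);
    effect->SetFloatArray("g_vLightPosVS", (const FLOAT*)&lightPosVS, 3);

    // Planets: world space -> view space.
    D3DXVECTOR4 planetPosVS[16]; // assumes at most 16 planets
    for (int i = 0; i < planetCount; ++i)
    {
        D3DXVECTOR3 posVS;
        D3DXVec3TransformCoord(&posVS, &planetPosWS[i], &matView);
        planetPosVS[i] = D3DXVECTOR4(posVS, 1.0f);
    }
    effect->SetVectorArray("g_vPlanetPositions", planetPosVS, planetCount);
    effect->SetFloatArray("g_fPlanetRadii", planetRadii, planetCount);
    effect->SetInt("g_iPlanetCount", planetCount);
}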

Therefore, I suspect one of two things: either my view matrix or my projection matrix is wrong, though I don't know which. To compute the inverse projection matrix I just use D3DXMatrixInverse, and the projection matrix itself is a simple D3DXMatrixPerspectiveFovLH call with a field of view between 70 and 90 degrees.

The view matrices are calculated from the camera position and a rotation matrix.
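In sketch form (again simplified and not the exact code; the function and parameter names are placeholders), the matrix setup is roughly:

#include <d3dx9.h>

// Sketch of how the projection, inverse projection and view matrices are built.
void BuildCameraMatrices(float fovDegrees, float aspect, float zNear, float zFar,
                         const D3DXMATRIX& matCamRotation, const D3DXVECTOR3& camPos,
                         D3DXMATRIX* outProj, D3DXMATRIX* outProjInv, D3DXMATRIX* outView)
{
    // Projection and its inverse -- this is all that goes into g_mProjectionInverse.
    D3DXMatrixPerspectiveFovLH(outProj, D3DXToRadian(fovDegrees), aspect, zNear, zFar);
    D3DXMatrixInverse(outProjInv, NULL, outProj);

    // View matrix from the camera rotation matrix and position: build the camera's
    // world transform (rotation in the upper 3x3, position in the translation row)
    // and invert it.
    D3DXMATRIX matCamWorld = matCamRotation;
    matCamWorld._41 = camPos.x;
    matCamWorld._42 = camPos.y;
    matCamWorld._43 = camPos.z;
    D3DXMatrixInverse(outView, NULL, &matCamWorld);
}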

It must be said that the scale of the scene is fairly large: the distances between the light and the casters can be in the 500,000-unit range, and the same goes for the receivers. I'm not certain the whole thing comes down to floating-point precision, though, because my shadow maps achieve much higher accuracy than the steps I'm seeing, and the rest of the scene renders fine. (For what it's worth, the spacing between adjacent 32-bit floats near 500,000 is only about 0.03 to 0.06 units.)

Does anyone have pointers as to what could cause this? I can paste more code if necessary, just ask.

SMAA Implementation - Weights Pass

18 April 2012 - 09:00 PM

For the past few months I've been working on a fairly complex project that involves rebuilding a graphics engine from DirectX 8 to DirectX 9. We've made great progress and added a full deferred renderer to the project. However, that obviously causes problems with antialiasing, so we implemented FXAA 3.1 some time ago.

Lately I've been thinking about switching over to SMAA 2.7, since the results SMAA gets are just much better than FXAA's. I've integrated the code provided by the team without trouble, but the resulting data is wrong. SMAA is split into three passes: edge detection, blending weight calculation, and finally neighborhood blending. The first pass seemingly works perfectly, but the second one doesn't produce the results it should, so the final image isn't even antialiased.

As a side note, I'm using the precompiled DX10 demo application for comparison, but I've taken most of the code and logic from the DX9 sample.

First of all, here's the code which sets up the postprocess:
if(HkData::iSetSMAA)
{
#ifdef DEBUG_PIPELINE
    OutputDebugString("SMAA\n");
    D3DPERF_BeginEvent(PIX_COLOR, L"SMAA");
#endif
    if(HkData::bBenchmark)
        timerBench.start();

    // Save the states we are about to overwrite.
    IDirect3DVertexDeclaration9* oldVertexDecl;
    d3d9_realdevice->GetVertexDeclaration(&oldVertexDecl);
    IDirect3DVertexBuffer9* pVBOLD;
    uint iVBOLD_OFFSET, iVBOLD_STRIDE;
    d3d9_realdevice->GetStreamSource(0, &pVBOLD, &iVBOLD_OFFSET, &iVBOLD_STRIDE);

    d3d9_realdevice->SetVertexDeclaration(g_pVertDeclPP);
    d3d9_realdevice->SetStreamSource(0, pVB, 0, sizeof(PPVERT));

    // Copy the back buffer into a texture so it can be sampled.
    d3d9_realdevice->StretchRect(surfBackBuffer, 0, surfBackBufferTex, 0, D3DTEXF_NONE);

    // Edge detection pass.
    d3d9_realdevice->SetRenderTarget(0, surfSMAA_Edge);
    d3d9_realdevice->SetDepthStencilSurface(surfOldDepth);
    d3d9_realdevice->SetRenderState(D3DRS_STENCILENABLE, TRUE);
    d3d9_realdevice->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE);
    d3d9_realdevice->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_STENCIL, 0x00000000, 0.0f, 0);
    g_pEffectSMAA->SetTexture("colorTex2D", texBackBuffer);

    g_pEffectSMAA->SetTechnique("LumaEdgeDetection");
    g_pEffectSMAA->Begin(&iPasses, 0);
    g_pEffectSMAA->BeginPass(0);
    d3d9_realdevice->DrawPrimitive(D3DPT_TRIANGLESTRIP, 0, 2);
    g_pEffectSMAA->EndPass();
    g_pEffectSMAA->End();

    // Blending weight calculation pass.
    d3d9_realdevice->SetRenderTarget(0, surfSMAA_Blend);
    d3d9_realdevice->Clear(0L, NULL, D3DCLEAR_TARGET, 0x00000000, 1.0f, 0L);
    g_pEffectSMAA->SetTexture("edgesTex2D", texSMAA_Edge);
    g_pEffectSMAA->SetTexture("areaTex2D", texSMAA_Area);
    g_pEffectSMAA->SetTexture("searchTex2D", texSMAA_Search);

    g_pEffectSMAA->SetTechnique("BlendWeightCalculation");
    g_pEffectSMAA->Begin(&iPasses, 0);
    g_pEffectSMAA->BeginPass(0);
    d3d9_realdevice->DrawPrimitive(D3DPT_TRIANGLESTRIP, 0, 2);
    g_pEffectSMAA->EndPass();
    g_pEffectSMAA->End();

    // Blit the weights to the back buffer for inspection.
    d3d9_realdevice->StretchRect(surfSMAA_Blend, 0, surfBackBuffer, 0, D3DTEXF_NONE);

    // Final (neighborhood blending) pass, currently commented out.
    /*d3d9_realdevice->SetRenderState(D3DRS_STENCILENABLE, FALSE);
    d3d9_realdevice->SetRenderTarget(0, surfBackBuffer);
    g_pEffectSMAA->SetTexture("blendTex2D", texSMAA_Blend);
    g_pEffectSMAA->SetTexture("colorTex2D", texBackBuffer);

    g_pEffectSMAA->SetTechnique("NeighborhoodBlending");
    g_pEffectSMAA->Begin(&iPasses, 0);
    g_pEffectSMAA->BeginPass(0);
    SMAARenderQuad(surfdescBackBuffer.Width, surfdescBackBuffer.Height);
    g_pEffectSMAA->EndPass();
    g_pEffectSMAA->End();*/

    // Restore the saved states.
    d3d9_realdevice->SetVertexDeclaration(oldVertexDecl);
    oldVertexDecl->Release();
    d3d9_realdevice->SetRenderState(D3DRS_STENCILENABLE, FALSE);
    d3d9_realdevice->SetStreamSource(0, pVBOLD, iVBOLD_OFFSET, iVBOLD_STRIDE);
    if(pVBOLD)
        pVBOLD->Release(); // GetStreamSource added a reference
    d3d9_realdevice->SetPixelShader(NULL);
    d3d9_realdevice->SetVertexShader(NULL);

    if(HkData::bBenchmark)
        bench_current.tAA = timerBench.stop();
#ifdef DEBUG_PIPELINE
    D3DPERF_EndEvent();
#endif
}

The shader code I'm using is taken bit-for-bit from the DX9 demo, so I won't copy the (lengthy) files here. Here's a link to the release I've been working with: https://github.com/iryoku/smaa/zipball/v2.7
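In case it matters, the effect itself is compiled more or less like this. This is a paraphrased sketch rather than the exact code; the path, function name and resolution handling are illustrative, and the defines are the ones documented in SMAA.h, if I have them right:

#include <d3dx9.h>
#include <cstdio>

// Rough sketch of how the SMAA effect gets compiled (paths and names are illustrative).
ID3DXEffect* LoadSMAAEffect(IDirect3DDevice9* device, int width, int height)
{
    char pixelSize[64];
    sprintf_s(pixelSize, "float2(1.0 / %d.0, 1.0 / %d.0)", width, height);

    D3DXMACRO defines[] = {
        { "SMAA_PIXEL_SIZE",  pixelSize }, // render target dimensions
        { "SMAA_HLSL_3",      "1" },       // DX9 / Shader Model 3 path
        { "SMAA_PRESET_HIGH", "1" },       // quality preset
        { NULL, NULL }
    };

    ID3DXEffect* effect = NULL;
    LPD3DXBUFFER errors = NULL;
    D3DXCreateEffectFromFileA(device, "SMAA.fx", defines, NULL,
                              D3DXFX_NOT_CLONEABLE, NULL, &effect, &errors);
    if (errors)
    {
        OutputDebugStringA((const char*)errors->GetBufferPointer());
        errors->Release();
    }
    return effect;
}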

I've run the application with maximum debug validation and output, and nothing in the log suggests the code is hitting errors. I've also done PIX captures, but I'm not quite at the level needed to follow the shader, so stepping through it hasn't told me much. I have, however, taken screenshots of each step:

[Screenshots, in order: initial image; edges; related edge stencil; weights; weights (alpha channel); demo program edges; demo program edge stencil; demo program weights; demo program weights (alpha).]


Unless I'm missing something, it's pretty clear that my weights output is entirely different from what I should be getting. Nor can I find any channel correspondence that would suggest the two versions simply store the data in different channels. I've double-checked, and both of my surfaces are A8R8G8B8, like in the demo.
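For reference, the weights render target is created along these lines (a simplified sketch with error checks omitted; backBufferWidth/backBufferHeight stand in for the real values, and the actual texture/surface are globals):

// Sketch of the blend-weight render target creation.
IDirect3DTexture9* texSMAA_Blend  = NULL;
IDirect3DSurface9* surfSMAA_Blend = NULL;

d3d9_realdevice->CreateTexture(backBufferWidth, backBufferHeight, 1,
                               D3DUSAGE_RENDERTARGET, D3DFMT_A8R8G8B8,
                               D3DPOOL_DEFAULT, &texSMAA_Blend, NULL);
texSMAA_Blend->GetSurfaceLevel(0, &surfSMAA_Blend);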

I'm quite stumped, and I've found very little information on implementing SMAA outside of the team's own package. Most search hits lead back to the SMAA injector, which ironically works fine when wrapped around this engine (but it affects HUD elements and adds another layer of complexity, which is unacceptable). I also modified the SMAA injector's shader to dump its weights image, and it is extremely similar (if not identical) to the demo's, which makes me believe this isn't a DX9-versus-DX10 difference but a real problem in my code. However, given that the only inputs to the blend weight pass are the edge texture, the area texture and the search texture, and all three looked fine in PIX, I'm not sure where the error is.

Any help would be greatly appreciated.
