

Member Since 17 Jul 2012
Offline Last Active Mar 09 2013 04:35 PM

Topics I've Started

Writing to the Stencil Buffer

06 February 2013 - 09:57 AM

Hi guys,


is there any way in standard D3D11 to write values directly to the stencil buffer, for use in another pass? What I want to do is:


  • Have a screen-space shader that reads a bunch of inputs and generates an 8-bit texture.
  • Copy this 8-bit texture into a depth-stencil texture in the stencil channel.
  • Use that depth-stencil to mask further drawing.


I can think of a couple of ways to achieve the same goal without doing this, but they all require excessive bandwidth, e.g.:


  • Run a screen-space pass for each stencil value (0-255), reading the inputs and using clip to discard pixels that don't match.
  • Output a single, fixed stencil value on each pass.
  • Use this as the mask.


On each of those 256 passes I'm going to be reading the inputs repeatedly, and that's not great.


I know D3D11 has no equivalent of ARB_shader_stencil_export, so I was hoping there'd be some way of using typeless formats to achieve this, but so far my testing hasn't revealed anything (e.g. typeless formats can't be bound as render targets).

D3D11 Feature Level 10.0 on Intel HD Laptop

20 October 2012 - 06:23 AM


I have the strangest of problems - I'm trying to run my game on my laptop and nothing is rendering. My setup is:

Intel HD Graphics on Toshiba laptop, latest drivers installed
Windows 7 64-bit
x86 target
Uses the D3D11 API, but at feature level 10.0.

Basically, as far as I can tell from all three debug tools I'm using, everything I'm doing is correct. PIX provides the most detail: lots of vertices go into the vertex shader correctly, and I can inspect them in the PreVS tab, but the PostVS tab shows garbage output data, so something is clearly going wrong. Intel's GPA and AMD's GPUPerf tools tell a similar story, in much less detail.

However, if I step-debug the vertex shader, all the input data and shader constants are correctly set up and my output structure is correctly assembled! What's even stranger is that running through the WARP device makes everything render correctly.

I'm highly suspicious about this being a driver bug but there's every chance I'm doing something odd myself. Has anybody seen anything like this before and can give any pointers?

[SOLVED] Dual Quaternion Skinning failing on blend

17 July 2012 - 02:09 PM


I've implemented dual quaternion skinning and for some reason it just doesn't work. I'm hoping somebody will have ideas on further avenues for fixing this. To start with, I calculate all my bone transforms with quaternion rotations and position vectors. I've verified this works by converting the final result to a matrix and doing basic linear skinning.

My quat/vec to dual quaternion conversion function is:

void iUQTtoUDQ(math::quat* dual, const math::quat& q, const math::vec3& p)
{
   // Straight copy of rotation
   dual[0].x = q.x;
   dual[0].y = q.y;
   dual[0].z = q.z;
   dual[0].w = q.w;

   // Multiply rotation by pure quaternion position and scale by 0.5 (dual scalar)
   dual[1].x = 0.5f * ( p.x * q.w + p.y * q.z - p.z * q.y );
   dual[1].y = 0.5f * (-p.x * q.z + p.y * q.w + p.z * q.x );
   dual[1].z = 0.5f * ( p.x * q.y - p.y * q.x + p.z * q.w );
   dual[1].w = 0.5f * (-p.x * q.x - p.y * q.y - p.z * q.z );
}

This matches my quaternion multiplication function with w set to zero for converting a vector into a pure quaternion.

This is my skinning function, with debug code left intact:

float3 SkinPositionDualQuat(float4 pos, uint4 bone_indices, float4 bone_weights)
{
   float2x4 dq0 = GetBoneDualQuat(bone_indices.x);
   float2x4 dq1 = GetBoneDualQuat(bone_indices.y);
   float2x4 dq2 = GetBoneDualQuat(bone_indices.z);
   float2x4 dq3 = GetBoneDualQuat(bone_indices.w);

   // DEBUG: Here I'm rescaling weights to test weighting by 1, 2, 3 and 4 bones at a time
   // Weights are sorted, largest influence first

   //bone_weights.y = 0;
   bone_weights.z = 0;
   bone_weights.w = 0;
   float t = dot(bone_weights, 1);
   bone_weights /= t;

   // Antipodality checks

   if (dot(dq0[0], dq1[0]) < 0.0) bone_weights.y *= -1.0;
   if (dot(dq0[0], dq2[0]) < 0.0) bone_weights.z *= -1.0;
   if (dot(dq0[0], dq3[0]) < 0.0) bone_weights.w *= -1.0;

   // Weight dual quaternions
   float2x4 result =
      bone_weights.x * dq0 +
      bone_weights.y * dq1 +
      bone_weights.z * dq2 +
      bone_weights.w * dq3;

   // Normalise the result and transform

   float normDQ = length(result[0]);
   result /= normDQ;
   return DQTransformPoint( result[0], result[1], pos.xyz );
}

Now for the interesting bit of code:

float3 DQTransformPoint( float4 realDQ, float4 dualDQ, float3 pt )
{
   return pt + 2 * cross(realDQ.xyz, cross(realDQ.xyz, pt) - realDQ.w * pt)
            + 2 * (realDQ.w * dualDQ.xyz - dualDQ.w * realDQ.xyz + cross(realDQ.xyz, dualDQ.xyz));
}

This matches the original source from Kavan, except that I've swapped the last sign on the first line (-realDQ.w * pt used to be +realDQ.w * pt). If I don't do this, it doesn't work.

So to the problem: If I weight by one bone only, this all works perfectly. However, when my character bends their arm up to their shoulder, the entire arm skews like crazy.

I can't believe the linear interpolation of dual quats above is the source of the problem (unless dual quats are a modern version of the emperor's new clothes). My guess is it's something to do with the fact that I've changed DQTransformPoint to work in my engine but haven't changed the encoding to match. Something is amiss!

Can anybody think of anywhere I can go with this?