DirectX ddx/ddy

Started by
12 comments, last by WizardOfOzzz 12 years, 11 months ago
I'm trying to understand some of the results I'm getting using the ddx and ddy functions. Image 1 shows the normal, image 2 shows ddx(normal) and image 3 shows ddy(normal), where the ddx and ddy are scaled so that the result is visible. If the per-pixel normals display smoothly, due to interpolation between the vertex normals, why don't the partial derivatives?

With the sphere, and dndx, I would have expected the change in the normal with respect to the screen-space x coordinate to increase smoothly as you move around the circumference, starting from the centre of the sphere in the image.


float4 PS(VS_OUT pIn) : SV_Target
{
    float3 n = pIn.normalW;
    float3 dndx = ddx( pIn.normalW );
    float3 dndy = ddy( pIn.normalW );

    float sn = 100.0f; // scale so the derivatives are visible
    return float4( n, 1 );         // image 1
    //return float4( sn*dndx, 1 ); // image 2
    //return float4( sn*dndy, 1 ); // image 3
}


[Image 1: interpolated normal]
[Image 2: scaled ddx(normal)]
[Image 3: scaled ddy(normal)]
Well, intuitively, that's about what I'd expect. The per-vertex normals get interpolated to appear smooth. However, they're only going to be C0 continuous (I think that's the right term). Basically, the rasterized polygons are each going to have smooth normals across the face. Along the edges between faces, things will line up, because the triangle edges are rasterized the same on both sides of the edge. However, as soon as you cross that edge, you're into a new polygon. That polygon is going to have a different slope (in screen space), because the normal is now changing faster or slower (due to the relative sizes of the polygons). Using ddx and ddy, you're simply doing that differentiation, so you see the effects of the linear interpolation performed by the rasterizer.
The ddx and ddy only give you the rate of change of a parameter across a pixel quad.

So, for example, if your quad is 2x2 then each of those 4 pixels will get the same value for ddx and ddy regardless of the per-pixel real rate of change.

DX11 does introduce more precise ddx_fine and ddy_fine functions (alongside ddx_coarse/ddy_coarse) which might give 'better' results but of course cost more.
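For what it's worth, the coarse vs. fine behaviour can be mimicked on the CPU. Here's a minimal Python sketch (not HLSL) of how the two flavours could differ over a single 2x2 quad; the function names and values are made up for illustration:

```python
# Illustrative sketch of quad derivatives on a 2x2 quad of scalar values:
#   [[a, b],
#    [c, d]]   (row 0 on top, x increasing to the right)

def coarse_ddx(quad):
    """One horizontal difference reused by all four pixels (ddx_coarse-style)."""
    (a, b), (c, d) = quad
    return [[b - a, b - a], [b - a, b - a]]

def fine_ddx(quad):
    """A separate horizontal difference per row (ddx_fine-style)."""
    (a, b), (c, d) = quad
    return [[b - a, b - a], [d - c, d - c]]

quad = [[1.0, 3.0], [2.0, 7.0]]
print(coarse_ddx(quad))  # [[2.0, 2.0], [2.0, 2.0]]
print(fine_ddx(quad))    # [[2.0, 2.0], [5.0, 5.0]]
```

Either way, the derivative is constant over at least a pair of pixels, which is why a per-pixel "true" rate of change is never observed.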

But because the normals are interpolated, shouldn't the pixel normals on either side of an edge, when viewed up close, be almost identical? Meaning that the ddx(n) of each pixel quad on either side of an edge should be only slightly different?


I'm not sure I follow. Are you just wondering about the effect right along polygon edges, or the faceted look of the whole thing? For the overall effect, it looks like you're drawing a sphere using ~16 segments. So the normals in each segment need to change by a constant amount (360 / 16 degrees). Within each segment, they're going to change at a constant rate in screen space: (360 degrees / 16) / [width in pixels of polygon]. That's highly sensitive to the screen-space coverage of the segments - and you can see this in your images. The segments near the 'front' of the sphere have a much smaller change in the dndx color than those near the edge. As you approach the silhouette of the sphere, the rasterized size of the segments approaches zero very quickly.

For another way of thinking - you're trying to differentiate a function (the sphere normal). It *looks* like it's a smooth/continuous function, but it isn't. Because of the polygons and the rasterizer, it's actually a piecewise-linear approximation of the true result. When you differentiate a piecewise-linear function, you get a step function that is just the constant slope of each linear segment...
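That step-function behaviour is easy to reproduce outside a shader. Here's a small Python sketch (illustrative values only): equal per-vertex value changes over unevenly sized screen segments give a piecewise-linear function whose forward differences take one constant value per segment:

```python
# Vertex screen positions shrink toward the silhouette (like a projected
# sphere), but the per-vertex value still changes by the same amount per
# segment -- so the slope jumps at each vertex.
xs = [0.0, 20.0, 30.0, 35.0]   # uneven segment widths in screen space
vs = [0.0, 1.0, 2.0, 3.0]      # equal value change per segment

def lerp_piecewise(x):
    """Piecewise-linear interpolation, like the rasterizer's attribute interp."""
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return vs[i] * (1 - t) + vs[i + 1] * t
    raise ValueError("x out of range")

samples = [lerp_piecewise(x) for x in range(35)]
# Forward differences, which is essentially what ddx computes:
diffs = [round(b - a, 6) for a, b in zip(samples, samples[1:])]
print(sorted(set(diffs)))  # only three distinct slopes: [0.05, 0.1, 0.2]
```

The interpolated values themselves look smooth, but their derivative takes exactly one value per segment, with a jump at every vertex.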
Let's consider 3 vertices. For simplicity we'll work in 1D.

One vertex on the left (x=0), one in the middle (x=width/2), one on the right (x=width).

Mathematically, for x < width/2 the rasterizer linearly interpolates:

pixelNormal[x] = normal0 * (1 - x/(width/2)) + normal1 * (x/(width/2));

pixelNormal[x+1] = normal0 * (1 - (x+1)/(width/2)) + normal1 * ((x+1)/(width/2));


ddx approximates the derivative with the difference between horizontally adjacent pixels:

ddx(normal) = pixelNormal[x+1] - pixelNormal[x];


However, since normal0 & normal1 are always the same, every pixel in the segment gets the same difference:

pixelNormal[x+1] - pixelNormal[x] = pixelNormal[x+2] - pixelNormal[x+1] = pixelNormal[x+3] - pixelNormal[x+2]


If you expand and simplify the expression, all the x terms cancel and you'll end up with:

ddx( normal ) = (normal1 - normal0) / (width/2)

for all pixels with 0 <= x < width/2.


For pixels with width/2 < x < width you'll get:

pixelNormal[x] = normal1 * (1 - (x - width/2)/(width/2)) + normal2 * ((x - width/2)/(width/2));

//Which following the same process ends up as:

ddx( normal ) = (normal2 - normal1) / (width/2)


Hence the jump in colour when crossing into a different interval.


This is why you get such discrete colours. The formula is constant for every pixel between each pair of vertices. By definition, you will never get smooth colours unless you use one vertex per pixel; at least not the way you're doing it. The actual process is a little more complex because it's 2D and involves interpolating between 3 vertices, not 2; but the end result is the same.
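Here's a quick numeric check of that derivation, as a Python sketch with made-up scalar "normals" at x = 0, width/2 and width:

```python
# Three "vertices" at x = 0, width/2 and width, with scalar normals.
# Forward differences should be constant within each half and jump at the
# middle vertex.  All values are illustrative.
width = 16
normal0, normal1, normal2 = 0.0, 1.0, 4.0

def pixel_normal(x):
    """Linear interpolation across the two segments, like the rasterizer."""
    half = width / 2
    if x <= half:
        t = x / half
        return normal0 * (1 - t) + normal1 * t
    t = (x - half) / half
    return normal1 * (1 - t) + normal2 * t

# Forward differences, like ddx:
d = [pixel_normal(x + 1) - pixel_normal(x) for x in range(width)]
print(d[:8])  # left half: (normal1 - normal0) / (width/2) = 0.125 everywhere
print(d[8:])  # right half: (normal2 - normal1) / (width/2) = 0.375 everywhere
```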
So considering a 1D piecewise function, with points x0 to x3 and values v0 to v3, where the points are unevenly spaced out (as would be the case when same-sized shapes are projected onto the near plane after being multiplied by the projection matrix):

                                 v2
         v1
v0                                         v3
x0-------x1----------------------x2--------x3



the value will change smoothly within a segment, say between x0 and x1, because it's interpolated. But when crossing a vertex into a new segment, say x1 to x2: even though the value change from v1 to v2 is the same as from v0 to v1, the distance from x1 to x2 is different from the distance from x0 to x1. That gives a big change in the calculated derivative between the pixels on the left and right of x1, producing the look of a discrete change in the images.
Indeed, the rate of change stays constant between two vertices. That's because mathematically everything that varies per pixel cancels out. If you manage to put some variation into the formula so that terms unique to each pixel remain, you may get the smooth results you wanted.

Try normalizing the normal before dd'ing, that may do the trick:

float4 PS(VS_OUT pIn) : SV_Target
{
    float3 n = normalize( pIn.normalW );
    float3 dndx = ddx( n );
    float3 dndy = ddy( n );

    float sn = 100.0f; // scale so the derivatives are visible
    return float4( n, 1 );         // image 1
    //return float4( sn*dndx, 1 ); // image 2
    //return float4( sn*dndy, 1 ); // image 3
}




I tried that and it didn't help, but going by your description, how would that help?

Another thing I was wondering about is how the pixel quads are implemented. Say I have 9 pixels in my buffer:

1 2 3
4 5 6
7 8 9

Then pixel 7's ddx is 8-7 and its ddy is 4-7. Now, is pixel 8's ddx 9-8 and its ddy 5-8? The reason I ask is because wouldn't that mean that all pixels are dependent on their neighbours, which would prevent a lot of the parallelism?

Or is it the case that pixels 4, 5, 7, 8 all share the same ddx (9-8) and all share the same ddy (4-7)? Meaning you only get dependencies within pixel quads, but it's less accurate.



You are close with your second guess. Pixels 4 and 5 have ddx = 5 - 4, and pixels 7 and 8 have ddx = 8-7. Similarly, pixels 4 and 7 have ddy = 7 - 4, and pixels 5 and 8 have ddy = 8 - 5. That way all processing stays within a quad, but no two pixels in a quad have exactly the same ddx and ddy.
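That rule is easy to write down concretely. Here's a small Python sketch of the quad containing pixels 4, 5, 7, 8 from the layout above (the pixel values are made up):

```python
# The 3x3 pixel layout from the question:
#   1 2 3
#   4 5 6
#   7 8 9
# One hardware quad covers pixels 4, 5 (top row) and 7, 8 (bottom row).
value = {4: 10.0, 5: 13.0, 7: 11.0, 8: 15.0}

# Horizontal differences are taken per row of the quad:
ddx = {4: value[5] - value[4], 5: value[5] - value[4],
       7: value[8] - value[7], 8: value[8] - value[7]}

# Vertical differences are taken per column of the quad:
ddy = {4: value[7] - value[4], 7: value[7] - value[4],
       5: value[8] - value[5], 8: value[8] - value[5]}

print(ddx)  # {4: 3.0, 5: 3.0, 7: 4.0, 8: 4.0}
print(ddy)  # {4: 1.0, 7: 1.0, 5: 2.0, 8: 2.0}
```

Everything needed comes from inside the 2x2 quad, so the only cross-pixel dependency is between the four pixels that are shaded together anyway; quads remain independent of each other, which preserves the parallelism.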
"Math is hard" -Barbie

