hick18

DirectX ddx/ddy


I'm trying to understand some of the results I'm getting using the ddx and ddy functions. Image 1 shows the normal, image 2 shows ddx(normal) and image 3 shows ddy(normal), where the ddx and ddy results are scaled so they're visible. If the per-pixel normals display smoothly, due to interpolation between the vertex normals, why don't the partial derivatives?

With the sphere and dndx, I would have expected the change in normal with respect to the x screen-space coordinate to increase smoothly as you go around the circumference, starting from the centre of the sphere in the image.

[code]float4 PS(VS_OUT pIn) : SV_Target
{
    float3 n    = pIn.normalW;
    float3 dndx = ddx( pIn.normalW );
    float3 dndy = ddy( pIn.normalW );

    // Scale so the derivatives are visible; only the first return
    // executes, so comment returns in/out to produce each image:
    float sn = 100.0f;
    return float4( n, 1 );       // image 1: the normal
    return float4( sn*dndx, 1 ); // image 2: ddx(normal)
    return float4( sn*dndy, 1 ); // image 3: ddy(normal)
}[/code]

[img]http://i51.tinypic.com/3583x4z.jpg[/img]
[img]http://i51.tinypic.com/mkxkl3.jpg[/img]
[img]http://i56.tinypic.com/vhs13q.jpg[/img]

Well, intuitively, that's about what I'd expect. The per-vertex normals get interpolated to appear smooth. However, they're only going to be C0 continuous (I think that's the right term). Basically, the rasterized polygons are each going to have smoothly varying normals across the face. Along the edges between faces, things will line up, because the triangle edges are rasterized the same on both sides of the edge. However, as soon as you cross that edge, you're into a new polygon. That polygon is going to have a different slope (in screen space), because the normal is now changing faster or slower (due to the relative sizes of the polygons). Using ddx and ddy, you're simply doing that differentiation, so you see the effects of the linear interpolation performed by the rasterizer.
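To make that concrete with made-up numbers: suppose two neighbouring segments each rotate the normal by the same amount, but one covers 100 pixels on screen and the other only 20. The interpolated normal is continuous at the shared edge, yet its screen-space slope jumps from (change / 100) per pixel to (change / 20) per pixel, a factor of 5, which is exactly the kind of step you see in the scaled ddx image.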

ddx and ddy only give you the rate of change of a parameter across a pixel quad.

A quad is a 2x2 block of pixels, so each of those 4 pixels can get the same value for ddx and ddy regardless of the real per-pixel rate of change.

DX11 does introduce a more precise set of ddx and ddy functions which might give 'better' results, but of course may cost more.
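For reference, the actual Shader Model 5 intrinsics are the coarse/fine variants; a minimal sketch of using them (with the same pIn.normalW as the code above):

[code]// SM 5.0 / D3D11 only. Coarse derivatives may be computed once for the
// whole quad; fine derivatives are computed per row (ddx) or per column
// (ddy) of the quad.
float3 dndx_c = ddx_coarse( pIn.normalW );
float3 dndy_c = ddy_coarse( pIn.normalW );
float3 dndx_f = ddx_fine( pIn.normalW );
float3 dndy_f = ddy_fine( pIn.normalW );[/code]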

[quote]However, as soon as you cross that edge, you're into a new polygon. That polygon is going to have a different slope (in screen space), because the normal is now changing faster or slower (due to the relative sizes of the polygons).[/quote]

But because the normals are interpolated, shouldn't the pixel normals on either side of an edge, when viewed up close, be almost identical? Meaning that the ddx(n) of the pixel quads on either side of an edge should be only slightly different?

[quote name='hick18' timestamp='1302455409' post='4796732']But because the normals are interpolated, shouldn't the pixel normals on either side of an edge, when viewed up close, be almost identical? Meaning that the ddx(n) of the pixel quads on either side of an edge should be only slightly different?[/quote]

I'm not sure I follow. Are you just wondering about the effect right along polygon edges, or the faceted look of the whole thing? For the overall effect, it looks like you're drawing a sphere using ~16 segments. So the normals in each segment need to change by a constant amount (360 / 16 degrees). Within each segment, they're going to change at a constant rate in screen space: (360 degrees / 16) / [width in pixels of polygon]. That's highly sensitive to the screen-space coverage of the segments - and you can see this in your images. The segments near the 'front' of the sphere have a much smaller change in the dndx color than those near the edge. As you approach the silhouette of the sphere, the rasterized size of the segments approaches zero very quickly.
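To put rough numbers on that (illustrative only): 360 / 16 = 22.5 degrees per segment. A segment that rasterizes 40 pixels wide changes the normal by about 0.56 degrees per pixel, while a 4-pixel-wide segment near the silhouette changes it by about 5.6 degrees per pixel, so its ddx is 10x larger.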

Here's another way of thinking about it: you're trying to differentiate a function (the sphere normal). It [i]looks[/i] like a smooth, continuous function, but it isn't. Because of the polygons and the rasterizer, it's actually a piecewise-linear approximation of the true result. When you differentiate a piecewise-linear function, you get a step function: just the constant slope of each linear segment.
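For example (illustrative numbers only): take f(x) = 2x on [0, 1] and f(x) = 0.5x + 1.5 on [1, 3]. f is continuous at x = 1 (both pieces give 2), but f' jumps from 2 to 0.5 there, so the plot of f' is a flat step per segment, just like the banded colours in the derivative images.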

Let's say we have 3 vertices. For simplicity, we'll reduce the problem to 1D.

One vertex on the left (x = 0), one in the middle (x = width/2), one on the right (x = width).

Mathematically, for x < width/2 the rasterizer linearly interpolates:

[code]pixelNormal[x]   = normal0 * (1 - x/(width/2)) + normal1 * (x/(width/2));

pixelNormal[x+1] = normal0 * (1 - (x+1)/(width/2)) + normal1 * ((x+1)/(width/2));[/code]

At [i]any[/i] point x < width/2:

[code]ddx(normal) = pixelNormal[x+1] - pixelNormal[x];[/code]


However, since normal0 & normal1 are always the same within the segment, you get:

[code]pixelNormal[x+1] - pixelNormal[x] = pixelNormal[x+2] - pixelNormal[x+1] = pixelNormal[x+3] - pixelNormal[x+2][/code]

If you expand and simplify the expression, every term that depends on x cancels and you end up with:

[code]ddx( normal ) = (normal1 - normal0) / (width/2)[/code]

for all pixels with 0 <= x < width/2.


For pixels with width/2 <= x < width you get:

[code]pixelNormal[x] = normal1 * (1 - (x - width/2)/(width/2)) + normal2 * ((x - width/2)/(width/2));

//Which, following the same process, ends up as:

ddx( normal ) = (normal2 - normal1) / (width/2)[/code]

Hence the change in colour when switching to a different interval.

This is why you get such discrete colours: the derivative is constant for every pixel between each pair of vertices. By definition, you will never get smooth colours this way, at least not unless you use one vertex per pixel. The actual process is a little more complex, because it's 2D and involves interpolating between 3 vertices rather than 2, but the end result is the same.
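To convince yourself, here's a minimal HLSL-style sketch (hypothetical helper names, not code from this thread) that emulates the forward difference over one such segment:

[code]// Interpolated normal at pixel x within a segment [0, halfWidth),
// with endpoint normals normal0 and normal1 (all made-up names).
float3 InterpNormal(float x, float halfWidth, float3 normal0, float3 normal1)
{
    return lerp( normal0, normal1, x / halfWidth );
}

// Forward difference, the quantity ddx approximates. Expanding the lerp
// shows this equals (normal1 - normal0) / halfWidth for every x:
float3 ForwardDiff(float x, float halfWidth, float3 normal0, float3 normal1)
{
    return InterpNormal( x + 1, halfWidth, normal0, normal1 )
         - InterpNormal( x,     halfWidth, normal0, normal1 );
}[/code]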

So considering a 1D piecewise function, with points x0 to x3 and values v0 to v3 that are unevenly spaced out (as would be the case for same-sized segments projected onto the near plane by the projection matrix):
[code] v2
v1
v0 v3
x0-------x1----------------------x2--------x3[/code]


the value within a segment, say between x0 and x1, changes smoothly (linearly) when interpolated, so the rate of change there is constant. But when crossing a vertex into a new segment, you're now looking at x1 to x2, and although the value change from v0 to v1 is the same as from v1 to v2, the spacing x1-to-x2 differs from x0-to-x1. That gives a big jump in the calculated derivative between the pixels to the left and right of x1, producing the look of a discrete change in the images.

Indeed, the rate of change stays constant between the two vertices. That's because, mathematically, everything that varies per pixel cancels out. If you manage to introduce some variation into the formula, so that terms unique to each pixel remain, you may get the smooth results you wanted.

Try normalizing the normal before dd'ing, that may do the trick:

[code]float4 PS(VS_OUT pIn) : SV_Target
{
    float3 n    = normalize( pIn.normalW );
    float3 dndx = ddx( n );
    float3 dndy = ddy( n );

    // Scale for visibility; only the first return executes, so
    // comment returns in/out to visualize each term:
    float sn = 100.0f;
    return float4( n, 1 );
    return float4( sn*dndx, 1 );
    return float4( sn*dndy, 1 );
}[/code]
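The idea being that normalize() is nonlinear in screen position, so after normalizing, the per-pixel forward differences are no longer exactly constant within a segment; whether that variation is large enough to be visible is another matter.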

[quote name='Matias Goldberg' timestamp='1302529954' post='4797119']If you manage to introduce some variation into the formula, so that terms unique to each pixel remain, you may get the smooth results you wanted.

Try normalizing the normal before dd'ing, that may do the trick.[/quote]

I tried that and it didn't help, but going by your description, how would that help?

Another thing I was wondering about is how the pixel quads are implemented. Say I have 9 pixels in my buffer:

1 2 3
4 5 6
7 8 9

Then pixel 7's ddx is 8 - 7 and its ddy is 4 - 7. Now, is pixel 8's ddx 9 - 8 and its ddy 5 - 8? The reason I ask is because wouldn't that mean that every pixel depends on its neighbours, which would prevent a lot of the parallelism?

Or is it the case that pixels 4, 5, 7, 8 all share the same ddx (8 - 7) and the same ddy (4 - 7)? Meaning that you only get dependencies within pixel quads, but it's less accurate.

[quote name='hick18' timestamp='1302605867' post='4797460']Then pixel 7's ddx is 8 - 7 and its ddy is 4 - 7. Now, is pixel 8's ddx 9 - 8 and its ddy 5 - 8? ... Or is it the case that pixels 4, 5, 7, 8 all share the same ddx (8 - 7) and the same ddy (4 - 7)?[/quote]

You are close with your second guess. Pixels 4 and 5 have ddx = 5 - 4, and pixels 7 and 8 have ddx = 8 - 7. Similarly, pixels 4 and 7 have ddy = 7 - 4, and pixels 5 and 8 have ddy = 8 - 5. That way all processing stays within a quad, but no two pixels in a quad share exactly the same pair of ddx and ddy values.
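A minimal sketch of that rule (illustrative only, hypothetical names, one scalar channel; q holds the quad's 2x2 attribute values, with pixels 4, 5 in row 0 and pixels 7, 8 in row 1):

[code]// Illustrative only - not actual hardware code or a real HLSL intrinsic.
float QuadDdx(float2x2 q, uint row) { return q[row][1] - q[row][0]; } // right minus left, shared by the row
float QuadDdy(float2x2 q, uint col) { return q[1][col] - q[0][col]; } // row 1 minus row 0, shared by the column[/code]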

[quote name='hick18' timestamp='1302373783' post='4796427']
[code]float3 dndx = ddx( pIn.normalW );
float3 dndy = ddy( pIn.normalW );[/code]
[/quote]

Sorry, I didn't read the whole thread, but the simple solution should be to normalize pIn.normalW before passing it to ddx/ddy.


[quote name='hick18' timestamp='1302629486' post='4797572']
Thank you. Where did you learn this?
[/quote]

I worked at nvidia for a while (though it was quite some time ago). But it's also been discussed before on this forum, see [url="http://www.gamedev.net/topic/478820-derivative-instruction-details-ddx-ddy-or-dfdx-dfdy-etc/"]http://www.gamedev.net/topic/478820-derivative-instruction-details-ddx-ddy-or-dfdx-dfdy-etc/[/url].

Actually, the Nvidia derivatives are just one type used today. ATI cards actually use only three values (the top-left corner of the quad) to determine the derivatives for the entire quad. There is an in-depth discussion of this in the GPU Pro 2 book chapter "Shader Amortization using Pixel Quad Message Passing". That chapter also discusses how you can use the Nvidia-style derivatives to do 1/4 of the work in some cases (e.g. 1/4 of the shadow map samples).

As for the fine/coarse derivatives, I figure those are there to let you choose between the two types (Nvidia = fine, ATI = coarse). To get even more accurate derivatives you would need to look at values outside the pixel quad, which would seem to be quite a challenge given that quads are rendered in parallel.




