irreversible

Posted 18 February 2013 - 03:30 AM

I implemented a rather slack and simple version of SDFs (signed distance fields) on the GPU, and the results I'm getting are actually pretty good for sources that don't have fine detail or sharp corners. However, the transformation is not perfect, and while I wouldn't expect a hack like the one I'm using to be 100% accurate, I don't really see why it couldn't be in theory. I kind of came up with this approach myself simply because I wanted a fast approximation that'd be easy to code, so it's possible I'm missing some of the vital theory.
 
What I'm doing is:
 
1) set up ping-pong FBO targets and fill them with white. I'm using floating point targets for added precision.
2) guess a radius for the distance kernel. I'm using r = 10
3) draw the source texture (r*2)^2 times (400 for r = 10), with (x, y) running from (-r, -r) to (r, r) and the texture coordinates offset by (x, y) pixels on every iteration; for each pass, set d to the normalized distance of the current offset from the static origin: d = sqrt(x * x + y * y) / r. I'm also clipping all pixels that fall outside the normalized unit circle (the sampling area is square, so distances > 1 would be clipped anyway)
4) use the ping-pong textures to update the distance in the active map if it's smaller than the previously recorded distance and the pixel lies inside the shifted source texture:
 
    if(cur.x < 1)                         // the shifted source sample lands on the shape
        f = min(prv.x, d * 0.5 + 0.5);    // outside distance, packed to [0.5, 1]
    else
        {
        if(cur2.x < 1)                    // empty sample, but this pixel itself is inside the shape
            f = min(prv.x, d * 0.5);      // inside distance, packed to [0, 0.5)
        else
            f = prv.x;                    // neither - keep the previous minimum
        }
 
where cur is the shifted source texture, cur2 is the unshifted source texture, and prv is the alternate ping-pong texture. The distance is packed into the range [0, 1] (a fuller shader sketch of this pass follows the list).
5) "invert" distance inside objects to fix the sign
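
To make step 4 concrete, here's roughly what one min-distance pass looks like spelled out as a full fragment shader. This is only a sketch with placeholder names (uSource, uPrev, uOffsetUV and uNormDist are not from my actual code); the (x, y) offset and the precomputed d are fed in as uniforms by the CPU loop from step 3:

    // One min-distance pass (steps 3-4), sketch only. The offset is in texture
    // coordinates and uNormDist = sqrt(x*x + y*y) / r is precomputed on the CPU.
    uniform sampler2D uSource;    // original source texture
    uniform sampler2D uPrev;      // alternate ping-pong target (previous minimum)
    uniform vec2      uOffsetUV;  // this pass's (x, y) offset, in texture coordinates
    uniform float     uNormDist;  // normalized distance d for this pass

    varying vec2 vTexCoord;

    void main()
    {
        // Offsets outside the normalized unit circle are clipped (step 3);
        // d is constant per pass, so the whole pass is effectively skipped.
        if (uNormDist > 1.0)
            discard;

        float cur  = texture2D(uSource, vTexCoord + uOffsetUV).x;  // shifted source
        float cur2 = texture2D(uSource, vTexCoord).x;              // unshifted source
        float prv  = texture2D(uPrev,   vTexCoord).x;              // previous minimum

        float f;
        if (cur < 1.0)                            // shifted sample lands on the shape
            f = min(prv, uNormDist * 0.5 + 0.5);  // outside distance, packed to [0.5, 1]
        else if (cur2 < 1.0)                      // empty sample, but this pixel is inside the shape
            f = min(prv, uNormDist * 0.5);        // inside distance, packed to [0, 0.5)
        else
            f = prv;                              // keep the previous minimum

        gl_FragColor = vec4(f);
    }

The CPU side just loops (x, y) over the kernel, binds the previous ping-pong target as uPrev, draws into the other target, and swaps the two after each pass.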
 
At this point the resulting image is a relatively crude approximation of the shape in that it's somewhat wobbly. To fix this I:
 
6) run a horizontal + vertical (separable) Gaussian blur over the image
7) finally, introduce a contrast compensation factor as part of the blurring (empirical tests led me to a value of 1.45); see the sketch below
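
For completeness, here's one way the blur + contrast pass (steps 6-7) could look: a separable Gaussian with the result stretched back around the 0.5 midline. The kernel width/weights and the exact way the 1.45 factor is applied are assumptions, not my literal shader:

    // Steps 6-7: separable Gaussian blur plus contrast compensation (sketch).
    uniform sampler2D uDistanceMap;  // output of the min-distance passes
    uniform vec2      uStepUV;       // (1/width, 0) horizontally, (0, 1/height) vertically
    uniform float     uContrast;     // contrast compensation factor, ~1.45

    varying vec2 vTexCoord;

    void main()
    {
        // 5-tap binomial kernel (1 4 6 4 1) / 16.
        float sum = 0.0;
        sum += texture2D(uDistanceMap, vTexCoord - uStepUV * 2.0).x * 0.0625;
        sum += texture2D(uDistanceMap, vTexCoord - uStepUV      ).x * 0.25;
        sum += texture2D(uDistanceMap, vTexCoord                ).x * 0.375;
        sum += texture2D(uDistanceMap, vTexCoord + uStepUV      ).x * 0.25;
        sum += texture2D(uDistanceMap, vTexCoord + uStepUV * 2.0).x * 0.0625;

        // Blurring flattens the field toward 0.5; stretching it back around
        // the midline restores the slope - this is the contrast compensation.
        gl_FragColor = vec4((sum - 0.5) * uContrast + 0.5);
    }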
 
The results can be seen below: the top image is from a CPU approach (not my code, and I'm unsure of its running time); the bottom one is my GPU transform (~0.4 sec for the calculation alone on a 640M in debug mode). This is a zoomed-in region of an unscaled 1024x1024 SDF texture.
 
[attachment=13688:distcomp.jpg]
 
I haven't gotten to scaling the SDF yet - I'm first trying to figure out whether chasing pixel perfection on the GPU is a wild goose chase or not. I mean, I'm perfectly fine with using an approximation for stuff like shadowmaps, but fonts and high-detail stencils still need a more accurate approach.
 
To recap: can anyone share their experience with this? If an exact SDF transform isn't possible using something like the above approach, why not? The distance metric itself is correct, as are the clipping and the sign calculation/packing.
 
PS - the wobble may also be introduced by an error that I can't quite put my finger on yet. When inverting the negative distances (d = (0.5 - d)), using 0.5 actually creates banding at the perimeter. Using 0.56 introduces an appropriate shift, but I can't really explain why this is needed in the first place...
 
EDIT: naturally it dawned on me while I was in the toilet: the 0.06 offset is perfectly logical, because the pixel is inside the perimeter, not on the boundary. Hence the inversion is performed over r - 1 pixels, making the offset factor 0.5 + 0.5 / (r - 1) = 0.555(5) for r = 10.
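
In shader terms, the fixed step 5 then ends up as roughly the following (again a sketch with made-up names, and r hardcoded to the kernel radius from step 2):

    // Step 5: "invert" the packed distance for pixels inside objects.
    // Mirroring around plain 0.5 gives banding at the perimeter; the
    // corrected midpoint is 0.5 + 0.5 / (r - 1) ~= 0.5556 for r = 10.
    uniform sampler2D uDistanceMap;  // packed min-distance map
    uniform sampler2D uSource;       // original source texture (defines "inside")

    varying vec2 vTexCoord;

    void main()
    {
        const float r = 10.0;  // kernel radius from step 2
        float d      = texture2D(uDistanceMap, vTexCoord).x;
        float source = texture2D(uSource, vTexCoord).x;

        if (source < 1.0)  // pixel lies inside an object
            d = (0.5 + 0.5 / (r - 1.0)) - d;

        gl_FragColor = vec4(d);
    }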
