# Distance Field Textures


## Recommended Posts

Hi there,

I'm currently trying to implement a technique described in a SIGGRAPH paper from the folks at Valve (http://www.valvesoft...gnification.pdf).

Unfortunately I can't seem to reproduce the results they're getting with the algorithm they suggest, and was wondering if I could get any pointers.

Currently I'm trying to reduce a 4096x4096 texture down to 64x64. To do this I:

1. create a 64 x 64 destination image
2. loop through the pixels in the destination image, and calculate the signed distance for each corresponding pixel in the source image

```csharp
int reductionFactor = 64;
int kernelSize = (m_sourceImage.Width / reductionFactor) * 8;

for (int x = 0; x < m_destinationImage.Width; x++)
{
    for (int y = 0; y < m_destinationImage.Height; y++)
    {
        float distance = FindSignedDistance(
            (x * reductionFactor) + reductionFactor / 2,
            (y * reductionFactor) + reductionFactor / 2,
            kernelSize);

        m_distanceField[x, y] = distance;
    }
}
```

3. The scan radius (or kernel size) I use is the width/height of the source image, divided by the reduction factor and multiplied by 8... so for a 4096x4096 image reduced to 64x64 I've got a kernel size of 512 pixels.
4. After the distance calculations are complete I normalize the distance values
```csharp
for (int x = 0; x < m_destinationImage.Width; x++)
{
    for (int y = 0; y < m_destinationImage.Height; y++)
    {
        float distance = m_distanceField[x, y];

        if (distance == Single.MaxValue)
        {
            distance = 1.0f;
        }
        else if (distance == Single.MinValue)
        {
            distance = 0.0f;
        }
        else
        {
            distance = distance * 0.5f + 0.5f;
        }

        m_distanceField[x, y] = distance;
    }
}
```

5. Finally I convert the distance field values into a colour, storing the result in the red colour channel (as alpha in DXT compression on consoles can get screwed up).
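For reference, the five steps above can be sketched end to end in Python. This is a minimal sketch on a tiny 8x8 example, not the actual tool; the brute-force search here stands in for `FindSignedDistance`, and all names are illustrative:

```python
def find_signed_distance(src, px, py, scan_radius):
    """Distance from (px, py) to the nearest pixel of the opposite colour,
    normalised by the scan radius: positive inside the shape, negative outside."""
    is_in = src[py][px] > 0
    h, w = len(src), len(src[0])
    best = None
    for y in range(max(0, py - scan_radius), min(h, py + scan_radius)):
        for x in range(max(0, px - scan_radius), min(w, px + scan_radius)):
            if (src[y][x] > 0) != is_in:          # opposite colour to the source texel
                d = ((x - px) ** 2 + (y - py) ** 2) ** 0.5
                if best is None or d < best:
                    best = d
    if best is None:                              # no opposite pixel within range
        return float("inf") if is_in else float("-inf")
    return best / scan_radius if is_in else -best / scan_radius

def build_distance_field(src, dest_size):
    # Step 3: derive the scan radius from the reduction factor
    reduction = len(src[0]) // dest_size
    scan_radius = reduction * 8
    field = [[0.0] * dest_size for _ in range(dest_size)]
    # Steps 1-2: sample the signed distance at each destination texel centre
    for x in range(dest_size):
        for y in range(dest_size):
            field[x][y] = find_signed_distance(
                src, x * reduction + reduction // 2,
                y * reduction + reduction // 2, scan_radius)
    # Step 4: normalise to the 0..1 range
    for x in range(dest_size):
        for y in range(dest_size):
            d = field[x][y]
            if d == float("inf"):
                field[x][y] = 1.0
            elif d == float("-inf"):
                field[x][y] = 0.0
            else:
                field[x][y] = d * 0.5 + 0.5
    return field

# Tiny example: left half of the source is "in" (255), right half "out" (0)
src = [[255] * 4 + [0] * 4 for _ in range(8)]
field = build_distance_field(src, 2)
```

Texels inside the shape end up above 0.5 and texels outside below 0.5, so 0.5 marks the edge (step 5 would then pack these values into the red channel).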

I'm getting image results like this though:

If you zoom in on it you'll see that the pixel spread is nothing like what the valve people are getting...

Any ideas?

---
I think it looks kind of right, but there is a lot of noise that shouldn't be in there; is something wrong with FindSignedDistance, perhaps? Also, if the pixel spread is off then it's just a matter of changing the normalization (use a wider range).

Anyway, all you really need to do is, for each pixel position in the small image, transform that into the correct pixel position of the larger image (should be in the center!), then you calculate the distance to nearest "lit pixel". I'm guessing you might possibly also want to do the reverse, if you start on a "lit pixel", calculate the distance to the closest "unlit" pixel. And then you normalize the distance value to something suitable. That's all there is to it really unless I've missed something.
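The coordinate mapping in that first step (small-image texel to the centre of its footprint in the large image) can be written out explicitly. A quick sketch, using the reduction factor of 64 from the original post and a hypothetical `source_centre` helper:

```python
# Each destination texel covers a reduction x reduction block of source
# pixels; sample the signed distance at the middle of that block.
reduction = 64

def source_centre(dest_coord):
    return dest_coord * reduction + reduction // 2

# First and last sample points for a 64-texel destination row
print(source_centre(0), source_centre(63))   # -> 32 4064
```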

---
Note that the normalized float value gets quantized to 8 bits per channel (unless you use a floating point texture, which you don't). That means you can only meaningfully represent a 1024*1024 source texture in your 64*64 distance texture (64*64*256 == 1024*1024). Also DXT may not be the best thing for something where compression artefacts are massively amplified in the end result, a better solution may be to instead use a single-channel uncompressed texture.
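That capacity argument checks out numerically; a quick sketch, taking the 512-pixel scan radius from the kernel size mentioned earlier in the thread:

```python
# A 64x64 texture with 256 grey levels per texel holds as many states
# as a 1024x1024 binary image.
assert 64 * 64 * 256 == 1024 * 1024

# With distances normalised over a +/-512 pixel scan radius and quantised
# to 8 bits, each grey level spans 4 source pixels.
scan_radius = 512
pixels_per_level = 2 * scan_radius / 256
print(pixels_per_level)   # -> 4.0
```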

---

> 4. After the distance calculations are complete I normalize the distance values
>
> ```csharp
> float distance = m_distanceField[x, y];
>
> if (distance == Single.MaxValue)
>     distance = 1.0f;
> else if (distance == Single.MinValue)
>     distance = 0.0f;
> else
>     distance = distance * 0.5f + 0.5f;
> ```

Is that really what you want? Shouldn't you rather check distance > MaxValue and distance < MinValue?

---
Good points thanks folks! I'll have another crack at it this afternoon

---

> I think it looks kind of right, but there is a lot of noise that shouldn't be in there; is something wrong with FindSignedDistance, perhaps? Also, if the pixel spread is off then it's just a matter of changing the normalization (use a wider range).
>
> Anyway, all you really need to do is, for each pixel position in the small image, transform that into the correct pixel position of the larger image (should be in the center!), then you calculate the distance to nearest "lit pixel". I'm guessing you might possibly also want to do the reverse, if you start on a "lit pixel", calculate the distance to the closest "unlit" pixel. And then you normalize the distance value to something suitable. That's all there is to it really unless I've missed something.

yeah that's pretty much what I'm doing... I think

```csharp
Color sourceTexel = m_sourceImage.GetPixel(pointX, pointY);
bool isIn = sourceTexel.R > 0;
bool foundTexel = false;
float closestDistance = Single.MaxValue;

// Calculate the scan kernel for the current pixel
int kernel = (int)Math.Floor((double)scanRadius / 2);
int startX = pointX - kernel;
int endX = pointX + kernel;
int startY = pointY - kernel;
int endY = pointY + kernel;

startX = Clamp(startX, 0, m_sourceImage.Width);
endX = Clamp(endX, 0, m_sourceImage.Width);
startY = Clamp(startY, 0, m_sourceImage.Height);
endY = Clamp(endY, 0, m_sourceImage.Height);

// Loop through the kernel, looking for texels of the opposite colour to the source texel
for (int x = startX; x < endX; x++)
{
    for (int y = startY; y < endY; y++)
    {
        Color currentTexel = m_sourceImage.GetPixel(x, y);

        if ((currentTexel.R > 0 && !isIn) || (currentTexel.R == 0 && isIn))
        {
            float distance = CalculateDistance(pointX, pointY, x, y);
            if (distance < closestDistance)
            {
                closestDistance = distance;
                foundTexel = true;
            }
        }
    }
}

// If the source texel is white
if (isIn)
{
    // And an inner edge was found
    if (foundTexel)
        return (closestDistance / scanRadius);
    else
        return Single.MaxValue;
}
// Otherwise if the source texel was black
else
{
    // And an outer edge was found
    if (foundTexel)
        return (closestDistance / scanRadius) * -1.0f;
    else
        return Single.MinValue;
}
```

---

> Note that the normalized float value gets quantized to 8 bits per channel (unless you use a floating point texture, which you don't). That means you can only meaningfully represent a 1024*1024 source texture in your 64*64 distance texture (64*64*256 == 1024*1024). Also DXT may not be the best thing for something where compression artefacts are massively amplified in the end result, a better solution may be to instead use a single-channel uncompressed texture.

I end up converting the distance value to an RGB value when I generate my bitmap image:

```csharp
byte value = (byte)Math.Round(distance * 255);
Color destinationColour = Color.FromArgb(255, value, 255, 255);
m_destinationImage.SetPixel(x, y, destinationColour);
```

Unfortunately DXT is a requirement (I know it's not ideal) as these textures will be used on the 360.

---

> Is that really what you want? Shouldn't you rather check distance > MaxValue and distance < MinValue?

I'm pretty sure that's what I want to be doing: my FindSignedDistance function returns Single.MaxValue for a white pixel, Single.MinValue for a black pixel, and a distance value for anything else... this should map them to the right 0-1 colour range, although my normalisation for the other distance values may be off.

---

> yeah that's pretty much what I'm doing... I think

Ignore the division at the end by the scan range; it's legacy code from when I was trying to create a distance map using the same dimensions as the source image and bilinear downsampling.
