I'm currently trying to implement a technique described in a SIGGRAPH paper from folks at Valve (http://www.valvesoft...gnification.pdf).
Unfortunately I can't seem to reproduce the results they're getting with the algorithm they suggest, and was wondering if I could get any pointers.
Currently I'm trying to reduce a 4096x4096 texture down to 64x64. To do this I:
1. create a 64 x 64 destination image
2. loop through the pixels in the destination image, and calculate the signed distance for each corresponding pixel in the source image
// One destination texel covers a 64x64 block of source texels.
int reductionFactor = 64;
// Scan radius: (4096 / 64) * 8 = 512 source texels.
int kernelSize = (m_sourceImage.Width / reductionFactor) * 8;
for( int x = 0; x < m_destinationImage.Width; x++ )
{
    for( int y = 0; y < m_destinationImage.Height; y++ )
    {
        // Sample at the centre of each block of source texels.
        float distance = FindSignedDistance( (x * reductionFactor) + reductionFactor / 2,
                                             (y * reductionFactor) + reductionFactor / 2,
                                             kernelSize );
        m_distanceField[x,y] = distance;
    }
}
3. The scan radius (or kernel size) I use is the width/height of the source image, divided by the reduction factor and multiplied by 8... so for a 4096x4096 image reduced to 64x64 I've got a kernel size of 512 pixels.
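For what it's worth, my reading of the paper's brute-force search is something like the sketch below (Python for brevity; the function name, the boolean bitmap layout, and the sentinel values are my assumptions about what FindSignedDistance does, not your actual code):

```python
import math

def find_signed_distance(bitmap, cx, cy, kernel_size):
    """Brute-force signed distance for one texel (an illustrative sketch).
    bitmap is a 2D list of booleans: True = inside the shape.
    Positive distances are inside, negative outside, scaled by the
    search radius so results land roughly in [-1, 1]."""
    h = len(bitmap)
    w = len(bitmap[0])
    inside = bitmap[cy][cx]
    best_sq = None
    # Search a (2 * kernel_size + 1)^2 window, clamped to the image.
    x0, x1 = max(0, cx - kernel_size), min(w - 1, cx + kernel_size)
    y0, y1 = max(0, cy - kernel_size), min(h - 1, cy + kernel_size)
    for y in range(y0, y1 + 1):
        for x in range(x0, x1 + 1):
            if bitmap[y][x] != inside:  # opposite-state texel found
                d_sq = (x - cx) ** 2 + (y - cy) ** 2
                if best_sq is None or d_sq < best_sq:
                    best_sq = d_sq
    if best_sq is None:
        # No opposite texel within the kernel: sentinel, like
        # Single.MaxValue / Single.MinValue in the C# version.
        return float('inf') if inside else float('-inf')
    d = math.sqrt(best_sq) / kernel_size  # scale into [0, 1]
    return d if inside else -d
```

The division by `kernel_size` at the end is what keeps the raw pixel distances in a range the later `* 0.5 + 0.5` normalization can use.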
4. After the distance calculations are complete I normalize the distance values
for( int x = 0; x < m_destinationImage.Width; x++ )
{
    for( int y = 0; y < m_destinationImage.Height; y++ )
    {
        float distance = m_distanceField[x,y];
        if( distance == Single.MaxValue )
        {
            distance = 1.0f;    // no edge found within the kernel: fully inside
        }
        else if( distance == Single.MinValue )
        {
            distance = 0.0f;    // no edge found within the kernel: fully outside
        }
        else
        {
            distance = distance * 0.5f + 0.5f;  // map [-1, 1] to [0, 1]
        }
        m_distanceField[x,y] = distance;
    }
}
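Note that this mapping assumes FindSignedDistance already returns values in roughly [-1, 1] (e.g. the raw pixel distance divided by the scan radius); otherwise `* 0.5f + 0.5f` pushes almost everything outside [0, 1]. A quick Python sketch of the same step, using float infinities where the C# uses Single.MaxValue/MinValue, plus a defensive clamp the C# version doesn't have:

```python
def normalize_distance(distance):
    """Map a signed distance in [-1, 1] into [0, 1] for storage in a
    colour channel. Infinities stand in for the C# sentinel values."""
    if distance == float('inf'):     # no opposite texel found: fully inside
        return 1.0
    if distance == float('-inf'):    # fully outside
        return 0.0
    # Clamp first so out-of-range distances can't escape [0, 1]
    # (purely defensive; not in the original C#).
    distance = max(-1.0, min(1.0, distance))
    return distance * 0.5 + 0.5
```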
5. Finally I convert the distance field values into a colour, storing the result in the red colour channel (as alpha in DXT compression for consoles can get screwed up).
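The channel packing itself is just a scale-and-round; a minimal sketch, assuming 8-bit channels (how you fill green/blue/alpha is up to your texture format):

```python
def to_red_channel(value):
    """Quantise a normalized distance in [0, 1] to an 8-bit red value.
    Clamps first so stray out-of-range values can't wrap."""
    return int(round(max(0.0, min(1.0, value)) * 255))
```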
I'm getting image results like this though:
If you zoom in on it you'll see that the pixel spread is nothing like what the Valve people are getting...
Any ideas?