Implementing "Improved Alpha Tested Magnification" [Early results!]

5 comments, last by OrangyTang 16 years ago
MK42 handily posted the link to Valve's graphics papers, and Improved Alpha Tested Magnification caught my eye. In particular, I think it'd be a nice alternative/replacement for my existing font rendering code, as it should allow much greater quality when scaling. Has anyone implemented this? The paper seems light on the details of how to create the distance field texture from the high-res input, and I'm struggling with this bit:
Quote:For each output texel, the distance field generator determines whether the corresponding pixel in the high resolution image is "in" or "out". Additionally, the generator computes 2D distance (in texels) to the nearest texel of the opposite state. This is done by examining the local neighborhood around a given texel.
So I guess if we were generating a 64x64 distance field from a 2048x2048 binary input, the first step would be to analyse the relevant 32x32-pixel chunk for each output texel to determine its "in" or "out" state. Would that be a simple average of how many of those 32x32 pixels are solid, or something more complicated? I'm going to have a crack at this tonight, but if anyone who has already implemented this could let me know whether I'm on the right track, that'd be great. [Edited by - OrangyTang on April 12, 2008 9:38:19 AM]
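In Python-ish pseudocode, my current reading of that paragraph is something like this: a brute-force sketch where a point sample at the centre of each output texel decides "in" or "out", and a `spread` radius caps the neighbourhood search. All the names and the point-sample choice are my guesses, not anything the paper spells out:

```python
import math

def distance_field(hi, hi_w, hi_h, lo_w, lo_h, spread):
    """hi: flat row-major list of booleans (True = "in") for the high-res
    image. Returns a lo_w x lo_h list of values in [0, 1], 0.5 at the edge."""
    out = []
    sx, sy = hi_w / lo_w, hi_h / lo_h
    for oy in range(lo_h):
        for ox in range(lo_w):
            # Point-sample the high-res pixel at the centre of this output
            # texel to decide "in" or "out" (a guess: not an average).
            px = min(int((ox + 0.5) * sx), hi_w - 1)
            py = min(int((oy + 0.5) * sy), hi_h - 1)
            inside = hi[py * hi_w + px]
            # Search a local neighbourhood for the nearest opposite-state
            # pixel; distances beyond `spread` are clamped.
            best = spread
            r = int(math.ceil(spread))
            for ny in range(max(0, py - r), min(hi_h, py + r + 1)):
                for nx in range(max(0, px - r), min(hi_w, px + r + 1)):
                    if hi[ny * hi_w + nx] != inside:
                        d = math.hypot(nx - px, ny - py)
                        if d < best:
                            best = d
            # Signed distance remapped into [0, 1]; 0.5 marks the contour.
            signed = best if inside else -best
            out.append(0.5 + 0.5 * signed / spread)
    return out
```

This is O(output texels x neighbourhood size), so it'll be slow on big inputs, but it should be enough to check whether the idea works at all.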
I've managed to hack together something which does the basic image processing, and the results are impressive.

This is the original, high resolution input image (300x300x1):


This produces the following low-res distance map (37x37, but scaled x8 for visibility):


And when drawn in game, with bilinear filtering and an alpha test of 0.5f, we get rather good quality output. This is drawn with a scale factor of 16, so we're actually drawing it twice as big as the original input image, but using only about a sixty-fourth of the memory (an eighth per axis).



I'm not sure what's causing the artifacts at the bottom of the output - it might be that my preprocessing isn't quite correct, or that I should be using a higher resolution input. The preprocessing is also very slow, so I'm going to try to fix that first, then see what the results are with even higher resolution inputs.
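For reference, what the GPU does at draw time can be mocked up on the CPU: bilinearly sample the low-res field, then alpha-test at 0.5. This is an illustrative sketch with made-up names; the real thing is just the texture unit's filtering plus something like glAlphaFunc(GL_GEQUAL, 0.5f):

```python
import math

def bilinear(field, w, h, u, v):
    """Sample a row-major float texture at normalised (u, v), GL-style:
    texel centres sit at (i + 0.5) / w, and addressing clamps at the edges."""
    x, y = u * w - 0.5, v * h - 0.5
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0
    def texel(i, j):
        return field[min(max(j, 0), h - 1) * w + min(max(i, 0), w - 1)]
    top = texel(x0, y0) * (1 - fx) + texel(x0 + 1, y0) * fx
    bot = texel(x0, y0 + 1) * (1 - fx) + texel(x0 + 1, y0 + 1) * fx
    return top * (1 - fy) + bot * fy

def alpha_test_magnify(field, w, h, out_w, out_h, ref=0.5):
    """Reconstruct a binary image at out_w x out_h: bilinear filter, then
    keep only samples whose filtered value passes the alpha test (>= ref)."""
    return [bilinear(field, w, h, (x + 0.5) / out_w, (y + 0.5) / out_h) >= ref
            for y in range(out_h) for x in range(out_w)]
```

The nice property is that the thresholded edge stays sharp at any magnification, because the 0.5 contour of the interpolated distance field is a smooth curve rather than a staircase of texels.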
This is a very interesting development!

I'm conceiving an idea of packing arbitrary images with an adaptation of this method: store distance/difference vectors to the actual image data in a low-resolution grid. This would enable almost per-pixel continuous LOD management, and would optimise texture space by automatically taking detail frequency into account.

The pre-processing is feasible to do on D3D10 hardware.

Need to get cracking on this sometime soon :)

Niko Suni

This is one of those things that seems like magic at first, but is so blindingly obvious once you see how it's done that you can't believe you didn't think of it already. Really cool...

About the weird artifact at the bottom: I don't know what's causing it, but as a test I took the low-res image into Photoshop, enlarged it by 500%, and then applied a brightness/contrast filter (-16% brightness, 100% contrast), and the resulting image looks perfect... so I think it's a rendering problem, not a problem with the preprocessor.
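If it helps anyone reproduce that test: an extreme brightness/contrast pass is essentially a hard threshold at mid-grey, which is exactly what the alpha test does in the renderer. A one-line sketch (my own approximation of what such a filter does, not Photoshop's actual formula):

```python
def contrast(value, amount):
    """Scale a [0, 1] grey value about mid-grey and clamp. As `amount`
    grows, this approaches a hard threshold at 0.5 - i.e. the alpha test."""
    return min(max(0.5 + (value - 0.5) * amount, 0.0), 1.0)
```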
Put the image upside down, or scroll it up by 50% (with wraparound), to see whether the problem is caused by your preprocessing or is just a limitation of the input data (i.e., it can't be approximated any better).
Try changing the texture wrapping mode to clamp instead of wrap, that might fix up the problem at the bottom.
Cheers for the suggestions. I switched to using a higher resolution input (1k x 1k) and the artifact at the bottom went away, so I think it was just a case of the low-res distance field not approximating that particular curve very well.

New picture over at my journal if anyone wants a look.

This topic is closed to new replies.
