# Checking Relative Closeness of a Set of Pixels to Another


## Recommended Posts

A while ago I posted a topic about DXT compression.
http://www.gamedev.n...sion-algorithm/

I took a break from the DXT converter to port my engine to iPhone and work on the 2D side of it in preparation for a game I will be making soon, but now I am taking a break from that and going back to DXT (I always have multiple tasks in parallel so I can stop working on one when I get tired of it).

Firstly, let me say I have made leaps and bounds. The concept is the same (2 layers of linear regression) but I fixed the edge cases causing a lot of artifacts and generally refined the algorithm for greater accuracy.
Old:
MSE: 5.33

New:
MSE: 5.24

Aside from the higher accuracy, most artifacts (as pointed out by Syranide) are gone, and the difficult cases such as the top of her hair where it becomes white for one pixel, then green, are handled more appropriately. And the 5.24 image was generated in half the time of the 5.33 one.

I am very close to TheCompressonator, which gives 4.92. But during my research into the previous artifacts I encountered a problem I don’t seem to be able to solve.

I compressed a large image, went to a 4×4 section that was high in artifact content (this was before my updates; the image compresses perfectly now) and saved it as its own 4×4 image so I could run some exhaustive tests on it.

One of my tests was to simply brute-force test every possible combination of endpoint colors and keep the best match.

Even by doing this I could not get as low as ATI on the MSE scale.
I am using the exact same weights as they are and I checked every possible combination, including the colors ATI selected.
But for some reason my routine rejected those in favor of other colors that are very close, but apparently not close enough.

So finally, here is my actual question: What is the best way to check if one set of colors is a better match for a given image than another set of colors?
This question is not as straightforward as it seems.
MSE and PSNR won’t help by themselves, because the goal is not to measure the absolute quality of the image but simply to decide that one set of colors is better than another.
I did implement PSNR anyway, on the reasoning that the best match is the one with the highest PSNR. My results were exactly the same as with my standard test, which makes sense: as long as every candidate goes through the same error function, a larger difference between color values will always produce a correspondingly larger final score.

It is the same concept as not using square root to compare distances, whereas square root is required to get the actual value of the distance.
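To illustrate that concept, here is a minimal sketch (helper names are hypothetical, not from the engine): comparing squared distances gives the same ordering as comparing true distances, so the square root can be skipped.

```cpp
#include <cmath>

// Squared Euclidean distance between 2 points; monotone in the true distance.
float DistSq( float fX0, float fY0, float fX1, float fY1 ) {
	float fDx = fX1 - fX0;
	float fDy = fY1 - fY0;
	return fDx * fDx + fDy * fDy;
}

// True if point 1 is closer to point 0 than point 2 is.  Same result as
// comparing std::sqrt( DistSq(...) ) values, without ever taking the root.
bool Closer( float fX0, float fY0, float fX1, float fY1, float fX2, float fY2 ) {
	return DistSq( fX0, fY0, fX1, fY1 ) < DistSq( fX0, fY0, fX2, fY2 );
}
```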

So here is what I am doing:

```cpp
/**
 * Gets the amount of error between 2 colors.
 *
 * \param _bColor0 Color 1.
 * \param _bColor1 Color 2.
 * \return Returns the amount of error between 2 colors.
 */
LSE_INLINE LSFLOAT LSE_CALL CImage::GetError( const LSI_BLOCK &_bColor0, const LSI_BLOCK &_bColor1 ) {
	/*return ::fabsf( ::powf( _bColor0.s.fR, 2.2f ) - ::powf( _bColor1.s.fR, 2.2f ) ) * 0.3086f +
		::fabsf( ::powf( _bColor0.s.fG, 2.2f ) - ::powf( _bColor1.s.fG, 2.2f ) ) * 0.6094f +
		::fabsf( ::powf( _bColor0.s.fB, 2.2f ) - ::powf( _bColor1.s.fB, 2.2f ) ) * 0.082f;*/
	// This is the test being used.
	return ::fabsf( _bColor0.s.fR - _bColor1.s.fR ) * 0.3086f +
		::fabsf( _bColor0.s.fG - _bColor1.s.fG ) * 0.6094f +
		::fabsf( _bColor0.s.fB - _bColor1.s.fB ) * 0.082f;
	/*return ::fabsf( _bColor0.s.fR - _bColor1.s.fR ) * 0.30f +
		::fabsf( _bColor0.s.fG - _bColor1.s.fG ) * 0.59f +
		::fabsf( _bColor0.s.fB - _bColor1.s.fB ) * 0.11f;*/
	/*return ::fabsf( _bColor0.s.fR - _bColor1.s.fR ) * 0.3333f +
		::fabsf( _bColor0.s.fG - _bColor1.s.fG ) * 0.3333f +
		::fabsf( _bColor0.s.fB - _bColor1.s.fB ) * 0.3333f;*/
}
```

I have 4 total colors.
I loop over the 16 source pixels and use the above function to find which of my 4 colors each is closest to.
Once a pixel is found to “belong” to one of my 4 sample colors, that same function gives me the actual amount of error for that pixel.

Add all of these error values to get the total error for that block.
The one with the lowest error should be the best block, yes?
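In code, that matching loop might look like the following sketch. `ColorError()` mirrors the active return statement in `GetError()` above; the `Color` struct and function names are simplified stand-ins for the engine types, not the real ones.

```cpp
#include <cmath>
#include <cstddef>

struct Color { float fR, fG, fB; };

// Weighted absolute difference between 2 colors (same weights as the
// active test in GetError() above).
static float ColorError( const Color &cC0, const Color &cC1 ) {
	return ::fabsf( cC0.fR - cC1.fR ) * 0.3086f +
		::fabsf( cC0.fG - cC1.fG ) * 0.6094f +
		::fabsf( cC0.fB - cC1.fB ) * 0.082f;
}

// Total error of a 4x4 block against a 4-color palette: each of the 16
// source pixels is charged the error to its nearest palette entry.
static float BlockError( const Color pSrc[16], const Color pPal[4] ) {
	float fTotal = 0.0f;
	for ( std::size_t I = 0; I < 16; ++I ) {
		float fBest = ColorError( pSrc[I], pPal[0] );
		for ( std::size_t J = 1; J < 4; ++J ) {
			float fThis = ColorError( pSrc[I], pPal[J] );
			if ( fThis < fBest ) { fBest = fThis; }
		}
		fTotal += fBest;	// Accumulate per-pixel best-match error.
	}
	return fTotal;		// Lowest total = best candidate palette.
}
```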

So why did my routine check the best possible colors (as ATI has found) and skip them?
What would be a better idea for this? Any ideas?

L. Spiro

##### Share on other sites
I remember that original thread (it's bookmarked at work somewhere).

This might be a bit 'out there', but I've always thought that DXT errors could be minimized on a perceptual level by allowing _more_ error and spreading it out to neighboring blocks (much like how tone mapping works). At least for RGB images (game textures, photographs, etc.); probably not as ideal for normal maps and the like.

I would imagine this would need to function like a physics simulation of a fluid surface: cells with high error spread their error to neighboring cells until the 'simulation' settles on an ideal distribution, smoothing out the error peaks while staying under a global error budget.
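A very rough 1-D sketch of that relaxation idea, purely illustrative (hypothetical names; a real implementation would work over the 2-D grid of blocks and feed the redistributed error back into the encoder):

```cpp
#include <cstddef>
#include <vector>

// Iteratively averages each cell's error with its immediate neighbors,
// flattening peaks, much like a diffusion/relaxation step.
std::vector<float> RelaxError( std::vector<float> vErr, int iIters ) {
	for ( int iI = 0; iI < iIters; ++iI ) {
		std::vector<float> vNext = vErr;	// Boundary cells left as-is.
		for ( std::size_t J = 1; J + 1 < vErr.size(); ++J ) {
			vNext[J] = (vErr[J-1] + vErr[J] + vErr[J+1]) / 3.0f;
		}
		vErr = vNext;
	}
	return vErr;
}
```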

##### Share on other sites
MSE: 5.02

All artifacts gone.
0.1 difference from TheCompressonator.

MSE: 4.92 (ATI)

While the MSE may be higher on mine, my result looks better. ATI got some areas better, but most of the differences in their result are distracting.
The I in FIGHT is one example.

So what did I change?

The error was in how I graded the closeness of the 4×4 block to the original. The per-channel differences need to be squared and normalized before being weighted (in the above function they were not squared at all).
I also added a small penalty for the maximum per-pixel error.
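A minimal sketch of that revised grading, with the exact normalization and the `0.25f` penalty factor as assumed placeholders (the real values are not shown in this thread):

```cpp
#include <algorithm>

struct Color { float fR, fG, fB; };

// Squared, weighted per-channel error, plus a small penalty on the single
// worst pixel so one badly-matched texel drags the score down.
float BlockErrorSq( const Color pSrc[16], const Color pFit[16] ) {
	float fTotal = 0.0f;
	float fWorst = 0.0f;
	for ( int I = 0; I < 16; ++I ) {
		float fDr = pSrc[I].fR - pFit[I].fR;
		float fDg = pSrc[I].fG - pFit[I].fG;
		float fDb = pSrc[I].fB - pFit[I].fB;
		float fErr = fDr * fDr * 0.3086f +
			fDg * fDg * 0.6094f +
			fDb * fDb * 0.082f;
		fTotal += fErr;
		fWorst = std::max( fWorst, fErr );
	}
	return fTotal + fWorst * 0.25f;	// Hypothetical penalty factor.
}
```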

I will make a new post on my site after I make a few more improvements that might give me that 0.1 I need to tie.

I am also considering doing something similar to what Zoner suggested. Perhaps making a pass over the image to gather information as to which channels are more important and weighting them adaptively, or checking luminance and adding it to my error metric, etc.

And maybe trying to check neighbors.

The CEO/CTO of my company (who speaks often at GDC, CEDEC, etc.) said that my tool, if I make it into a command-line tool, would be more useful than others even if my MSE is slightly higher, provided I offer better/specialized support for normal maps and other special kinds of image formats.
In-house we have images that contain all kinds of data; one channel might be specular power, another reflectance, etc.
It would be useful to allow artists to specify that one channel is more important than the others, and my design facilitates that easily.
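A minimal sketch of how such per-channel importance could look, assuming artist-supplied weights (all names here are hypothetical, not the engine's API):

```cpp
#include <cmath>

// Artist-specified importance per channel; raise the weight of whichever
// channel carries the critical data (specular power, reflectance, etc.).
struct Weights { float fR, fG, fB, fA; };

// Weighted absolute error over 4 channels.
float ChannelError( const float pC0[4], const float pC1[4], const Weights &wW ) {
	return ::fabsf( pC0[0] - pC1[0] ) * wW.fR +
		::fabsf( pC0[1] - pC1[1] ) * wW.fG +
		::fabsf( pC0[2] - pC1[2] ) * wW.fB +
		::fabsf( pC0[3] - pC1[3] ) * wW.fA;
}
```

With a zero weight a channel is ignored entirely, so errors in an unimportant channel never displace a better match in the important one.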

He also suggested talking about it at GDC, but I don’t know that it is special enough for that unless I can beat ATI by a large margin.

Well, I still have a few tricks that will definitely increase the quality, and some more ideas to test, so we will see.

L. Spiro

Epic-or!

Looks guud.

##### Share on other sites
Maybe you will find Perceptual Diff interesting/useful?

##### Share on other sites
Thank you for the continued replies but I have found out where I was going wrong. You can read about it here:
http://lspiroengine.com/?p=312

It covers more than just my proprietary 2-layer linear-regression algorithm and includes some insights that every game developer should consider, especially near the bottom, so it is worth the read even if you are not particularly interested in my algorithm.

It is quite a long post, so I fully understand if you wish to read only the parts of interest; I suggest at least the following sections:
Checking Image Quality
The GPU Factor

In particular, “The GPU Factor” contains information of which very few studios and developers appear to be aware, and it is something I want to bring to light as much as I can.

L. Spiro
