Thank you both.
I remember a topic from a few months ago in which I was accused of wasting my time reinventing wheels. It did take more time than I expected, but it was certainly rewarding, and I was able to uncover some important facts that could benefit a lot of people and companies who care about their image quality.
Mr. Gotanda was also not aware of the differences in how NVIDIA hardware decodes these textures, and by my calculations this special decoding method was very likely invented specifically for the PlayStation 3. Had my company been aware of this difference earlier, the quality of its PlayStation 3 textures could have been improved.
The ATI results also differ by more than one value in many places, which suggests more than just truncation. I wish they would publish their decoding method. It's not like it would help NVIDIA or anything, but it would help developers striving for better image quality.
If the ATI decompression method were documented, a "perfect" compression tool could be built, tailored to each vendor's decompression method individually.
I found it quite interesting that different GPUs use different percentages for the interpolated colors; I would never have guessed that.
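For anyone following along, here's a minimal sketch of the idealized DXT1 palette math (my own code, not any vendor's): expand the RGB565 endpoints and interpolate at 2/3 and 1/3. It's exactly this interpolation, and its integer rounding, that each GPU approximates differently:

```cpp
#include <cstdint>

struct RGB8 { uint8_t r, g, b; };

// Expand an RGB565 endpoint to 8 bits per channel (standard bit replication).
static RGB8 expand565(uint16_t c) {
    uint8_t r5 = (c >> 11) & 0x1F;
    uint8_t g6 = (c >> 5)  & 0x3F;
    uint8_t b5 =  c        & 0x1F;
    return { uint8_t((r5 << 3) | (r5 >> 2)),
             uint8_t((g6 << 2) | (g6 >> 4)),
             uint8_t((b5 << 3) | (b5 >> 2)) };
}

// Build the 4-entry palette for a block in four-color mode (c0 > c1).
// The 2/3 and 1/3 weights are the ideal values; how the division is
// rounded is one of the things real hardware varies on (this truncates).
static void buildPalette(uint16_t c0, uint16_t c1, RGB8 pal[4]) {
    RGB8 a = expand565(c0), b = expand565(c1);
    pal[0] = a;
    pal[1] = b;
    pal[2] = { uint8_t((2 * a.r + b.r) / 3),
               uint8_t((2 * a.g + b.g) / 3),
               uint8_t((2 * a.b + b.b) / 3) };
    pal[3] = { uint8_t((a.r + 2 * b.r) / 3),
               uint8_t((a.g + 2 * b.g) / 3),
               uint8_t((a.b + 2 * b.b) / 3) };
}
```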
I'm curious whether it could be reasonably improved by simply embedding different "color pairs" for each block in a texture, rather than generating a unique texture for each piece of hardware, so as to compensate at load time for the (three?) common PC hardware configurations. I imagine one could even allow the algorithm to generate some unique blocks where it deems that a significant improvement (rather than just another color pair).
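Just to illustrate what I mean (everything here is hypothetical; the names and the side-table layout are made up): the endpoints live in the first four bytes of each 8-byte DXT1 block, so a loader could patch in a per-vendor endpoint pair without touching the indices:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical load-time patching: the texture ships with one set of DXT1
// blocks plus a side table of alternative endpoint pairs, re-optimized
// offline against each vendor's measured decoder. Only the two 16-bit
// endpoint words of each 8-byte block are replaced; the 2-bit indices
// are assumed to remain valid.

enum class Vendor { NVIDIA, AMD, Intel };

struct EndpointPair { uint16_t c0, c1; };      // RGB565 endpoints

struct EndpointTable { std::vector<EndpointPair> pairs; };  // one per block

void patchBlocks(uint8_t* dxt1Data, size_t blockCount,
                 const EndpointTable& table) {
    for (size_t i = 0; i < blockCount; ++i) {
        uint8_t* block = dxt1Data + i * 8;     // 8 bytes per DXT1 block
        const EndpointPair& p = table.pairs[i];
        std::memcpy(block + 0, &p.c0, 2);      // assumes little-endian host
        std::memcpy(block + 2, &p.c1, 2);
    }
}
```

One caveat: in DXT1 the endpoint order selects the mode (c0 <= c1 switches the block into three-color mode), so the offline optimizer would have to preserve that ordering when it re-derives the pairs.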
Or are the hardware differences really minor in practice, so that the primary effect only shows up in mathematical measurements and not perceptually?
As for AMD decoding, shouldn't it be quite easy to just generate a bunch of "hand-coded" blocks with specific gradients and then look at what AMD outputs (assuming you have an AMD card)? It would seem to me that there can't be anything really complicated going on behind the scenes that wouldn't be "easily" understood with just a bit of testing. Of course, there may be differences between different models... EDIT: After looking at the NVIDIA implementation, I take it back!
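Something like this sketch is what I had in mind for the probe blocks (assuming DXT1 and a single channel, using the standard 8-byte block layout):

```cpp
#include <cstdint>
#include <vector>

// Probe-block generation: one block per pair of 5-bit red endpoints, with
// the 16 texel indices cycling 0,1,2,3 so every palette entry appears in
// the output. Upload the result as a DXT1 texture, render it to an RGBA8
// target, read the pixels back, and you can tabulate the GPU's exact
// palette for every endpoint combination.
std::vector<uint8_t> makeRedProbeBlocks() {
    std::vector<uint8_t> blocks;
    for (int r0 = 0; r0 < 32; ++r0) {
        for (int r1 = 0; r1 < 32; ++r1) {
            uint16_t c0 = uint16_t(r0 << 11);   // red-only RGB565 endpoint
            uint16_t c1 = uint16_t(r1 << 11);
            uint8_t block[8];
            block[0] = uint8_t(c0 & 0xFF);  block[1] = uint8_t(c0 >> 8);
            block[2] = uint8_t(c1 & 0xFF);  block[3] = uint8_t(c1 >> 8);
            // 0xE4 = 0b11100100: indices 0,1,2,3 across each 4-texel row.
            block[4] = block[5] = block[6] = block[7] = 0xE4;
            blocks.insert(blocks.end(), block, block + 8);
        }
    }
    // 1024 blocks, e.g. a 128x128-texel texture (32x32 blocks). Pairs with
    // r0 <= r1 land in three-color mode, which is worth measuring too.
    return blocks;
}
```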
EDIT: Couldn't find any actual numbers, but it would be interesting if the percentages weren't symmetrical, as one could then also exploit the order of the two colors as a further optimization.
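A rough sketch of what that would look like, assuming the asymmetry actually exists: in DXT3/DXT5 color blocks the decoder always uses four-color mode regardless of endpoint order, so both orderings are legal encodings of the same endpoints, and an encoder could test both against the measured palette and keep the lower-error one. (In DXT1 the order selects the mode, so the trick is more limited there.)

```cpp
#include <cstdint>

// Swapping c0 and c1 remaps every 2-bit palette index 0<->1 and 2<->3,
// which is a XOR with 0b01 in each 2-bit lane. An encoder aware of
// asymmetric weights would decode both orderings with the measured
// palette and keep whichever is closer to the source pixels.
uint32_t swapEndpointIndices(uint32_t indices) {
    return indices ^ 0x55555555u;
}
```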