Regression testing is a possibility, but it requires screenshots from an existing system (which rather defeats the purpose of TDD), and the tests fail even if a single pixel out of a million differs. Not very robust.
However, what possibilities lie in the field of signal processing? We could use signal-processing concepts to compare a compressed image against the original and quantify the information lost to lossy compression. What if we also employed neural networks to recognize the objects actually in the picture?
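As a minimal sketch of the idea, here is one such signal-processing metric: PSNR (peak signal-to-noise ratio), computed with plain NumPy. It is only an illustration of the general approach, not a proposal for the final metric; note how a single-pixel difference, which would fail an exact-match regression test, still scores extremely high.

```python
import numpy as np

def psnr(original: np.ndarray, compressed: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means less information loss."""
    mse = np.mean((original.astype(np.float64) - compressed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # images are pixel-identical
    return 10 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(100, 100), dtype=np.uint8)

# Simulated lossy compression: small per-pixel perturbations.
noisy = np.clip(original.astype(int) + rng.integers(-2, 3, size=(100, 100)), 0, 255).astype(np.uint8)

# A single pixel nudged by one intensity level.
tampered = original.copy()
tampered[0, 0] ^= 1

print(np.array_equal(original, tampered))   # exact match fails on one pixel
print(psnr(original, noisy) > 30)           # mild noise still scores high (dB)
print(psnr(original, tampered) > 80)        # one-pixel change is nearly invisible to PSNR
```

A perceptual metric such as SSIM would track human judgment more closely than PSNR, but the point is the same: a graded similarity score gives a test something robust to assert against, instead of a brittle pixel-for-pixel comparison.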
Again, another project that I will probably start with no intention of finishing.