Bilinear interpolation bad for downsizing images

I'm using bilinear interpolation to sample images when they're enlarged, and the results look good. But when I try to downsize an image using bilinear interpolation, the result looks more pixelated than it should. Is there a better algorithm for downsizing images?

To tell you the truth, I don't remember the rule of thumb, but I can tell you that for resizing larger, bilinear is *awful*: it is only piecewise linear, so its first derivative is discontinuous. Bicubic is much better for sampling up.

I thought the rule of thumb was bicubic for up, bilinear for down...
If anything, I recall bilinear being rather "soft", meaning you lose a little detail. If viewing at 1:1, you could try bicubic (it may be overkill); I seem to recall it giving sharper results. But if you're planning on viewing the image at anything other than 1:1, the detail is lost through filtering anyway.

-Michael g.

Guest Anonymous Poster
Yes, just use pixel averaging. I didn't even think bilinear interpolation worked for downsampling images, due to the reverse mapping you have to do.

Pixel averaging is about the best you can do for reducing image size.
Most other algorithms (including bilinear interpolation) only work decently for reductions down to about 50%.
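To make "pixel averaging" concrete, here is a minimal sketch for an integer reduction factor, in the spirit of what's being described (grayscale only for brevity; the Image struct and the function name are made up for illustration, not taken from any library mentioned in this thread):

// --- box_average.cpp (sketch) ---
#include <cstdint>
#include <vector>

struct Image {                        // hypothetical grayscale image
    int width, height;
    std::vector<std::uint8_t> pixels; // row-major, width * height bytes
};

// Shrink by an integer factor: every factor-by-factor block of source
// pixels is averaged into one destination pixel.
Image downsizeByAveraging(const Image& src, int factor)
{
    Image dst;
    dst.width  = src.width  / factor;
    dst.height = src.height / factor;
    dst.pixels.resize(dst.width * dst.height);

    for (int y = 0; y < dst.height; ++y)
        for (int x = 0; x < dst.width; ++x) {
            int sum = 0;
            for (int dy = 0; dy < factor; ++dy)
                for (int dx = 0; dx < factor; ++dx)
                    sum += src.pixels[(y * factor + dy) * src.width
                                      + (x * factor + dx)];
            dst.pixels[y * dst.width + x] =
                static_cast<std::uint8_t>(sum / (factor * factor));
        }
    return dst;
}
// --- end ---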

@Extrarius:

Best in what sense? I believe a sinc kernel, or in practice a windowed sinc (e.g., the Lanczos kernel), would be far better in terms of image quality. Bicubic, though, should be good enough for most purposes and is sufficiently fast to calculate. Bilinear is fast, and should not produce significant pixelation if applied correctly (unless the image contains strong high-frequency components).

Soconne, what sort of images are you processing? Natural or synthetic? Bilinear might not be enough for diagrams and the like.
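For reference, the windowed sinc mentioned above boils down to a very small kernel function. A sketch of the Lanczos weighting, with illustrative names only ('a' is the support radius, typically 2 or 3):

// --- lanczos_kernel.cpp (sketch) ---
#include <cmath>

// Lanczos-a kernel: sinc(x) * sinc(x / a) for |x| < a, zero elsewhere,
// where sinc(x) = sin(pi * x) / (pi * x).
double lanczos(double x, int a)
{
    if (x == 0.0)          return 1.0;
    if (std::fabs(x) >= a) return 0.0;
    const double pi  = 3.14159265358979323846;
    const double pix = pi * x;
    return a * std::sin(pix) * std::sin(pix / a) / (pix * pix);
}
// --- end ---

When minifying, the kernel also has to be widened by the scale factor (and the tap weights renormalised), otherwise it aliases much like plain bilinear does.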

For photographs, we applied this algorithm:

newSize = currentSize;
while (newSize / 2 >= targetSize) {
    downsize_by_averaging(newSize, newSize / 2);
    newSize /= 2;
}
downsize_using_bicubic_splines(newSize, targetSize);

It works quite well, and is used on professional equipment (minilabs).

Regards,
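In code, the loop described above might look roughly like this; it reuses the hypothetical Image struct and downsizeByAveraging() from the earlier sketch, and downsizeBicubic() is likewise a made-up name standing in for whatever bicubic routine is available:

// --- progressive_downsize.cpp (sketch) ---
Image downsize(Image img, int targetW, int targetH)
{
    // Halve by averaging for as long as a full 2:1 step still stays
    // at or above the target size ...
    while (img.width / 2 >= targetW && img.height / 2 >= targetH)
        img = downsizeByAveraging(img, 2);

    // ... then do the final, non-integer step with bicubic interpolation.
    return downsizeBicubic(img, targetW, targetH);
}
// --- end ---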

Quote:
Original post by Winograd
@Extrarius:

Best in what sense? I believe a sinc kernel, or in practice a windowed sinc (e.g., the Lanczos kernel), would be far better in terms of image quality. Bicubic, though, should be good enough for most purposes and is sufficiently fast to calculate. Bilinear is fast, and should not produce significant pixelation if applied correctly (unless the image contains strong high-frequency components).

Soconne, what sort of images are you processing? Natural or synthetic? Bilinear might not be enough for diagrams and the like.
Best in the sense that everything I've read (not much) says it produces better looking images and my experience (not a whole lot, but enough for me to agree) confirms that. It makes sense to me, because all the pixels in an area are equally important as long as you ignore 'shape' in the picture (and don't do something like detect edges and attempt to preserve them). I don't know if any technical measure of error would indicate it to be in any way superior, but IME error measurements don't often correlate with perceived error.

If you reduce an image down to 1/3 (~33%) of its original size, for example, bilinear interpolation only takes into account the nearest 4 pixels (the 2x2 square around the sample point), so you're completely ignoring information from 5 pixels out of 9! Pixel averaging, on the other hand, takes a 3x3 block of pixels and weights them equally (which, ignoring 'shape', they are).

If you're doing a non-integer reduction in size (not 1/2, 1/3, etc.), it might help to 'supersample' the area using bilinear interpolation or the like to properly account for locations with fractional coordinates and then average those results, but when the reduction averages an integer number of pixels exactly, simple averaging works best IME.

For resampling with the intention of enlarging a picture, my experience says the algorithms you mention, such as Lanczos resampling, are definitely superior.

Quote:
Original post by Extrarius
Quote:
Original post by Winograd
@Extrarius:

Best in what sense?


Best in the sense that everything I've read (not much) says it produces better looking images and my experience (not a whole lot, but enough for me to agree) confirms that. It makes sense to me, because all the pixels in an area are equally important as long as you ignore 'shape' in the picture (and don't do something like detect edges and attempt to preserve them).


But all the pixels in an area are not equally important, even if you neglect the shape (which is usually done). Surely, pixels further from the center pixel have less importance than pixels closer to it? In a 3x3 kernel, the corner pixels marked 'a' below

aba
bxb
aba

are further from the center pixel 'x' than the pixels marked 'b'. Doesn't it feel natural that 'x' gets a larger weight than 'b', and 'a' a smaller one than 'b'?

Quote:
Original post by Extrarius
I don't know if any technical measure of error would indicate it to be in any way superior, but IME error measurements don't often correlate with perceived error.


I doubt any measure based on error energy (or similar) would rate a 3x3 averaging kernel very highly. But as you say, such measurements don't always correlate well with perceived quality. Sometimes a slightly aliased image looks better (i.e., sharper) than a mathematically accurate downsampled version.

EDIT:
In fact, many photographers seem to like the X3 imaging sensor designed by Foveon even though it produces visible aliasing; the images tend to look sharper because of it. Just look at this image provided by Foveon, especially the stickers in the window of the boat; the one labelled '1998' looks particularly appalling.

Quote:
Original post by Winograd
But all the pixels in an area are not equally important, even if you neglect the shape (which is usually done). Surely, pixels further from the center pixel have less importance than pixels closer to it? [...] Doesn't it feel natural that 'x' gets a larger weight than 'b', and 'a' a smaller one than 'b'?

But a downsampled pixel is not a point, it is an area. And thus anything that falls within that area from the image before downsampling should get equal weight. If something is entirely within the area, there is no concept of closer to the area or farther away from the area. The center of the area is irrelevant.

Now there will be subpixels that are partially within one superpixel and partially within another (unless, as Extrarius said, we stick to integer reductions), so those pixels will need to be weighted accordingly. But aside from that, weighting shouldn't be done.

Quote:
Original post by Winograd
[...] In fact, many photographers seem to like the X3 imaging sensor designed by Foveon even though it produces visible aliasing; the images tend to look sharper because of it. [...]

Based on a quick check of their website, it looks like this sharpness has nothing to do with their method of downsampling, but with their improved method of capturing all colors at every pixel. In fact, in the cases where they do downsample, they seem to imply that they merely average all the subpixels together to get a single superpixel, though they don't say so explicitly (see here).

Quote:
Original post by Agony
[...]But a downsampled pixel is not a point, it is an area. And thus anything that falls within that area from the image before downsampling should get equal weight. If something is entirely within the area, there is no concept of closer to the area or farther away from the area. The center of the area is irrelevant.[...]
Exactly my understanding of why it works well. You're not sampling at (X,Y); you're grouping the pixels from (X-w,Y-h) to (X+w,Y+h) and looking for a color value to represent all of those (sub)pixels in the new image. Each of those pixels contributes equally to the area in the original photo (because the pixels are all the same 'size'), and averaging preserves that relationship.
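A sketch of that area-grouping idea for an arbitrary (possibly non-integer) scale factor, reusing the hypothetical Image struct from the earlier sketch: treat each destination pixel as a rectangle in the source image and weight every source pixel by how much of its area falls inside that rectangle.

// --- area_average.cpp (sketch) ---
#include <algorithm>
#include <cmath>

Image downsizeByArea(const Image& src, int dstW, int dstH)
{
    Image dst{dstW, dstH, std::vector<std::uint8_t>(dstW * dstH)};
    const double scaleX = double(src.width)  / dstW; // source pixels per dest pixel
    const double scaleY = double(src.height) / dstH;

    for (int y = 0; y < dstH; ++y)
        for (int x = 0; x < dstW; ++x) {
            // Destination pixel (x, y) covers [x0, x1) x [y0, y1) in the source.
            const double x0 = x * scaleX, x1 = (x + 1) * scaleX;
            const double y0 = y * scaleY, y1 = (y + 1) * scaleY;
            double sum = 0.0, area = 0.0;

            for (int sy = int(y0); sy < int(std::ceil(y1)); ++sy)
                for (int sx = int(x0); sx < int(std::ceil(x1)); ++sx) {
                    // Overlap of source pixel [sx, sx+1) x [sy, sy+1)
                    // with the destination rectangle.
                    const double w =
                        (std::min(x1, sx + 1.0) - std::max(x0, double(sx))) *
                        (std::min(y1, sy + 1.0) - std::max(y0, double(sy)));
                    sum  += w * src.pixels[sy * src.width + sx];
                    area += w;
                }

            dst.pixels[y * dstW + x] =
                static_cast<std::uint8_t>(sum / area + 0.5);
        }
    return dst;
}
// --- end ---

Interior source pixels get a weight of 1 and border pixels a fractional weight, so for integer factors this degenerates to plain block averaging.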

Use box area sampling as the minification filter: great quality! Here is a link to different filters; the last image is box area sampling minification down to an extremely small size, followed by bicubic magnification:

http://www.liimatta.org/fusion/filter/filter.html

The source code is also available; it currently compiles and works on Linux (various distributions, tested lately on IA32/x86 and AMD64 variants), IRIX 6.5, Windows (32 and 64 bit) and Mac OS X (only tested on Intel Macs lately; tested on Power Macs previously, but I cannot guarantee it works out-of-the-zip :).

Codewise, it would look like this:

(sorry don't know the tags, so I just write plain inline here)

// --- test.cpp ---
#include <fusion/core/surface.hpp>
using namespace fusion::core;

int main()
{
    // initialize the fusion::core component
    core::init();

    // create surface object, overload the pixelformat (*1)
    surface* so = surface::create("test.jpg", pixelformat::argb8888);

    // invoke the appropriate filters ;)
    so->resize(10,10);
    so->resize(400,300);
    so->save("snap.jpg");

    // cya surface object
    so->release();
}

// --- end ---

Note above:

(*1) The pixelformat is very flexible; this one is just a built-in static pixelformat (a few common formats are pre-built into the library). Besides the most typical formats, floating point is fully supported, compressed surfaces are supported (so you can store DXT-compressed surfaces etc.), decompression of surfaces is naturally supported, and mipmaps, cubemaps and so on are supported through the same interface. :)

Example:

surface* so = surface::create("koolstuff.dds"); // <-- note: no overload used

At this point we check if your IDirectSuperDevice20000 or OpenGL ICD supports compressed textures; if not, we decompress and dump the data in some RGB format that suits our purposes. If it does, we dump the contents of the surface as-is and go on our merry way with compressed textures. \:D/

Useful for other stuff as well, for example the filesystem:

stream* s = filesystem::create("test.bla");

Or,

stream* s = filesystem::create("foo/bar.zip/test.bla"); // !!!

Or,

surface* so = surface::create("foo/bar.zip/test.png"); // !!!

Because the filesystem is plugin-based and modular, and the codec manager is also plugin-based and modular, it's possible to mix and match the different plugins to do things like the above. Since we can read JPEGs, PNGs, whatever, and can access zip files like folders, it automatically follows that we can read a jpg out of a zip file, among other things. ;)

As for image resizing, the resize filters implemented are reasonably high quality and worth a minute or two to check out.

Quote:
Original post by Agony
But a downsampled pixel is not a point, it is an area. And thus anything that falls within that area from the image before downsampling should get equal weight. If something is entirely within the area, there is no concept of closer to the area or farther away from the area. The center of the area is irrelevant.


No, I'm afraid you're wrong. A downsampled pixel is a point: it is a sample of a bandlimited continuous image, just like the samples of the original image were. The downsampled pixel can, however, be estimated from a given area of the original image.

But what makes you think that a 3x3 area contains all the information needed to recover the actual sample (i.e., pixel) of the downsampled image? Perhaps one would need 7x7? Or NxN? And how should those pixels be weighted? What I am trying to say is that while your reasoning seems intuitive, it is flawed and oversimplified. This is not to say that the method would not work well visually (although that is subjective). In fact, simple averaging is probably more than adequate considering its low computational complexity.

Quote:
Original post by Agony
Based off of a quick check of their website, it looks like this sharpness has nothing to do with their method of downsampling, but their improved method of capturing all colors at every pixel.


I would hardly call an aliased image an improvement. Then again, this is subjective.

Wow, this has turned into a very interesting discussion. My original intent was simply to sample a texture tiled across a quad for painting purposes. I found that if the texture needed to be tiled, let's say 8 times, across the quad, and I used bilinear filtering to sample the texture at each point on the quad, the result had too much contrast between neighboring pixels.

But I just used simple averaging of the four neighboring pixels and it looks just fine. I did a comparison between simple averaging and the way OpenGL downsamples textures, and the results are identical.

Guest Anonymous Poster
Lol, I still don't get how bilinear interpolation can reduce images... In my implementation you basically map a point in the small image back to a point in the large image, then take its 4 nearest neighbors and do a bilinear interpolation on them (z = Axy + Bx + Cy + D). So if you resample a really large image down to a really small one, each pixel in the small image only samples from 4 pixels, whereas with pixel averaging it averages many pixels. That is probably why your bilinear interp looks like crap: not enough sample points for large reductions.
Y-Go
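For comparison, here is roughly what that reverse-mapped bilinear reduction looks like, again reusing the hypothetical Image struct from the earlier sketches (an illustration, not the poster's actual code):

// --- bilinear_downsize.cpp (sketch) ---
#include <algorithm>

// One reverse-mapped bilinear sample at the fractional source position
// (fx, fy): blend the surrounding 2x2 block -- the z = Axy + Bx + Cy + D
// form described above.
std::uint8_t sampleBilinear(const Image& src, double fx, double fy)
{
    const int x0 = std::clamp(int(fx), 0, src.width  - 2);
    const int y0 = std::clamp(int(fy), 0, src.height - 2);
    const double tx = std::clamp(fx - x0, 0.0, 1.0);
    const double ty = std::clamp(fy - y0, 0.0, 1.0);

    const double p00 = src.pixels[y0 * src.width + x0];
    const double p10 = src.pixels[y0 * src.width + x0 + 1];
    const double p01 = src.pixels[(y0 + 1) * src.width + x0];
    const double p11 = src.pixels[(y0 + 1) * src.width + x0 + 1];

    const double top    = p00 + (p10 - p00) * tx;
    const double bottom = p01 + (p11 - p01) * tx;
    return static_cast<std::uint8_t>(top + (bottom - top) * ty + 0.5);
}

// Downsizing this way maps each destination pixel back into the source
// and takes a single bilinear sample: only 4 source pixels ever
// contribute, no matter how large the reduction, which is exactly why
// it starts to alias below roughly 50%.
Image downsizeBilinear(const Image& src, int dstW, int dstH)
{
    Image dst{dstW, dstH, std::vector<std::uint8_t>(dstW * dstH)};
    for (int y = 0; y < dstH; ++y)
        for (int x = 0; x < dstW; ++x)
            dst.pixels[y * dstW + x] = sampleBilinear(
                src,
                (x + 0.5) * src.width  / dstW - 0.5,
                (y + 0.5) * src.height / dstH - 0.5);
    return dst;
}
// --- end ---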

I suggest everyone interested in the topic view this comparison:

Comparison of different downsampling methods


It is interesting to see how the theoretically same kernel can give drastically different results depending on the implementation. To my knowledge, the triangle kernel of ImageMagick should be equivalent to Photoshop's bilinear (compare the results).

In particular, compare the results for ImageMagick at the bottom of the page. Note that the box-antialiased image (that is, the image filtered with an averaging filter) is the second worst of all tested; only nearest neighbour was worse. One must remember, though, that the exact implementations of those filters are not known (well, the source code is available, but I'm lazy).

It is also worth noting that sinc and Lanczos (the theoretically good filters) don't in fact remove aliasing as effectively as bicubic (or cubic). This is surprising, but it may be explained by the low-order kernels used (in theory the sinc kernel extends to infinity). Some aliasing can also please the eye: we perceive a slightly aliased image as a bit sharper than a correctly antialiased one. Still, one would expect a kernel named 'sinc' to outperform at least bicubic in terms of reduced aliasing.

EDIT: The tests on that site were made with 20% resampling, which is a much more demanding task than, for example, 50% or 25%. I did a few tests of my own with the same test image and 50% resampling; the results were consistent with the ones given on the website.


@anonymous poster:

Bilinear is only good down to a 50% resize, and it is questionable even then. Image resizing is usually done in steps with simple kernels, something like what Emmanuel Deloget explained: for example, resizing to 25% can be done as two 50% downsamplings. In this scheme a lower-order kernel (such as bilinear) is sufficient.
