soconne

Bilinear Interpolation bad for downsizing images


I'm using bilinear interpolation to sample images when they're enlarged and the results look good, but when I try to downsize an image using bilinear interpolation it seems more pixelated than it should be. Is there a better algorithm for downsizing images?
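For reference, here is a minimal sketch of the kind of single-tap bilinear resampling being described (my own illustration, not code from the thread), assuming a grayscale image stored as a 2-D array indexed src[y][x]:

def bilinear_sample(src, x, y):
    # Sample src at fractional coordinates (x, y) using the 2x2 neighborhood.
    h, w = len(src), len(src[0])
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = src[y0][x0] * (1 - fx) + src[y0][x1] * fx
    bot = src[y1][x0] * (1 - fx) + src[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def resize_bilinear(src, new_w, new_h):
    # One bilinear tap per destination pixel. Fine for enlarging; when
    # shrinking by a large factor, most source pixels are never sampled.
    h, w = len(src), len(src[0])
    return [[bilinear_sample(src,
                             x * (w - 1) / max(new_w - 1, 1),
                             y * (h - 1) / max(new_h - 1, 1))
             for x in range(new_w)]
            for y in range(new_h)]

When enlarging, every output pixel falls between source pixels, so this works well; when shrinking a lot, most source pixels never influence the output, which is where the pixelated look discussed below comes from.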

To tell you the truth, I don't remember the rule of thumb, but I can tell you that for resizing larger, bilinear is *awful*. The sucker is only piecewise linear, so its first derivative isn't continuous. Bicubic is much better for sampling up.

I thought the rule of thumb was bicubic for up, bilinear for down...
If anything, I recall bilinear being rather "soft", meaning you lose a little detail. If you're viewing at 1:1, you could try bicubic (it may be overkill, but I seem to recall it giving sharper results); if you're planning on viewing the image at anything other than 1:1, though, the extra detail gets lost to the filtering anyway.

-Michael g.
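For the curious, one common bicubic kernel is Keys' cubic convolution with a = -0.5 (the Catmull-Rom spline). This isn't from the posts above, just a sketch of why bicubic is smoother: unlike the linear "tent" kernel, it has a continuous first derivative.

def cubic_weight(x, a=-0.5):
    # Keys' cubic convolution kernel; a = -0.5 gives Catmull-Rom.
    x = abs(x)
    if x <= 1.0:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2.0:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

A 1-D cubic interpolation weights the 4 nearest samples with this kernel; bicubic applies it separably in x and y (16 taps per output pixel).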

Guest Anonymous Poster
Yes, just use pixel averaging. I didn't even think bilinear interpolation worked for downsampling images, because of the reverse mapping you have to do.

Pixel averaging is about the best you can do for reducing image size.
Most other algorithms (including bilinear interpolation) only work decently for reductions down to around 50% of the original size.
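A minimal sketch of the pixel averaging (box filter) being recommended here, for an integer scale factor n (grayscale 2-D array; the names are mine):

def downscale_box(src, n):
    # Each output pixel is the plain mean of an n x n block of source pixels.
    h, w = len(src), len(src[0])
    out = []
    for oy in range(h // n):
        row = []
        for ox in range(w // n):
            block = [src[oy * n + dy][ox * n + dx]
                     for dy in range(n) for dx in range(n)]
            row.append(sum(block) / (n * n))
        out.append(row)
    return out

Because every source pixel inside the block contributes, nothing gets skipped no matter how large the reduction factor is, which is why this keeps working well below 50%.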

@Extrarius:

Best in what sense? I believe a sinc kernel, or in practice a windowed sinc (e.g., the Lanczos kernel), would be far better in terms of image quality. That said, bicubic should be good enough for most purposes and is sufficiently fast to calculate. Bilinear is fast, and should not produce significant pixelation if applied correctly (unless the image contains strong high-frequency components).

Soconne, what sort of images are you processing? Natural or synthetic? Bilinear might not be enough for diagrams and similar.
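For reference, the windowed sinc mentioned here is typically the Lanczos kernel: L(x) = sinc(x) * sinc(x/a) for |x| < a and 0 otherwise, with sinc(t) = sin(pi t)/(pi t) and support a commonly 2 or 3. A small sketch (my naming, not from the post):

import math

def lanczos_weight(x, a=3):
    # Lanczos kernel: sinc(x) * sinc(x / a) inside the support, 0 outside.
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

Each output sample is then a weighted sum of nearby source pixels using these weights (normalized to sum to 1); when shrinking, the kernel is usually stretched by the reduction factor so it covers enough source pixels.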

Quote:
Original post by Winograd
@Extrarius:

Best in what sense? I believe a sinc kernel, or in practice a windowed sinc (e.g., the Lanczos kernel), would be far better in terms of image quality. That said, bicubic should be good enough for most purposes and is sufficiently fast to calculate. Bilinear is fast, and should not produce significant pixelation if applied correctly (unless the image contains strong high-frequency components).

Soconne, what sort of images are you processing? Natural or synthetic? Bilinear might not be enough for diagrams and similar.
Best in the sense that everything I've read (not much) says it produces better-looking images, and my experience (not a whole lot, but enough for me to agree) confirms that. It makes sense to me, because all the pixels in an area are equally important as long as you ignore 'shape' in the picture (and don't do something like detect edges and attempt to preserve them). I don't know if any technical measure of error would indicate it is in any way superior, but IME error measurements don't often correlate with perceived error.

If you reduce an image down to 1/3 (~33%) of its original size, for example, bilinear interpolation only takes into account the nearest 4 pixels (the 2x2 square around the sample point), so you're completely ignoring information from 5 pixels out of 9! Pixel averaging, on the other hand, takes a 3x3 block of pixels and weights them equally (which, ignoring 'shape', they are).

If you're doing a non-integer reduction in size (not 1/2, 1/3, etc.), it might help to 'supersample' the area using bilinear interpolation or the like to properly account for locations with fractional coordinates and then average those results, but when the reduction covers an integer number of pixels exactly, simple averaging works best IME.
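A sketch of that 'supersample then average' idea for non-integer factors, taking k x k bilinear taps inside each destination pixel's footprint and averaging them (grayscale 2-D array; the helper and names are mine):

def bilerp(src, x, y):
    # Bilinear tap at fractional source coordinates (x, y).
    h, w = len(src), len(src[0])
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = src[y0][x0] * (1 - fx) + src[y0][x1] * fx
    bot = src[y1][x0] * (1 - fx) + src[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def downscale_supersampled(src, new_w, new_h, k=4):
    h, w = len(src), len(src[0])
    sx, sy = w / new_w, h / new_h      # source pixels per destination pixel
    out = []
    for oy in range(new_h):
        row = []
        for ox in range(new_w):
            acc = 0.0
            for j in range(k):
                for i in range(k):
                    # k x k sample points spread evenly over the footprint
                    x = min(max((ox + (i + 0.5) / k) * sx - 0.5, 0.0), w - 1)
                    y = min(max((oy + (j + 0.5) / k) * sy - 0.5, 0.0), h - 1)
                    acc += bilerp(src, x, y)
            row.append(acc / (k * k))
        out.append(row)
    return out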

For resampling with the intention of enlarging a picture, my experience says the algorithms you mention, such as Lanczos resampling, are definitely superior.

Quote:
Original post by Extrarius
Quote:
Original post by Winograd
@Extrarius:

Best in what sense?


Best in the sense that everything I've read (not much) says it produces better-looking images, and my experience (not a whole lot, but enough for me to agree) confirms that. It makes sense to me, because all the pixels in an area are equally important as long as you ignore 'shape' in the picture (and don't do something like detect edges and attempt to preserve them).


But all the pixels in an area are not equally important, even if you neglect the shape (which is usually done). Surely pixels further from the center pixel have less importance than pixels closer to the center pixel? In a 3x3 kernel, the corner pixels, marked 'a' below,

aba
bxb
aba

are further from the center pixel 'x' than the pixels marked 'b'. Doesn't it feel natural that 'x' gets a larger weight than 'b', and 'a' a smaller weight than 'b'?
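As an illustration of that weighting: a normalized 3x3 kernel where the center gets the most weight and the corners the least. The 1-2-1 binomial weights here are just one common choice, not something prescribed in the thread.

# 'a b a / b x b / a b a' pattern with a = 1, b = 2, x = 4; divide by 16 to normalize.
KERNEL = [[1, 2, 1],
          [2, 4, 2],
          [1, 2, 1]]

def weighted_sample(src, cx, cy):
    # Weighted average of the 3x3 neighborhood around an interior pixel (cx, cy).
    acc = 0.0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += KERNEL[dy + 1][dx + 1] * src[cy + dy][cx + dx]
    return acc / 16.0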

Quote:
Original post by Extrarius
I don't know if any technical measure of error would indicate it is in any way superior, but IME error measurements don't often correlate with perceived error.


I doubt any measure based on error energy (or similar) would rate a 3x3 averaging kernel very highly. But as you say, the measurements don't always correlate well with perceived quality. Sometimes a slightly aliased image looks better (i.e., sharper) than a mathematically accurate downsampled version.

EDIT:
In fact, many photographers seem to like the X3 imaging sensor designed by Foveon: although it produces visible aliasing, the images tend to look sharper (because of the aliasing). Just look at this image provided by Foveon. Especially look at the stickers in the window of the boat. The one labeled '1998' looks especially appalling.

Quote:
Original post by Winograd
Quote:
Original post by Extrarius
Quote:
Original post by Winograd
@Extrarius:

Best in what sense?


Best in the sense that everything I've read (not much) says it produces better-looking images, and my experience (not a whole lot, but enough for me to agree) confirms that. It makes sense to me, because all the pixels in an area are equally important as long as you ignore 'shape' in the picture (and don't do something like detect edges and attempt to preserve them).


But all the pixels in an area are not equally important, even if you neglect the shape (which is usually done). Surely pixels further from the center pixel have less importance than pixels closer to the center pixel? In a 3x3 kernel, the corner pixels, marked 'a' below,

aba
bxb
aba

are further from the center pixel 'x' than the pixels marked 'b'. Doesn't it feel natural that 'x' gets a larger weight than 'b', and 'a' a smaller weight than 'b'?

But a downsampled pixel is not a point; it is an area. Thus anything that falls within that area in the image before downsampling should get equal weight. If something lies entirely within the area, there is no concept of it being closer to or farther from the area. The center of the area is irrelevant.

Now there will be subpixels that are partially within one superpixel and partially within another (unless, as Extrarius said, we stick to integer reductions), so those pixels will need to be weighted accordingly. But aside from that, weighting shouldn't be done.
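A sketch of that idea, exact area averaging: each source pixel is weighted by how much of its area overlaps the destination pixel's footprint, which handles non-integer factors and reduces to plain block averaging for integer ones (grayscale 2-D array; the names are mine):

import math

def downscale_area(src, new_w, new_h):
    h, w = len(src), len(src[0])
    sx, sy = w / new_w, h / new_h
    out = []
    for oy in range(new_h):
        y0, y1 = oy * sy, (oy + 1) * sy          # footprint in source rows
        row = []
        for ox in range(new_w):
            x0, x1 = ox * sx, (ox + 1) * sx      # footprint in source columns
            acc, total = 0.0, 0.0
            for yy in range(int(y0), min(int(math.ceil(y1)), h)):
                wy = min(y1, yy + 1) - max(y0, yy)       # vertical overlap
                for xx in range(int(x0), min(int(math.ceil(x1)), w)):
                    wx = min(x1, xx + 1) - max(x0, xx)   # horizontal overlap
                    acc += src[yy][xx] * wx * wy
                    total += wx * wy
            row.append(acc / total)
        out.append(row)
    return out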

Quote:
Original post by Winograd
Quote:
Original post by Extrarius
I don't know if any technical measure of error would indicate it is in any way superior, but IME error measurements don't often correlate with perceived error.


I doubt any measure based on error energy (or similar) would rate a 3x3 averaging kernel very highly. But as you say, the measurements don't always correlate well with perceived quality. Sometimes a slightly aliased image looks better (i.e., sharper) than a mathematically accurate downsampled version.

EDIT:
In fact, many photographers seem to like the X3 imaging sensor designed by Foveon: although it produces visible aliasing, the images tend to look sharper (because of the aliasing). Just look at this image provided by Foveon. Especially look at the stickers in the window of the boat. The one labeled '1998' looks especially appalling.

Based on a quick check of their website, it looks like this sharpness has nothing to do with their method of downsampling, but rather with their improved method of capturing all colors at every pixel. In fact, in the cases where they do downsample, they seem to imply that they merely average all the subpixels together to get a single superpixel, though they don't say so explicitly (see here).

Quote:
Original post by Agony
[...]But a downsampled pixel is not a point; it is an area. Thus anything that falls within that area in the image before downsampling should get equal weight. If something lies entirely within the area, there is no concept of it being closer to or farther from the area. The center of the area is irrelevant.[...]
Exactly my understanding of why it works well. You're not sampling at (X,Y); you're grouping the pixels from (X-w,Y-h) to (X+w,Y+h) and looking for a color value to represent all of those (sub)pixels in the new image. Each of those pixels contributes equally to the area in the original photo (because the pixels are all the same 'size'), and averaging preserves that relationship.
