# 2D Scaling


## Recommended Posts

Hi, I'm quite new to graphics programming, so please forgive me if my question sounds stupid. I'm trying to implement a 2D scaling routine that handles any scaling factor. Scaling above factor 0.5 is fine (I just do simple bilinear interpolation to smooth things out), but smaller than that, problems occur. For example, when scaling with factor 0.2, for 1 destination pixel there are 5 x 5 corresponding source pixels. How do I interpolate them?

##### Share on other sites
You could use mipmapping, I think.

The whole concept is called minification, and there are a number of algorithms.
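In case it helps, the mipmap-style approach boils down to repeatedly halving the image (averaging 2x2 blocks) until the remaining scale factor is at least 0.5, then finishing with an ordinary bilinear resize. A rough sketch in Python/NumPy (the function names are mine, not from any particular library):

```python
import numpy as np

def halve(img):
    """Average each 2x2 block into one pixel (box minification by 2)."""
    h, w = img.shape[:2]
    h, w = h - h % 2, w - w % 2  # drop an odd edge row/column if present
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def minify(img, factor):
    """Repeatedly halve until the leftover scale factor is >= 0.5."""
    while factor < 0.5:
        img = halve(img)
        factor *= 2.0
    return img, factor  # finish with a bilinear resize by 'factor'

# Scaling by 0.2 halves twice (0.2 -> 0.4 -> 0.8), then bilinear does the rest.
src = np.arange(16, dtype=float).reshape(4, 4)
small, remaining = minify(src, 0.2)
```

Each halving step averages 2x2 neighbourhoods, so no source pixel is skipped the way naive point sampling would skip them.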

##### Share on other sites
Well, you interpolate them in exactly the same way you always do - weight the source pixels by the proportion of the resulting area they occupy.

However, you really can't cram much information into a small space, no matter how hard you try - when scaling down, some quality needs to be sacrificed. If bilinear filtering is totally unsatisfactory, you probably won't find anything too much better.

If performance isn't an issue, there are many other filters available. A bicubic filter will preserve a little more detail, but don't expect magic. 'Edge-preserving' scaling filters also exist, which can give the illusion of detail preservation, although the result is less faithful to the original than a bilinear/bicubic one.

You may want to Google 'texture min-filtering' for more general information.

Regards

##### Share on other sites
Quote:
> Original post by TheAdmiral: Well, you interpolate them in exactly the same way you always do - weight the source pixels by the proportion of the resulting area they occupy.

The problem is I don't know what weight a pixel should have in this situation. Using the example in my original post, should each of the 25 pixels weigh 1/25? Or should it be a Bartlett filter or something else?

##### Share on other sites
Imagine overlaying, on to the existing image, a grid that has one-fifth the resolution in each dimension. The grid-blocks represent the pixels of the new image. Each pixel in the source image will then lie in one, two or four of these blocks.
In a bilinear filter, each source pixel should contribute to every block that it appears in, according to the proportion of that block's area it occupies.

In general, this is a non-trivial calculation, but if you are scaling an image down by a factor of 5 in both dimensions and the source image's dimensions are both multiples of 5, then the blocks will map perfectly to 5x5 areas in the source, so indeed, each pixel contributes 1/25 of the colour of its (unique) destination pixel.
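For that aligned integer-factor case, the filter is just the mean of each block. A minimal NumPy sketch (names and reshape trick are my own choice, assuming a greyscale float image):

```python
import numpy as np

def box_downscale(img, n):
    """Downscale by an integer factor n; assumes both dimensions are
    multiples of n. Each destination pixel is the unweighted mean of its
    n x n source block, i.e. every source pixel contributes 1/(n*n)."""
    h, w = img.shape[:2]
    assert h % n == 0 and w % n == 0
    # Split each axis into (blocks, within-block) and average within-block.
    return img.reshape(h // n, n, w // n, n).mean(axis=(1, 3))

src = np.full((10, 10), 200.0)
dst = box_downscale(src, 5)  # 2x2 result; a flat image stays flat
```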

Of course, this will kill a lot of the entropy - blending 25 pixels into one - but the destination image is one twenty-fifth of the area, so there's nothing more you can hope to achieve.

If you are coding such a filter yourself, you could iterate over the source pixels, accumulating their data into the destination, but I find it more intuitive to work the other way around: iterate over the destination image, determine which source pixels contribute to each dest-pixel, calculate each area of intersection, and take the weighted sum.
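That destination-driven loop might look something like this (a pure-Python sketch for a greyscale image stored as a list of rows; the names are mine):

```python
import math

def area_downscale(src, dst_w, dst_h):
    """Area-weighted downscale: each destination pixel averages the source
    region it covers, with partially-covered source pixels weighted by
    their overlap area. Works for any (non-integer) shrink factor."""
    src_h, src_w = len(src), len(src[0])
    fx, fy = src_w / dst_w, src_h / dst_h   # source pixels per dest pixel
    out = [[0.0] * dst_w for _ in range(dst_h)]
    for dy in range(dst_h):
        for dx in range(dst_w):
            # Footprint of this destination pixel in source coordinates.
            x0, x1 = dx * fx, (dx + 1) * fx
            y0, y1 = dy * fy, (dy + 1) * fy
            acc, area = 0.0, 0.0
            for sy in range(int(y0), min(math.ceil(y1), src_h)):
                h = min(sy + 1, y1) - max(sy, y0)      # vertical overlap
                for sx in range(int(x0), min(math.ceil(x1), src_w)):
                    w = min(sx + 1, x1) - max(sx, x0)  # horizontal overlap
                    acc += src[sy][sx] * w * h
                    area += w * h
            out[dy][dx] = acc / area
    return out

# Collapsing a 5x5 image to one pixel gives the mean of all 25 values,
# i.e. each source pixel weighs exactly 1/25, as discussed.
src = [[float(10 * r + c) for c in range(5)] for r in range(5)]
dst = area_downscale(src, 1, 1)
```

For the aligned 5x5 case every overlap is exactly 1, so this degenerates to the simple box average; for awkward factors like 0.3, edge pixels get fractional weights automatically.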

Regards
