
What's it called when you resize an image to dimensions that are not a factor of the original image's dimensions?



So I'm making a 2D game where part of the interface is supposed to display a tilemap, and I'm writing a map display that can be resized by the user. It's pretty obvious that if the display is twice as large as the map, you simply give each tile two pixels, and if the display is half the size of the map, you take every two pixels and average their colors together. But what happens if the display is one third the size of the map, or 1.5 times the size of the map? Can someone provide me with a starting point to learn about this?


Stretching? Sampling is a popular method for doing this, with some form of interpolation/filtering. Point (a.k.a. nearest) filtering means you pick the color of the closest pixel; linear filtering means you do some form of interpolation between the closest pixels.
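To make the two filter modes concrete, here is a minimal sketch (not from the thread, names are illustrative) of nearest vs. linear sampling along a single row of pixel values:

```python
def sample_nearest(src, t):
    # t is a continuous coordinate in [0, len(src) - 1].
    # Nearest/point filtering: pick the closest source pixel.
    return src[round(t)]

def sample_linear(src, t):
    # Linear filtering: blend the two closest pixels by distance.
    i = int(t)
    i2 = min(i + 1, len(src) - 1)
    frac = t - i
    return src[i] * (1 - frac) + src[i2] * frac

row = [0, 100, 200]
print(sample_nearest(row, 1.4))  # 100 (snaps to the closest pixel)
print(sample_linear(row, 1.5))   # 150.0 (halfway blend of 100 and 200)
```

Bilinear filtering in 2D is the same idea applied on both axes: interpolate along x between the two nearest columns, then along y between those two results.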


Most engines, frameworks and/or graphics libraries will provide you with methods for this built in, so what are you using?


Edit: The easiest way to get good results might be to draw the map using a fixed predefined size onto a new image/buffer, then stretch that one.
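A sketch of that suggested approach in plain Python (illustrative only, no particular engine assumed): expand each tile into a fixed-size block in a buffer, then stretch that buffer to the display size with nearest-neighbor sampling.

```python
TILE = 4  # fixed tile size in pixels, chosen arbitrarily for this sketch

def render_map(tilemap, palette):
    """Expand each tile id into a TILE x TILE block of colors."""
    h, w = len(tilemap), len(tilemap[0])
    return [[palette[tilemap[y // TILE][x // TILE]]
             for x in range(w * TILE)]
            for y in range(h * TILE)]

def stretch(buf, out_w, out_h):
    """Nearest-neighbor stretch of buf to out_w x out_h."""
    in_h, in_w = len(buf), len(buf[0])
    return [[buf[y * in_h // out_h][x * in_w // out_w]
             for x in range(out_w)]
            for y in range(out_h)]

tilemap = [[0, 1], [1, 0]]
palette = {0: (0, 0, 0), 1: (255, 255, 255)}
img = render_map(tilemap, palette)   # 8x8 intermediate buffer
view = stretch(img, 6, 6)            # any display size, e.g. 6x6
```

The point of the intermediate buffer is that the awkward non-integer ratio only shows up in the final stretch, which any image library can do for you.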

Edited by DvDmanDT


The technical term for this is "resampling", also known as "scaling". The basic process goes like this:


for each destination pixel:
   for each source pixel in some radius around the sample location:
       read source pixel
       compute reconstruction filter weight
       multiply source pixel with filter weight
       add result to running sum
   output sum as destination pixel
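The loop above can be sketched as runnable Python. This is a minimal 1-D version using a triangle (tent) reconstruction filter; the function name, the fixed `radius`, and the weight normalization are my assumptions, not something from the post.

```python
def resample(src, dst_len, radius=1.0):
    """Resample a 1-D list of pixel values to dst_len samples
    using a triangle reconstruction filter of the given radius."""
    scale = len(src) / dst_len
    dst = []
    for d in range(dst_len):
        # Center of destination pixel d, mapped into source coordinates.
        center = (d + 0.5) * scale - 0.5
        total = 0.0
        weight_sum = 0.0
        # Visit every source pixel within `radius` of the sample location.
        lo = max(0, int(center - radius))
        hi = min(len(src) - 1, int(center + radius) + 1)
        for s in range(lo, hi + 1):
            # Triangle filter: weight falls off linearly with distance.
            w = max(0.0, 1.0 - abs(s - center) / radius)
            total += src[s] * w
            weight_sum += w
        dst.append(total / weight_sum if weight_sum else 0.0)
    return dst

print(resample([0, 100, 200, 300], 2))  # → [50.0, 250.0]
```

For the 2-D case you run the same sum over a small neighborhood in both axes. Note that when downscaling heavily, a proper implementation widens the filter radius in proportion to the scale factor so that every source pixel contributes; this sketch keeps it fixed for brevity.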


Like the above poster mentioned, most CPU-based graphics frameworks/libraries will have functionality for doing this. GPUs can also do it in hardware, but they only support two types of reconstruction filters (box filter, a.k.a. "point sampling", and triangle filter, a.k.a. "bilinear filtering").

