dAND3h

Bilinear filter


Recommended Posts

Can someone tell me what a bilinear filter actually does?

If I render my scene to a texture and render that into another render target which is smaller, it will become pixelated. If I apply bilinear filtering to the downsized texture, will it make it look less "pixelated" and blur the colours more?

Suppose you reduce your texture to 3/4 of its width and height. Then there'll be 9 pixels in the downsized texture for every 16 in the original one. How do you select which pixels get used in the smaller texture? Ideally, you want to use as much information as possible from the original 16 pixels, instead of just throwing it away.

The "no filtering" mode pretty much just selects the nearest pixel in the original texture to use in the smaller one, which means not only are you throwing 7 of the original pixels away (loss of information), you also get nasty spatial artifacts because which pixel is "nearest" depends on where on the texture the pixel is (aliasing).

On the other hand, bilinear filtering selects the four nearest pixels in the original texture and computes a weighted average of them (which is responsible for the slight blurring effect). That way much less information is thrown away, and there are fewer aliasing artifacts. The weighting is done one dimension at a time. For instance:

[source]
+-----+
|   x |
|     |
+-----+[/source]
Here, the + are the pixels of the original texture, and x is the corresponding pixel of the smaller texture. Nearest-neighbor sampling will just take the top-right pixel and not look any further. The bilinear filter, on the other hand, will give more weight to that pixel (as expected) but will still consider the other three. First, it takes the 1D weighted average of the top-left and top-right pixels, based on the x-coordinate of x - so here, the top-left pixel will have a weight of roughly 30% and the top-right pixel 70%. Then it does the same with the two bottom pixels. At this point you have two new "colors" - one average at the top and one at the bottom - average those two based on the y-coordinate of x, and that's your final color for x.

For the mathematically inclined, you are basically doing a bilinear interpolation over the four + with parameter x.
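To make the weighting concrete, here's a minimal pixel-shader sketch of that same computation done by hand; the names (sourceSampler, sourceSize) are just placeholders, and it assumes the source texture is bound with point sampling so the hardware isn't already doing the blend for you:

[source lang="hlsl"]
// Illustrative sketch: manual bilinear interpolation in a pixel shader.
// sourceSampler is assumed to be a point-sampled (unfiltered) sampler on
// the original texture, and sourceSize its width/height in texels.
sampler2D sourceSampler;
float2 sourceSize;

float4 BilinearSample(float2 uv)
{
    // Position in texel space, shifted so texel centers sit on integers.
    float2 texelPos = uv * sourceSize - 0.5;
    float2 topLeft  = floor(texelPos);
    float2 f        = texelPos - topLeft;      // fractional part = the weights

    float2 texelSize = 1.0 / sourceSize;
    float2 uv00 = (topLeft + 0.5) * texelSize; // center of the top-left texel

    // The four nearest texels.
    float4 c00 = tex2D(sourceSampler, uv00);
    float4 c10 = tex2D(sourceSampler, uv00 + float2(texelSize.x, 0));
    float4 c01 = tex2D(sourceSampler, uv00 + float2(0, texelSize.y));
    float4 c11 = tex2D(sourceSampler, uv00 + float2(texelSize.x, texelSize.y));

    // Blend one dimension at a time: first across x, then across y.
    float4 top    = lerp(c00, c10, f.x);
    float4 bottom = lerp(c01, c11, f.x);
    return lerp(top, bottom, f.y);
}
[/source]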

There are more sophisticated filtering methods, such as the Lanczos filter, which generally produce better results but are too expensive or complicated to use in real time. A bicubic filter is often used when the smaller texture needs to remain sharp: instead of a linear weighting scheme, it uses a polynomial weighting curve with a stronger falloff, which tends to reduce the blurring effect.
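For a rough idea of what such a weighting curve looks like, here's a sketch of the classic Keys cubic-convolution weight (the a = -0.5 variant, often called Catmull-Rom); a bicubic filter applies it over a 4x4 neighbourhood with w(dx) * w(dy), rather than bilinear's 2x2:

[source lang="hlsl"]
// Keys cubic-convolution weight with a = -0.5 (Catmull-Rom).
// x is the distance from the sample point to a texel center, in texels.
float CubicWeight(float x)
{
    const float a = -0.5;
    x = abs(x);
    if (x <= 1.0)
        return (a + 2.0) * x * x * x - (a + 3.0) * x * x + 1.0;
    else if (x < 2.0)
        return a * x * x * x - 5.0 * a * x * x + 8.0 * a * x - 4.0 * a;
    return 0.0;
}
[/source]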

Thank you for the reply. My goal is to "apply 2 bilinear filter stages to down-size the scene to 1/16th its size". Does that make sense? It seems bilinear filtering doesn't actually resize a texture, so I guess I'm not understanding that wording.

Also, see this thread: http://www.gamedev.net/topic/598118-about-downsampling-upsampling/

I am also interested in the answer to the final question in that thread - would you know how to answer that?


Thank you for the reply. My goal is to "apply 2 bilinear filter stages to down-size the scene to 1/16th its size". Does that make sense? It seems bilinear filtering doesn't actually resize a texture, so I guess I'm not understanding that wording.

No, bilinear filtering is just a sampling scheme for interpolating pixels on a 2D grid. To resize the texture, you scale the pixel positions by your resizing factor (to reduce the texture to 1/16th its size, you multiply the x-coordinate of each pixel by 1/4 and the y-coordinate by 1/4). You can then use whatever sampling scheme you like to get the new pixels' colors.

This is actually kind of tricky to implement right for all resizing factors. But luckily, other people have already done it!
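In shader terms the resize itself comes from the render-target size: a single 1/4 x 1/4 downsample pass can be as simple as the sketch below. Names are illustrative, and it assumes you draw a full-screen quad into the smaller target with the full-size scene texture bound to a sampler whose Min/Mag filters are LINEAR.

[source lang="hlsl"]
// One downsample pass: the quad covers the whole (smaller) render target,
// so uv already maps each destination pixel back onto the full-size source.
// sceneSampler is assumed to use LINEAR filtering, which is what makes each
// read a bilinear (weighted 2x2) sample.
sampler2D sceneSampler;

float4 DownsamplePS(float2 uv : TEXCOORD0) : COLOR0
{
    return tex2D(sceneSampler, uv);
}
[/source]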

Ok, so say I managed to render my scene and then render it again into a texture which is 1/16th the size. When rendering into the 1/16th-size render target, is that when I should use bilinear filtering, so as not to lose information? Or should I render with no filtering first to get a pixelated image, and then use bilinear filtering?


Ok, so say I managed to render my scene and then render it again into a texture which is 1/16th the size. When rendering into the 1/16th-size render target, is that when I should use bilinear filtering, so as not to lose information? Or should I render with no filtering first to get a pixelated image, and then use bilinear filtering?

Once you lose information you cannot get it back. You should use bilinear filtering (or better, bicubic, if time isn't a problem) all the way.

To go to 1/16th the size while ensuring you read every texel in the source and weight each one equally, you can either do it as two passes that each go down to 1/4 with a single bilinear texture read per output pixel, or do it in one pass with 4 texture reads spaced so that all 16 texels in that footprint get sampled equally.
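For what it's worth, a rough sketch of the single-pass version might look like this; the names are placeholders, sourceTexelSize would be float2(1/width, 1/height) of the full-size source, the sampler is bilinear, and uv is assumed to land on the center of each 4x4 footprint:

[source lang="hlsl"]
// Single-pass 1/16 downsample with 4 bilinear taps.
// Each tap sits at the corner shared by a 2x2 block of the 4x4 footprint,
// so the hardware's bilinear blend averages that block equally; averaging
// the 4 taps then weights all 16 source texels the same.
sampler2D sceneSampler;        // bilinear sampler on the full-size source
float2 sourceTexelSize;        // float2(1/sourceWidth, 1/sourceHeight)

float4 Downsample16PS(float2 uv : TEXCOORD0) : COLOR0
{
    float4 sum = 0;
    sum += tex2D(sceneSampler, uv + float2(-1, -1) * sourceTexelSize);
    sum += tex2D(sceneSampler, uv + float2( 1, -1) * sourceTexelSize);
    sum += tex2D(sceneSampler, uv + float2(-1,  1) * sourceTexelSize);
    sum += tex2D(sceneSampler, uv + float2( 1,  1) * sourceTexelSize);
    return sum * 0.25;
}
[/source]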

I have some long-winded explanations of point and bilinear filtering in an article on my blog, if you're interested in the nitty-gritty details, although you'll probably want to read the previous article first if you're not familiar with signal-processing basics.

I guess I am not asking the correct questions, because I still don't understand what I want. I don't know if you are familiar with the nvidia Quad.fxh files, but I include those to use certain functions such as:

DECLARE_SIZED_QUAD_TEX(theTexture,theTextureSampler,"A16B16G16R16",1)
which creates a texture, where the 1 at the end is the percentage size of the texture.

I am trying to do a bloom filter and I guess I am just not sure what size the textures should be.
