
# Bilinear filter

## 10 posts in this topic

Can someone tell me what a bilinear filter actually does?

If I render my scene to a texture and render that into another render target which is smaller, it will become pixelated. If I apply bilinear filtering to the down sized texture, will it make it look less "pixelated" and blur the colours more?

##### Share on other sites
Suppose you reduce your texture by a factor of 3/4 in width and height. Then, there'll be 9 pixels in the downsized texture, for every 16 in the original one. How do you select which pixels get used in the smaller texture? Ideally, you want to use as much information as possible from the original 16 pixels, instead of just throwing it away.

The "no filtering" mode pretty much just selects the nearest pixel in the original texture to use in the smaller one, which means not only are you throwing 7 of the original pixels away (loss of information), you also get nasty spatial artifacts because which pixel is "nearest" depends on where on the texture the pixel is (aliasing).

On the other hand, bilinear filtering selects the four nearest pixels in the original texture and computes a weighted average of those four pixels (which is responsible for the slight blurring effect). That way, much less information is thrown away, and there are fewer aliasing artifacts. The weighting is done one dimension at a time. For instance:

[source]
+-----+
|    x|
|     |
+-----+[/source]
Here, the +'s are the pixels of the original texture, and x is the corresponding pixel of the smaller texture. Nearest-neighbor sampling will just take the top-right pixel and not look any further. The bilinear filter, on the other hand, will give more weight to that pixel (as expected) but will still consider the other three. First, it takes the 1D weighted average of the top-left and top-right pixels, based on the x-coordinate of x - so here, the top-left pixel will have weight roughly 30% and the top-right pixel roughly 70%. Then, it does the same with the two bottom pixels. At this point, you have two new "colors" - one average at the top, and one at the bottom. Average those two based on the y-coordinate of x, and that's your final color for x.

For the mathematically inclined, you are basically doing a bilinear interpolation over the four + with parameter x.
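The two-step average described above can be sketched numerically. This is a minimal Python illustration with made-up pixel values (real GPUs do this in hardware); `fx`/`fy` are the fractional position of the sample point between the four surrounding pixels.

```python
def bilerp(top_left, top_right, bottom_left, bottom_right, fx, fy):
    # First average each row horizontally, weighted by fx...
    top = top_left * (1.0 - fx) + top_right * fx
    bottom = bottom_left * (1.0 - fx) + bottom_right * fx
    # ...then average the two row results vertically, weighted by fy.
    return top * (1.0 - fy) + bottom * fy

# With fx = 0.7, the two right-hand pixels get ~70% of the horizontal
# weight, matching the 70%/30% example above.
print(bilerp(0.0, 1.0, 0.0, 1.0, 0.7, 0.2))  # ≈ 0.7
```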

There are more sophisticated filtering methods, such as the Lanczos filter, which generally produce better results but are too expensive or complicated to use in realtime. Often, a bicubic filter is used when the smaller texture needs to remain sharp: instead of a linear weighting scheme, it uses a polynomial weighting curve with a stronger falloff, which tends to reduce the blurring effect.

##### Share on other sites
Thank you for the reply. My goal is to "apply 2 bilinear filter stages to down-size the scene to 1/16th its size". Does that make sense? It seems bilinear filtering doesn't actually resize a texture, so I guess I am not understanding that wording.

I am also interested in the answer to the final question; would you know how to answer that?

##### Share on other sites
[quote name='dAND3h' timestamp='1354136098' post='5005071']
Thank you for the reply. My goal is to "apply 2 bilinear filter stages to down-size the scene to 1/16th its size". Does that make sense? It seems bilinear filtering doesn't actually resize a texture, so I guess I am not understanding that wording.
[/quote]
No, bilinear filtering is just a sampling scheme to interpolate pixels on a 2D grid. To resize the texture, you need to multiply the old pixel position by your resizing factor (if you want to reduce the texture to 1/16 its size, you need to multiply the x-coordinate of each pixel by 1/4, and the y-coordinate by 1/4). You can use whatever sampling scheme to get your new pixels' colors then.

This is actually kind of tricky to implement right for all resizing factors. But luckily, other people have already done it!
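As a rough sketch of that coordinate mapping (hypothetical Python, with a grayscale image stored as nested lists, not anyone's production code): for every pixel of the smaller image, scale its position back into the source image and take a bilinear sample there.

```python
def bilinear_resize(src, dst_w, dst_h):
    src_h, src_w = len(src), len(src[0])
    dst = [[0.0] * dst_w for _ in range(dst_h)]
    for y in range(dst_h):
        for x in range(dst_w):
            # Map the destination pixel back into source coordinates.
            sx = x * (src_w - 1) / max(dst_w - 1, 1)
            sy = y * (src_h - 1) / max(dst_h - 1, 1)
            x0, y0 = int(sx), int(sy)
            x1 = min(x0 + 1, src_w - 1)
            y1 = min(y0 + 1, src_h - 1)
            fx, fy = sx - x0, sy - y0
            # Bilinear weighting of the four surrounding source pixels.
            top = src[y0][x0] * (1 - fx) + src[y0][x1] * fx
            bot = src[y1][x0] * (1 - fx) + src[y1][x1] * fx
            dst[y][x] = top * (1 - fy) + bot * fy
    return dst

src = [[0.0, 1.0], [2.0, 3.0]]
# Upsampling 2x2 to 3x3: the center pixel averages all four sources.
print(bilinear_resize(src, 3, 3))
```

This naive version handles edge coordinates by clamping, which is one of the details that makes a fully correct resizer for arbitrary factors trickier than it looks.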

##### Share on other sites
Ok, so say I managed to render my scene, and then render that again into a texture which is 1/16th the size. When rendering it into the 1/16th size render target, is that when I should use bilinear filtering, so as not to lose information? Or should I render with "no filtering" first to get a pixelated image and then use bilinear filtering?

##### Share on other sites
[quote name='dAND3h' timestamp='1354137086' post='5005076']
Ok, so say I managed to render my scene, and then render that again into a texture which is 1/16th the size. When rendering it into the 1/16th size render target, is that when I should use bilinear filtering, so as not to lose information? Or should I render with "no filtering" first to get a pixelated image and then use bilinear filtering?
[/quote]
Once you lose information you cannot get it back. You should use bilinear filtering (or better, bicubic, if time isn't a problem) all the way.

##### Share on other sites
To go to 1/16th whilst ensuring you read every texel in the source with each one weighted equally, you can do it as two passes, going to 1/4 of the size each time with a single bilinear texture read per pixel, or as one pass with 4 texture reads spaced so that all 16 texels in the footprint get sampled equally.
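The equal-weighting claim above can be checked numerically. In this Python sketch, a simple 2x2 box average stands in for the single corner-aligned bilinear fetch of each 1/4-size pass; two such passes give each of the 16 source texels exactly the same weight as one big 4x4 average.

```python
def box_halve(img):
    # Average each 2x2 block of the image into one output pixel
    # (what a bilinear tap placed at the shared texel corner returns).
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
             for x in range(len(img[0]) // 2)]
            for y in range(len(img) // 2)]

# A 4x4 source holding the values 0..15.
src = [[float(4*y + x) for x in range(4)] for y in range(4)]

two_pass = box_halve(box_halve(src))[0][0]   # two 2:1 reductions
one_pass = sum(sum(row) for row in src) / 16.0  # one 4x4 average

print(two_pass, one_pass)  # both print 7.5
```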

##### Share on other sites
I have some long-winded explanations of point and bilinear filtering in [url="http://mynameismjp.wordpress.com/2012/10/21/applying-sampling-theory-to-real-time-graphics/"]an article on my blog[/url], if you're interested in the nitty-gritty details. Although you'll probably want to read the [url="http://mynameismjp.wordpress.com/2012/10/21/applying-sampling-theory-to-real-time-graphics/"]previous article[/url] if you're not familiar with signal processing basics.

##### Share on other sites
I guess I am not asking the correct questions, because I still don't understand what I want. I don't know if you are familiar with the nvidia Quad.fxh files, but I include those to use certain functions such as:

which creates a texture, where the 1 at the end is the percentage size of the texture.

I am trying to do a bloom filter and I guess I am just not sure what size the textures should be.

##### Share on other sites
[quote name='MJP' timestamp='1354137965' post='5005081']
I have some long-winded explanations of point and bilinear filtering in [url="http://mynameismjp.wordpress.com/2012/10/21/applying-sampling-theory-to-real-time-graphics/"]an article on my blog[/url], if you're interested in the nitty-gritty details. Although you'll probably want to read the [url="http://mynameismjp.wordpress.com/2012/10/21/applying-sampling-theory-to-real-time-graphics/"]previous article[/url] if you're not familar with signal processing basics.
[/quote]

Thanks I will have a look

##### Share on other sites
I got it sorted, thanks. I have another question though: what should the end result of a bright pass filter look like? Currently, my bright pass filter pixel shader looks like this:

[CODE]float4 PS_BrightFilter(QuadVertexOutput In) : COLOR0
{
    float4 rgba = tex2D(downSampledTextureSampler2, In.UV);
    // Rec. 601 luma weights; dot only the RGB part, since the
    // weight vector is a float3 (dotting a float4 against a float3
    // would truncate implicitly or fail to compile).
    float luminance = dot(rgba.rgb, float3(0.299f, 0.587f, 0.114f));

    return rgba * (luminance * 1.0); // the 1.0 was just me testing values with a multiplier
}[/CODE]
And the resulting render target is semi-transparent; the transparency only seems to be where there is not much light. This seems like the correct result, and it makes sense with the name - I just want to make sure I am not misinterpreting it.
Here is an image to show the result I am getting:
[img]http://s9.postimage.org/k7j6olfnz/bright_Pass.png[/img]
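As a sanity check on what that shader computes, here is the same math on the CPU (a hypothetical Python sketch, not part of the fx file): scaling each pixel by its Rec. 601 luminance leaves bright pixels mostly intact and pushes dim ones toward black, which matches the darkened low-light areas in the image.

```python
REC601 = (0.299, 0.587, 0.114)  # same luma weights as the shader

def bright_pass(rgb):
    luminance = sum(c * w for c, w in zip(rgb, REC601))
    return tuple(c * luminance for c in rgb)

print(bright_pass((1.0, 1.0, 1.0)))  # white: luminance ≈ 1, barely changed
print(bright_pass((0.1, 0.1, 0.1)))  # dark grey: strongly suppressed
```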
