
Bilinear filter


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

10 replies to this topic

#1 dAND3h   Members   -  Reputation: 214


Posted 28 November 2012 - 02:19 PM

Can someone tell me what a bilinear filter actually does?

If I render my scene to a texture and then render that texture into a smaller render target, it becomes pixelated. If I apply bilinear filtering when downsizing, will it look less "pixelated" and blend the colours more smoothly?


#2 Bacterius   Crossbones+   -  Reputation: 9280


Posted 28 November 2012 - 02:36 PM

Suppose you scale your texture to 3/4 of its width and height. Then there'll be 9 pixels in the downsized texture for every 16 in the original one. How do you decide which colors end up in the smaller texture? Ideally, you want to use as much information as possible from the original 16 pixels, instead of just throwing it away.

The "no filtering" mode pretty much just selects the nearest pixel in the original texture to use in the smaller one, which means not only are you throwing 7 of the original 16 pixels away (loss of information), you also get nasty spatial artifacts, because which pixel is "nearest" depends on where on the texture you sample (aliasing).

On the other hand, bilinear filtering selects the four nearest pixels in the original texture and computes a weighted average of those four (which is responsible for the slight blurring effect). Then you throw away much less information, and there are fewer aliasing artifacts. The weighting is done one dimension at a time. For instance:

[source]
+-----+
|   x |
|     |
+-----+
[/source]
Here, the + are the pixels of the original texture, and x is the corresponding pixel of the smaller texture. Nearest-neighbor sampling will just take the top-right pixel and not look any further. The bilinear filter, on the other hand, will give more weight to that pixel (as expected) but will still consider the other three. First, it takes the 1D weighted average of the top-left and top-right pixels, based on the x-coordinate of x: here, the top-left pixel gets a weight of roughly 30% and the top-right pixel 70%. Then it does the same with the two bottom pixels. At this point you have two new "colors", one averaged at the top and one at the bottom; average those two based on the y-coordinate of x, and that's your final color for x.

For the mathematically inclined, you are basically doing a bilinear interpolation over the four + with parameter x.
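To make the two-step weighting concrete, here is a minimal CPU sketch of bilinear sampling on a grayscale grid (the function name and grid layout are just for illustration, not any particular API):

```python
def bilinear_sample(img, x, y):
    # img: list of rows of floats; (x, y) are continuous coordinates,
    # with integer values landing exactly on texel centers.
    x0, y0 = int(x), int(y)              # top-left texel of the 2x2 neighborhood
    fx, fy = x - x0, y - y0              # fractional weights in x and y
    # 1D weighted average along x, once for the top row, once for the bottom:
    top    = img[y0][x0]     * (1 - fx) + img[y0][x0 + 1]     * fx
    bottom = img[y0 + 1][x0] * (1 - fx) + img[y0 + 1][x0 + 1] * fx
    # ...then average the two intermediate results along y:
    return top * (1 - fy) + bottom * fy

# A 2x2 texture going from 0 (left column) to 1 (right column):
img = [[0.0, 1.0],
       [0.0, 1.0]]
print(bilinear_sample(img, 0.7, 0.3))    # weighted 70% toward the right column
```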

There are more sophisticated filtering methods, such as the Lanczos filter, which generally produce better results but are too expensive or complicated to use in real time. A bicubic filter is often used when the smaller texture needs to remain sharp: instead of a linear weighting scheme, it uses a polynomial curve with a stronger falloff, which tends to reduce the blurring effect.
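For reference, one common bicubic weighting curve is the Catmull-Rom kernel; here is an illustrative sketch of its weight function (the choice of kernel is an example, not the only bicubic option):

```python
def catmull_rom(x):
    # Catmull-Rom cubic kernel: weight given to a texel at distance |x|
    # from the sample point. The negative lobe between 1 and 2 is what
    # keeps the result sharper than a plain linear (tent) weighting.
    x = abs(x)
    if x < 1.0:
        return 1.5 * x**3 - 2.5 * x**2 + 1.0
    if x < 2.0:
        return -0.5 * x**3 + 2.5 * x**2 - 4.0 * x + 2.0
    return 0.0

print(catmull_rom(0.0))   # full weight at the sample point
print(catmull_rom(1.5))   # small negative weight in the outer lobe
```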

The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.

 

- Pessimal Algorithms and Simplexity Analysis


#3 dAND3h   Members   -  Reputation: 214


Posted 28 November 2012 - 02:54 PM

Thank you for the reply. My goal is to "apply 2 bilinear filter stages to down-size the scene to 1/16th its size". Does that make sense? It seems bilinear filtering doesn't actually resize a texture, so I guess I am not understanding the wording.

Also, see this thread: http://www.gamedev.net/topic/598118-about-downsampling-upsampling/

I am also interested in the answer to the final question there; would you know how to answer that?

#4 Bacterius   Crossbones+   -  Reputation: 9280


Posted 28 November 2012 - 03:02 PM

Thank you for the reply. My goal is to "apply 2 bilinear filter stages to down-size the scene to 1/16th its size". Does that make sense? It seems bilinear filtering doesn't actually resize a texture, so I guess I am not understanding the wording.

No, bilinear filtering is just a sampling scheme to interpolate pixels on a 2D grid. To resize the texture, you multiply the old pixel positions by your resizing factor (if you want to reduce the texture to 1/16th its size, you multiply the x-coordinate of each pixel by 1/4, and the y-coordinate by 1/4). You can then use whatever sampling scheme you like to get the new pixels' colors.

This is actually kind of tricky to implement right for all resizing factors. But luckily, other people have already done it!
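As a rough illustration of that coordinate mapping, here is a naive CPU sketch (real resamplers handle edge cases and large reduction factors more carefully; all names here are illustrative):

```python
def tap_bilinear(img, x, y):
    # Bilinear fetch from img (rows of floats), clamped to stay in bounds.
    h, w = len(img), len(img[0])
    x0 = max(0, min(int(x), w - 2))
    y0 = max(0, min(int(y), h - 2))
    fx, fy = x - x0, y - y0
    top    = img[y0][x0]     * (1 - fx) + img[y0][x0 + 1]     * fx
    bottom = img[y0 + 1][x0] * (1 - fx) + img[y0 + 1][x0 + 1] * fx
    return top * (1 - fy) + bottom * fy

def resize(img, new_w, new_h):
    # Map each destination pixel back into the source by the scale factor,
    # then let the bilinear fetch blend the four nearest source texels.
    sy = len(img) / new_h
    sx = len(img[0]) / new_w
    return [[tap_bilinear(img, (x + 0.5) * sx - 0.5, (y + 0.5) * sy - 0.5)
             for x in range(new_w)]
            for y in range(new_h)]

src = [[0.0, 2.0],
       [4.0, 6.0]]
print(resize(src, 1, 1))   # -> [[3.0]], an equal blend of all four texels
```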



#5 dAND3h   Members   -  Reputation: 214


Posted 28 November 2012 - 03:11 PM

Ok, so say I manage to render my scene and then render it again into a texture which is 1/16th the size. When rendering into the 1/16th-size render target, is that when I should use bilinear filtering, so as not to lose information? Or should I render with "no filtering" first to get a pixelated image and then use bilinear filtering?

#6 Bacterius   Crossbones+   -  Reputation: 9280


Posted 28 November 2012 - 03:13 PM

Ok, so say I manage to render my scene and then render it again into a texture which is 1/16th the size. When rendering into the 1/16th-size render target, is that when I should use bilinear filtering, so as not to lose information? Or should I render with "no filtering" first to get a pixelated image and then use bilinear filtering?

Once you lose information you cannot get it back. You should use bilinear filtering (or better, bicubic, if time isn't a problem) all the way.



#7 AliasBinman   Members   -  Reputation: 432


Posted 28 November 2012 - 03:22 PM

To go to 1/16th while ensuring you read every texel in the source, with each one weighted equally: you can do this as two passes, going to 1/4 each time with a single bilinear texture read, or as one pass with 4 texture reads spaced appropriately, so that all 16 texels in that footprint get sampled equally.
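A CPU sketch of the one-pass variant (function names are illustrative): each of the 4 "fetches" sits at the exact center of a 2x2 quadrant, so a bilinear fetch there averages that quadrant, and the 4 fetches together weight all 16 texels by 1/16.

```python
def fetch_2x2(img, y, x):
    # A bilinear fetch landing exactly between four texels averages the
    # 2x2 block whose top-left texel is (y, x).
    return (img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4.0

def downsample_16x(img):
    # Reduce each 4x4 block to one pixel using 4 fetches; every one of
    # the 16 source texels ends up with equal weight 1/16.
    return [[sum(fetch_2x2(img, by + dy, bx + dx)
                 for dy in (0, 2) for dx in (0, 2)) / 4.0
             for bx in range(0, len(img[0]), 4)]
            for by in range(0, len(img), 4)]

src = [[float(4 * r + c) for c in range(4)] for r in range(4)]
print(downsample_16x(src))   # -> [[7.5]], the mean of the values 0..15
```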

#8 MJP   Moderators   -  Reputation: 11751


Posted 28 November 2012 - 03:26 PM

I have some long-winded explanations of point and bilinear filtering in an article on my blog, if you're interested in the nitty-gritty details. Although you'll probably want to read the previous article if you're not familiar with signal processing basics.

#9 dAND3h   Members   -  Reputation: 214


Posted 28 November 2012 - 03:30 PM

I guess I am not asking the correct questions because I still don't understand what I want. I don't know if you are familiar with the nvidia Quad.fxh files, but I include those to use certain functions such as:

DECLARE_SIZED_QUAD_TEX(theTexture,theTextureSampler,"A16B16G16R16",1)
which creates a texture; the 1 at the end is the size of the texture as a fraction of the screen.

I am trying to do a bloom filter and I guess I am just not sure what size the textures should be.

#10 dAND3h   Members   -  Reputation: 214


Posted 28 November 2012 - 03:31 PM

I have some long-winded explanations of point and bilinear filtering in an article on my blog, if you're interested in the nitty-gritty details. Although you'll probably want to read the previous article if you're not familiar with signal processing basics.


Thanks I will have a look

#11 dAND3h   Members   -  Reputation: 214


Posted 29 November 2012 - 10:12 AM

I got it sorted, thanks. I have another question though: what should the end result of a bright-pass filter look like? Currently, my bright-pass filter pixel shader looks like this:

float4 PS_BrightFilter(QuadVertexOutput In) : COLOR0
{
    float4 rgba = tex2D(downSampledTextureSampler2, In.UV);
    // Rec.601 luma weights; dot only the RGB channels, not the full float4.
    float luminance = dot(rgba.rgb, float3(0.299f, 0.587f, 0.114f));

    return rgba * (luminance * 1.0); // The 1.0 here was just me testing values with a multiplier
}
The resulting render target is semi-transparent; the transparency only seems to be where there is not much light. This seems like the correct result, and it makes sense with the name; I just want to make sure I am not misinterpreting it.
Here is an image to show the result I am getting:
[attached image: bright-pass render target output]
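Note that scaling by luminance, as the shader above does, darkens everything proportionally; another common bright-pass formulation subtracts a threshold, so dim pixels contribute nothing at all to the bloom. A minimal CPU sketch of that thresholded variant (the 0.8 threshold is just an illustrative value):

```python
def bright_pass(rgb, threshold=0.8):
    # Same Rec.601 luma weights as the shader above.
    lum = 0.299 * rgb[0] + 0.587 * rgb[1] + 0.114 * rgb[2]
    if lum < threshold:
        return (0.0, 0.0, 0.0)          # dim pixels are rejected entirely
    # Keep the pixel's hue, scaled by how far its luma exceeds the threshold.
    scale = (lum - threshold) / lum
    return tuple(c * scale for c in rgb)

print(bright_pass((0.1, 0.1, 0.1)))     # dark pixel -> (0.0, 0.0, 0.0)
print(bright_pass((1.0, 1.0, 1.0)))     # bright pixel survives, attenuated
```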






