Questions about Sampling / Reconstruction filter


So when we sample a signal, we take its values at discrete locations, which gives us a digital representation of the signal.

Translating this to a "real-world" example: if I'm sampling a texture in my pixel shader, why do I need to use a reconstruction filter on the sampled signal? Isn't it enough to just sample it, since we're inside the computer anyway? Don't we just need a digital representation of the data? If I sample a texture I can use e.g. a linear sampling function, which corresponds to a triangle filter.

Does that mean that the signal is still discretely sampled, but reconstructed with a triangle filter? The monitor also outputs digital values, so why do we even need to reconstruct the continuous signal again?

Also, is it correct to think that we only need to reconstruct a signal when we want to resample it?


The values of a texture, or any sampled signal, are only defined at the sample points. Let's say at integer points, so that the sample values are x[n] for n = 0, 1, 2, 3 and so on. Whether this is a one-dimensional or two-dimensional signal is not relevant; the theory is the same.

Now, let's say you want to sample your texture at n = 1.25. Clearly this is not possible, since the texture does not have a sample at 1.25; only at 1 and 2. The reconstruction filter is necessary to reconstruct the sample value at n = 1.25. Nearest sampling is simply a rectangular filter of length 1 (covering from n - 0.5 up to, but not including, n + 0.5), and linear sampling is the triangular filter of length 2 you already mentioned. You can have even more complex filters with better theoretical reconstruction, up to perfect reconstruction with an infinitely long sinc filter.
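To make this concrete, here's a minimal C++ sketch of both filters (the function names and the lack of bounds checking are my own simplifications for illustration, not anything from an actual API), reconstructing a 1D signal at the fractional position 1.25 discussed above:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Nearest sampling: a box filter of length 1. The value at a
// fractional position t is the sample covering [n-0.5, n+0.5).
float reconstructNearest(const std::vector<float>& x, float t)
{
    int n = static_cast<int>(std::floor(t + 0.5f));
    return x[n];
}

// Linear sampling: a triangle filter of length 2. The value at t
// is a weighted average of the two neighboring samples.
float reconstructLinear(const std::vector<float>& x, float t)
{
    int   n = static_cast<int>(std::floor(t));
    float f = t - n; // fractional part in [0, 1)
    return (1.0f - f) * x[n] + f * x[n + 1];
}

int main()
{
    std::vector<float> x = { 0.0f, 1.0f, 4.0f, 9.0f }; // samples at n = 0..3
    std::printf("nearest(1.25) = %f\n", reconstructNearest(x, 1.25f)); // x[1] = 1
    std::printf("linear(1.25)  = %f\n", reconstructLinear(x, 1.25f));  // 0.75*1 + 0.25*4 = 1.75
}
```

At 1.25 the box filter just snaps to x[1], while the triangle filter blends x[1] and x[2] with weights 0.75 and 0.25.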

It doesn't matter that you only need to sample the reconstructed texture again at discrete points; whenever you sample between the original sample points, you need to reconstruct the signal there in order to get the texture color at that location.

I see... I guess what I'm having a problem with is making the connection between the theory and what's actually happening in the graphics pipeline.

So basically, when I have my interpolated texture coordinates for an arbitrary pixel, e.g. float2(0.2f, 0.7f), that means I have to reconstruct the signal with a filter and then resample it at those coordinates, correct?

Yes, the reconstruction filter recreates the original image (which is, say, the real-world scene that was subsequently sampled by a digital camera) between the sample points. If you sample between sample points, then reconstruction is necessary to obtain a sample value at that point. Different reconstruction filters simply apply different models of how the reconstructed sample values are approximated.
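For the 2D texture case, here's a hedged sketch of what a linear (bilinear) sampler conceptually does; the Texture struct, single-channel texels, and clamp-to-edge addressing are assumptions for illustration, not how the hardware actually implements it:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Texture
{
    int width, height;
    std::vector<float> texels; // single channel, row-major

    // Clamp-to-edge addressing (an assumption; wrap modes differ).
    float fetch(int x, int y) const
    {
        x = std::clamp(x, 0, width - 1);
        y = std::clamp(y, 0, height - 1);
        return texels[y * width + x];
    }
};

// Reconstruct with a triangle filter along each axis at the single
// point (u, v), u and v in [0, 1] -- i.e. bilinear sampling.
float sampleBilinear(const Texture& tex, float u, float v)
{
    // Map normalized coordinates to texel space; the -0.5 places
    // the sample points at texel centers.
    float x  = u * tex.width - 0.5f;
    float y  = v * tex.height - 0.5f;
    int   x0 = static_cast<int>(std::floor(x));
    int   y0 = static_cast<int>(std::floor(y));
    float fx = x - x0;
    float fy = y - y0;

    // Blend the four surrounding samples.
    float top    = (1 - fx) * tex.fetch(x0, y0)     + fx * tex.fetch(x0 + 1, y0);
    float bottom = (1 - fx) * tex.fetch(x0, y0 + 1) + fx * tex.fetch(x0 + 1, y0 + 1);
    return (1 - fy) * top + fy * bottom;
}
```

Calling sampleBilinear(tex, 0.2f, 0.7f) would be the software analogue of sampling at the float2(0.2f, 0.7f) coordinates from the question above.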

To add to Brother Bob's explanation... this process is generally known as resampling. It's exactly the same process that happens when you resize an image in Photoshop. The basic steps go something like this:

1. You have some continuous signal

2. You discretely sample that signal

3. You use a reconstruction filter to re-create a continuous signal

4. You sample the reconstructed continuous signal at different sample points

The one thing to keep in mind is that steps #3 and #4 usually happen at exactly the same time in computer graphics. You don't reconstruct the entire continuous signal; you just know where you want to resample, so you reconstruct the continuous signal at that exact sample point. That reconstructed value then becomes the value you store in your new discrete representation.
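Here's a small C++ sketch of that fused step #3/#4 for a 1D resize (the resize1D helper is invented for this example and assumes at least two samples on each side): each output sample reconstructs the input with a triangle filter at exactly one point, then stores the result.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Resize a 1D signal (assumes src.size() >= 2 and dstLen >= 2).
// Steps #3 and #4 are fused: for each output position we
// reconstruct the continuous signal with a triangle filter at
// just that one point, then store the result.
std::vector<float> resize1D(const std::vector<float>& src, int dstLen)
{
    std::vector<float> dst(dstLen);
    float scale = static_cast<float>(src.size() - 1) / (dstLen - 1);
    for (int i = 0; i < dstLen; ++i)
    {
        float t  = i * scale;                         // position in the source
        int   n  = static_cast<int>(std::floor(t));
        float f  = t - n;
        int   n1 = std::min<int>(n + 1, src.size() - 1);
        dst[i]   = (1.0f - f) * src[n] + f * src[n1]; // reconstruct only here
    }
    return dst;
}
```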

Another question related to the same topic, if I may:

Seeing as rasterization is inherently a sampling task, and that sampling is the reason geometric aliasing exists, does that mean this form of aliasing does not exist when using raytracing to render a scene?

Raytracing is just as much a discrete sampling task as rasterization: you trace a ray and sample the scene discretely at the intersection point. That means that when tracing a single ray per screen pixel you'll get the very same aliasing artifacts as if you had rasterized the scene.
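As a sketch of why (the toy traceRay "scene" below is a made-up hard diagonal edge, standing in for real ray/geometry intersection code): with one ray per pixel, the scene function is point-sampled at pixel centers, i.e. on exactly the grid a rasterizer uses.

```cpp
#include <vector>

struct Color { float r, g, b; };

// Toy "scene": a hard diagonal edge -- white above the line y = x,
// black below. A stand-in for real ray/geometry intersection code.
Color traceRay(float x, float y)
{
    return (y > x) ? Color{ 1.0f, 1.0f, 1.0f } : Color{ 0.0f, 0.0f, 0.0f };
}

// One ray per pixel: the scene is point-sampled at pixel centers,
// the very same sample grid rasterization uses, so the diagonal
// edge shows identical staircase aliasing in the output.
std::vector<Color> render(int width, int height)
{
    std::vector<Color> image(width * height);
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            image[y * width + x] = traceRay(x + 0.5f, y + 0.5f);
    return image;
}
```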
