Reconstructing functions using basis functions

I'm trying to decompose a function and then reconstruct it using basis functions.

c[n] = Integral{ basisFunc[n](x) * OriginalFunc(x) }

However, I've only successfully managed to do this with step basis functions.
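
Roughly, what I'm doing looks like this (a Python/numpy sketch of my current approach, assuming normalized box functions on [0, 1] and a simple Riemann sum for the integral; the example function is made up):

import numpy as np

# Sketch: decompose and reconstruct with N disjoint, normalized step (box)
# basis functions on [0, 1], approximating the integral by a Riemann sum.
x = np.linspace(0.0, 1.0, 1000)
dx = x[1] - x[0]
f = np.sin(2 * np.pi * x) + 0.5 * x              # example "OriginalFunc"

N = 8
basis = []
for n in range(N):
    box = np.where((x >= n / N) & (x < (n + 1) / N), np.sqrt(N), 0.0)  # height sqrt(N) so <b, b> = 1
    basis.append(box)

# c[n] = Integral{ basisFunc[n](x) * OriginalFunc(x) }
c = [np.sum(b * f) * dx for b in basis]

# Reconstruction: sum of c[n] * basisFunc[n](x) -- a piecewise-constant approximation of f
reconstruction = sum(cn * b for cn, b in zip(c, basis))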

If I use others like below, then I always get zeros on my reconstruction where the basis functions are zero.
[Attachment: Untitled-2.jpg — non-overlapping sawtooth basis functions]


This makes sense because the integral used to de-construct the original function with the basis functions multiplies the two. So I figured I would need to have the basis functions overlap. But if I overlap the sawtooth functions like below,
[Attachment: Untitled-3.jpg — overlapping sawtooth basis functions]


Then I no longer meet the criterion that the functions are orthonormal, because the integral of the product of any two overlapping basis functions won't equal 0.

So I guess my question is: how do I correctly use shapes like triangles, sines, and others as my basis functions?
OK, so you want to decompose the function with an inner product (the integral in this case). For that to work you need an orthonormal basis. You are enforcing orthogonality by giving the basis functions disjoint supports. But now you have the problem that the basis functions don't actually form a basis for the function space you want to use them for. Basically, what you get is a projection of your function onto the space spanned by those sawtooth functions.

Note that basis functions don't need to have disjoint support. One of the cases you mentioned (sine functions) actually leads to the Fourier series, and that works because, for example, the integral from 0 to 2π of sin(x)*sin(2x) is also zero, without either of the functions being zero over the interval.
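
A quick numerical check of that (just a Python/numpy sketch; the sample count is arbitrary):

import numpy as np

# sin(x) and sin(2x) are orthogonal on [0, 2*pi] even though neither function
# is zero over most of the interval.
x = np.linspace(0.0, 2.0 * np.pi, 100000)
print(np.trapz(np.sin(x) * np.sin(2.0 * x), x))   # ~0, up to numerical error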

At the moment I don't see how that would work with sawtooth functions, though.
Your basis doesn't need to be orthonormal. That just makes things a little easier. Ultimately, what you want to do is solve the normal equations. This involves inverting the Gramian (matrix of inner products) of your basis functions, which is the identity matrix in the case of an orthonormal basis.

OK, here's the idea. You have a given function f, which you want to approximate using a linear combination of the given basis functions f1,...,fn. That is, you want to solve,

f ~= c1 f1 + ... + cn fn

for c1,...,cn in a least-squares sense. (Naturally, if f does lie in the span of f1,...,fn, then the equation will hold exactly.) Taking the inner product of both sides of this equation with respect to each of your basis functions gives you the normal equations (here <a,b> denotes the L2 inner product of two functions a and b):

<f1, f1> c1 + <f1, f2> c2 + ... + <f1, fn> cn = <f1, f>
<f2, f1> c1 + <f2, f2> c2 + ... + <f2, fn> cn = <f2, f>
...
<fn, f1> c1 + <fn, f2> c2 + ... + <fn, fn> cn = <fn, f>


or just

G c = b

where G is the Gramian of your basis, c=(c1,...,cn), and b=(<f1,f>,...,<fn,f>).
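
To make this concrete, here's a rough Python/numpy sketch of the whole procedure with overlapping triangular "hat" basis functions on [0, 1]. The target function, the number of hats, and the use of np.trapz for the integrals are all just illustrative choices:

import numpy as np

# Least-squares fit with a non-orthonormal basis by solving the normal
# equations G c = b, where G is the Gramian of the basis functions.
x = np.linspace(0.0, 1.0, 2000)
f = np.exp(-x) * np.sin(4 * np.pi * x)            # example target function

def hat(x, center, width):
    # Triangle peaking at `center`, falling to zero at center +/- width.
    return np.clip(1.0 - np.abs(x - center) / width, 0.0, None)

n = 9
centers = np.linspace(0.0, 1.0, n)
width = centers[1] - centers[0]                   # neighbours overlap -> not orthogonal
basis = [hat(x, c0, width) for c0 in centers]

# G[i, j] = <f_i, f_j> and b[i] = <f_i, f>, with <a, b> = Integral{ a(x) b(x) dx }
G = np.array([[np.trapz(bi * bj, x) for bj in basis] for bi in basis])
b = np.array([np.trapz(bi * f, x) for bi in basis])

coeffs = np.linalg.solve(G, b)                    # solve G c = b
approx = sum(ci * bi for ci, bi in zip(coeffs, basis))

Because the hats overlap, G is not the identity, but solving G c = b still gives the least-squares coefficients; with an orthonormal basis G would reduce to the identity and you'd be back to your original formula.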
Emergent. Sorry, but I'm not sure how that helps me. I'm wanting to know how I can use different basis functions. Are you saying that I can just plug any basis function in?

- What effect does using different basis functions have? Is it just a case of picking one that looks to give the best results, such as using smooth basis functions for smooth functions, and maybe something like a sawtooth for functions that are piecewise linear?

Do you know of any reading material that talks through this kind of stuff right from the basics?
Is there any suggested reading material that anyone can recommend, preferably a book?

Emergent. Sorry, but I'm not sure how that helps me. I'm wanting to know how I can use different basis functions. Are you saying that I can just plug any basis function in?


The functions f1,...,fn can be whatever functions you want, so long as they are square-integrable (i.e., if you integrate their square over the given interval, you get a finite number). If they're bounded and continuous, this is guaranteed. Things are simplest when the basis is linearly independent, which means that none of your basis functions can be written as a linear combination of the others. If they are not linearly independent, then the optimal coefficients will not be unique, and the matrix G will not be invertible. In that case you have redundant basis functions that can be thrown out. Whereas if they are linearly independent, then G will be invertible and you can just solve the system of equations the way you're used to, e.g. by Gaussian elimination.
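
To see the linear-dependence issue concretely, here's a small sketch (Python/numpy; the functions are made up, with the third deliberately a sum of the other two):

import numpy as np

# If one basis function is a linear combination of the others, the Gramian G
# becomes (numerically) singular.
x = np.linspace(0.0, 1.0, 1000)
f1, f2 = x, x**2
f3 = f1 + f2                                      # redundant on purpose

basis = [f1, f2, f3]
G = np.array([[np.trapz(a * b, x) for b in basis] for a in basis])
print(np.linalg.svd(G, compute_uv=False))         # last singular value is ~0: G is not invertible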

An introductory linear algebra book like this one (that's the one I used) should help. Books with titles like "An Introduction to Hilbert Spaces" would help too -- maybe this one? (Though I can't vouch for it personally, it does have a good rating.)

The basic idea is just that you can think of functions as vectors, and the integral of the product of two functions as their inner product. Once you have those things -- vectors and inner products -- you can treat functions more-or-less the same as you would vectors in n dimensions. Metaphors from 3 dimensions become useful for understanding the things you do to functions. You can draw geometric pictures, with functions being "points" in some space, to help yourself work things out.

What makes a good set of basis functions? As you noted, it depends on the data. One approach is to collect a bunch of data, and to then determine what basis functions would best approximate it. That's what Principal Component Analysis does.
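
As a very rough sketch of that idea (Python/numpy, with synthetic data purely to illustrate the mechanics):

import numpy as np

# PCA sketch: collect sampled functions as rows, subtract the mean, and take
# the leading right singular vectors as data-driven basis functions.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
data = np.array([np.sin(2 * np.pi * (x + rng.uniform())) + 0.1 * rng.standard_normal(x.size)
                 for _ in range(50)])             # 50 noisy, shifted sine curves

mean = data.mean(axis=0)
U, S, Vt = np.linalg.svd(data - mean, full_matrices=False)
basis = Vt[:3]                                    # first few principal components, sampled on x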
Thank you, Emergent. I shall pick up that book. Do you also know of any books that focus on signal analysis and reconstruction? There are lots out there on the Fourier series, but I was wanting something that starts from the beginning and then builds up to the more complicated Fourier series/transform.
I would try to learn Linear Algebra first, if you don't have a good handle on it. You can then think of functions as vectors (you can add them, subtract them, and scale them, so they are vectors) with an inner product given by the integral of the product. Approximating a function as a linear combination of functions from some basis is the same operation as orthogonal projection onto a linear subspace. In the case of Fourier analysis, you have a sequence of orthogonal functions, and you can consider the sequence of linear subspaces generated by the first n functions. The projections of a particular function onto those linear subspaces are the approximations given by Fourier analysis.
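
A small sketch of that picture (Python/numpy; the square wave and the sine basis on [0, 2*pi] are just an illustrative choice):

import numpy as np

# Project a square wave onto the span of the first n sine functions. The sines
# are orthogonal on [0, 2*pi], so each coefficient can be computed independently.
x = np.linspace(0.0, 2.0 * np.pi, 5000)
f = np.sign(np.sin(x))                            # square wave

def partial_sum(n):
    approx = np.zeros_like(x)
    for k in range(1, n + 1):
        basis_k = np.sin(k * x)
        ck = np.trapz(f * basis_k, x) / np.trapz(basis_k * basis_k, x)   # <f, b_k> / <b_k, b_k>
        approx += ck * basis_k
    return approx

s5, s25 = partial_sum(5), partial_sum(25)         # the approximation improves as n grows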

I find it very helpful to think in these terms. Unfortunately, the only way to learn Linear Algebra well that I know of is going to a first-year class in college. But perhaps there are good self-teaching materials I am not aware of.

I would try to learn Linear Algebra first, if you don't have a good handle on it. [...]
I find it very helpful to think in these terms. Unfortunately, the only way to learn Linear Algebra well that I know of is going to a first-year class in college. But perhaps there are good self-teaching materials I am not aware of.


Totally agree.

Maybe try MIT OpenCourseWare? They've got lectures from Strang himself. He's a little bit of a nervous lecturer, but you could do a lot worse.

I've also heard good things about Khan Academy.

If you do watch video lectures, I'd accompany them by doing some example problems and reading a text. The first chapters of Strang's book are a little heavy on Gaussian elimination, but I think it improves in the later chapters, and I'd still recommend it.
