[Math] Classes of functions with specific product integration property

Does any of you know of known sets of real-valued continuous functions whose members g(x) and h(x) all satisfy the following property: when integrated in product, they yield the same result as when integrated separately and then multiplied. I'm curious because, in one of the color algorithms I have under development, I have found that if I could approximate my current functions with functions of such a class, the calculations would be simplified tenfold. What are your thoughts on this?

If the functions g and h are normalized, then replacing the = with <= gives you a special case of Hölder's inequality with p = q = 1. Before moving on to this it might be helpful to look at the p = q = 2 case, known as the Cauchy-Schwarz inequality, which is easier to think about because it is just a continuous version of the familiar inequality involving the dot product, |a.b| <= |a||b|.
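For reference, the integral form of the Cauchy-Schwarz inequality reads:

$$\left| \int f(x)\,g(x)\,dx \right| \;\le\; \left( \int f(x)^2\,dx \right)^{1/2} \left( \int g(x)^2\,dx \right)^{1/2}.$$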

The Cauchy-Schwarz inequality is an equality whenever a and b are parallel, i.e. linearly dependent, and this can be extended to a similar concept for continuous functions.

For the p = q = 1 case, something similar holds except using a different norm, and unfortunately I believe it's only an equality when g and h are constant.

@sQuid
Thanks for your reply. I'm not sure if this will help me, but I will certainly look into it. I've only had a crash course in L2 spaces and I'm not familiar with the general properties of Lp spaces. I'll read up on this before I get back.

@etothex
Yes, I've seen the quite elegant solution to the Gaussian integral a few times, but I didn't know about Fubini's theorem. I always thought the step where the integral is squared was a bit hand-wavy, but now I see that it is well founded. [smile]

So basically it states that the product of two integrals equals the product of the integrands integrated with respect to the product of their measures, under the assumption that the product integral is absolutely convergent. Does this always hold? Thanks for the reply.
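For anyone following along, this is exactly the justification for the squaring step in the Gaussian integral; written out:

$$\left( \int_{-\infty}^{\infty} e^{-x^2}\,dx \right)^2 = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-(x^2+y^2)}\,dx\,dy = \int_0^{2\pi} \int_0^{\infty} e^{-r^2}\, r\,dr\,d\theta = \pi,$$

where Fubini's theorem licenses rewriting the squared integral as a double integral, which is then evaluated in polar coordinates.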

[Edited by - staaf on March 3, 2006 2:21:19 AM]

Another approach:
All continuous functions on a closed interval can be approximated by uniform-sized step functions under the L^n (0 < n < infinity) norms.

Integrals of uniform step functions look a hell of a lot like sums.

If a_n are your steps, and C the width of your steps, then
int f =~ C sum a_n

so
int fg = C^2 sum (a_n b_n)
while
int f int g = C^2 sum a_n sum b_n
= C^2 sum a_n b_n + 2 C^2 sum a_j b_i with j!=i
= int fg + 2 C^2 sum a_j b_i with j!=i
(Note: this looks a hell of a lot like variance from probability.)
or even better:
= 2 int fg - int fg + 2 C^2 sum a_j b_i with j!=i
= - int fg + 2 C^2 sum a_j b_i

so int f int g - int fg =~
- 2 int fg + 2 C^2 sum a_j b_i

Let's look at
2 C^2 sum a_j b_i
= 2 C sum (a_j C sum b_i)
C sum b_i =~ int g
= 2 C sum (a_j int g)
= 2 int g C sum a_j
=~ 2 int g int f

so int f int g - int fg =~
- 2 int fg + 2 int g int f
= 2 int g int f - 2 int fg
=> int f int g - int fg = 0
=> int f int g = int fg

Strange. Didn't expect that. Probably made a mistake.
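For what it's worth, a quick numerical check (a minimal sketch in Python, using f(x) = g(x) = x on [0, 1] as an assumed test case) shows the two sides really do differ:

```python
# Compare int(f*g) against (int f)(int g) using uniform step sums,
# with f(x) = g(x) = x on [0, 1].
# Exact values: int_0^1 x^2 dx = 1/3, while (int_0^1 x dx)^2 = 1/4.

N = 100_000                               # number of uniform steps
C = 1.0 / N                               # step width
xs = [(i + 0.5) * C for i in range(N)]    # midpoint of each step

f = xs                                    # samples of f(x) = x
g = xs                                    # samples of g(x) = x

int_fg = C * sum(fi * gi for fi, gi in zip(f, g))
int_f_times_int_g = (C * sum(f)) * (C * sum(g))

print(int_fg)              # ~0.3333 (= 1/3)
print(int_f_times_int_g)   # ~0.25   (= 1/4)
```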

@NotAYakk
What exactly are you doing in the first part of your derivation?
This is what I arrive at:

int f int g = C^2 (sum a_n)(sum b_n) = C^2 sum a_i b_j

Compare to:

(a1 + a2 + a3)(b1 + b2 + b3) = a1b1 + a1b2 + a1b3 + a2b1 + a2b2 + a2b3 + a3b1 + a3b2 + a3b3

You have a 2 sneaking in there, and I can't see where it's coming from.

I hope you are aware that if this held, then every continuous function defined on a closed interval that can be approximated by such step functions would satisfy the equation at the top. [smile] This is very strong, and my intuition tells me it can't be so, since it is easy to find functions of the given type which do not satisfy the equation.
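For instance, the constant functions f(x) = g(x) = 1 on [0, 2] already fail:

$$\int_0^2 f(x)\,g(x)\,dx = 2, \qquad \int_0^2 f(x)\,dx \int_0^2 g(x)\,dx = 4.$$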

@staaf
I believe you meant int{g(x)h(x)dx} = int{g(x)dx}int{h(x)dx}?

Let f(x) and g(x) be polynomials (you can always use a Taylor expansion). In this case the rhs doesn't have an x^1 term while the lhs does (assuming the first term is x^0). Obviously we could set the coefficients of the x^0 term to zero in both f and g, but this would cause the same situation to propagate to the next term. Therefore I think it's probable that no such functions f and g exist.
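Spelled out for the lowest-order terms (taking the integration constants to be zero): if f(x) = a_0 + a_1 x + ... and g(x) = b_0 + b_1 x + ..., then

$$\int fg\,dx = a_0 b_0\,x + O(x^2), \qquad \int f\,dx \int g\,dx = \bigl(a_0 x + O(x^2)\bigr)\bigl(b_0 x + O(x^2)\bigr) = a_0 b_0\,x^2 + O(x^3),$$

so matching the x^1 coefficients forces a_0 b_0 = 0, and the same argument repeats one degree higher.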

This reasoning has at least one loophole: a series having terms from -inf to +inf (this would restrict our domain to (0, inf)). I didn't bother to think this through further, but it is easy to get rid of the integrals by expressing the series as sum_{n=-inf}^{inf} c_n*x^n and integrating. From there it is easy to form the relations between the coefficients.

Quote:
Original post by staaf
@NotAYakk
What exactly are you doing in the first part of your derivation?
This is what I arrive at:

int f int g = C^2 (sum a_n)(sum b_n) = C^2 sum a_i b_j


What I was doing was utter crap. =) Did a square instead of a multiply. Which also explains why it looks like variance. ~_~

Quote:
I hope you are aware that if this held, then every continuous function defined on a closed interval that can be approximated by such step functions would satisfy the equation at the top. [smile] This is very strong, and my intuition tells me it can't be so, since it is easy to find functions of the given type which do not satisfy the equation.


*nod*, thought I had made a mistake. :)

Going to go and fix the math.

Quote:
Original post by Winograd
I believe you meant int{g(x)h(x)dx} = int{g(x)dx}int{h(x)dx}?

Yes. My mistake. I didn't realize it wouldn't be the same thing, so I used y to distinguish the two integrals.

Anyway, you are right about polynomials not being eligible as g(x) or h(x), but there are lots of other kinds of continuous functions that behave differently when integrated. I think it is wrong to rule out the existence of such functions based only on the result for polynomials. The functions that actually have this property are probably quite few in number, but nevertheless I'm curious about them.

Quote:
Original post by staaf
Quote:
Original post by Winograd
I believe you meant int{g(x)h(x)dx} = int{g(x)dx}int{h(x)dx}?

Yes. My mistake. I didn't realize it wouldn't be the same thing, so I used y to distinguish the two integrals.

Anyway, you are right about polynomials not being eligible as g(x) or h(x), but there are lots of other kinds of continuous functions that behave differently when integrated. I think it is wrong to rule out the existence of such functions based only on the result for polynomials. The functions that actually have this property are probably quite few in number, but nevertheless I'm curious about them.


Then you should restrict your search to functions that do not satisfy the conditions of Taylor's theorem. This rules out all continuous functions with infinitely many continuous derivatives (the derivatives may vanish, of course).

Analysis may get quite involved...

Guest Anonymous Poster
Quote:
Original post by Winograd

Then you should restrict your search to functions that do not satisfy the conditions of Taylor's theorem. This rules out all continuous functions with infinitely many continuous derivatives (the derivatives may vanish, of course).


Except for g(x) = h(x) = 0

Quote:
Original post by Winograd
Then you should restrict your search to functions that do not satisfy the conditions of Taylor's theorem. This rules out all continuous functions with infinitely many continuous derivatives (the derivatives may vanish, of course).
You're right. I didn't consider that. Thanks.

Quote:
Original post by Anonymous Poster
Except for g(x) = h(x) = 0
Obviously, but it'd be hard to approximate any other function with that.

Guest Anonymous Poster
Quote:
Original post by staaf
Quote:
Original post by Anonymous Poster
Except for g(x) = h(x) = 0
Obviously, but it'd be hard to approximate any other function with that.


You're just not trying hard enough then [smile]

If I didn't make any mistakes, then your functions should also satisfy the following equation:

int{h(x)dx}/h(x) + int{g(x)dx}/g(x) = 1

You can arrive at the result yourself by differentiating both sides of the original equation, which by the product rule gives

h(x)g(x) = h(x)*int{g(x)dx} + g(x)*int{h(x)dx}

and then dividing through by h(x)g(x).
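Written out (reading the integrals as antiderivatives):

$$\frac{d}{dx}\Bigl[\int gh\,dx\Bigr] = \frac{d}{dx}\Bigl[\int g\,dx \int h\,dx\Bigr] \;\Rightarrow\; gh = g\int h\,dx + h\int g\,dx \;\Rightarrow\; 1 = \frac{\int h\,dx}{h} + \frac{\int g\,dx}{g}.$$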

I'm not sure if this is useful at all, but at least it gives some insight into the problem, provides a necessary identity, and IMHO it looks beautiful ;).

EDIT: There was an error in some equations. The lhs was ln(h(x)), while it should have been ln(int{h(x)dx}).

OK, I have one family of solutions (there are perhaps more). From the previous formulation, with a little work, we can get:

h(x) / int{ h(x)dx } = g(x) / ( g(x) - int{ g(x)dx } )

Integrating both sides gives:

ln(int{h(x)dx}) + C0 = int{ g(x) / (g(x) - int{ g(x)dx } )dx } (1)

OK, if I want an easy solution for the rhs I would demand that

g(x) = diff{ g(x) - int{g(x)dx} }

If this is satisfied, then the rule applied to the lhs (the integrand has the form u'/u, so the integral is ln(u)) can be used for the rhs too.

So we get g(x) = g'(x) - g(x) => g'(x) = 2g(x).
I came up with only one function that satisfies this: g(x) = exp(2*x) (2)

Now substituting (2) to (1) we get

ln(int{h(x)dx}) + C0 = ln(exp(2*x)) + C1 => ln(int{h(x)dx}) = 2*x + C => int{h(x)dx} = A*exp(2*x), where A is some arbitrary constant.

Thus g(x) = h(x) = exp(2*x) will do. Also h(x) = -2*exp(2*x) and g(x) = 2*exp(2*x) will do, and so on...
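Quick check for g(x) = h(x) = exp(2x), taking the integration constants to be zero:

$$\int e^{2x} e^{2x}\,dx = \frac{e^{4x}}{4} = \left(\frac{e^{2x}}{2}\right)^{\!2} = \int e^{2x}\,dx \int e^{2x}\,dx.$$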

Unfortunately this is not very useful for approximation, and even more unfortunately it pretty much voids my Taylor polynomial argument :(

Generally h(x) = diff(exp( int{ g(x) / (g(x) - int{ g(x)dx } + C)dx } ), x), so the trick is to be able to calculate the outer integral. Sometimes it is easy when C is 0, but this doesn't always yield a correct solution. Note that I'm not sure if C is the only constant that has to be introduced; the constants appear from the integrations.

[Edited by - Winograd on March 6, 2006 2:49:48 AM]

Quote:
Original post by Winograd
If I didn't make any mistakes, then your functions should also satisfy the following equation:

int{h(x)dx}/h(x) + int{g(x)dx}/g(x) = 1

...

I'm not sure if this is useful at all, but at least it gives some insight into the problem, provides a necessary identity, and IMHO it looks beautiful ;).

I believe it is very useful. It states a manageable necessary condition for the original equation.
Quote:
Original post by Winograd
Thus g(x) = h(x) = exp(2*x) will do. Also h(x) = -2*exp(2*x) and g(x) = 2*exp(2*x) will do, and so on...

Yes, and this can be generalized to all functions

g(x) = A*exp(p*x)
h(x) = B*exp(q*x)

as long as

1/p + 1/q = 1 [1] and (A, B, p, q != 0)

since for g(x)

int{g(x)dx}/g(x) = int{ A*exp(p*x) dx } / (A*exp(p*x)) = ((A/p)*exp(p*x)) / (A*exp(p*x)) = 1/p

and the same goes for h(x).

Moreover for the original equation:
If we use the same g(x) and h(x) as above we have at the left hand:

int{g(x)h(x)dx} = int{A*B*exp((p+q)*x)dx} = (A*B)/(p+q) * exp((p+q)*x)

and at the right hand:

int{g(x)dx}int{h(x)dx} = (A/p * exp(p*x))(B/q * exp(q*x)) = (A*B)/(p*q) * exp((p+q)*x)

Now if we rearrange [1] we get that p+q = p*q, and by substituting this into the left or right hand side, the equation is satisfied.

Still, as you say, a single A*exp(p*x) is not very useful for approximation. A linear combination of such functions would do better. Hmm...
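A symbolic spot check of this family (a minimal sketch using SymPy; the pair p = 3, q = 3/2, which satisfies 1/p + 1/q = 1, is just an assumed example):

```python
# Symbolic spot check: for g(x) = A*exp(p*x), h(x) = B*exp(q*x) with
# 1/p + 1/q = 1, verify int{g*h dx} = int{g dx} * int{h dx}
# (integration constants taken as zero).
import sympy as sp

x = sp.symbols('x')
A, B = sp.symbols('A B', nonzero=True)
p = sp.Integer(3)          # example exponent pair with 1/p + 1/q = 1
q = sp.Rational(3, 2)
assert 1/p + 1/q == 1

g = A * sp.exp(p * x)
h = B * sp.exp(q * x)

lhs = sp.integrate(g * h, x)                    # int g*h dx
rhs = sp.integrate(g, x) * sp.integrate(h, x)   # int g dx * int h dx

print(sp.simplify(lhs - rhs))  # prints 0, so the identity holds
```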

Spotted the error today: the lhs was missing an integral. I have corrected it in my previous post.

Using Maple I found that when g is a first-order polynomial, h has the following solution:

g(x) = a + b*x
h(x) = -2*(a+b*x)*exp((-ln(-2*a-2*b*x+2*a*x+b*x^2)*(b^2+a^2)^(1/2)+2*b*arctanh((-b+a+b*x)/(b^2+a^2)^(1/2)))/(b^2+a^2)^(1/2))/(-2*a-2*b*x+2*a*x+b*x^2)

Note also that the condition or identity I gave earlier isn't exact: it can hold for g and h that do not satisfy the original equation. It is a necessary but not a sufficient condition.
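One way to see why it is only necessary: the identity was obtained by differentiating the original equation, and integrating back only recovers it up to a constant,

$$\int gh\,dx = \int g\,dx \int h\,dx + C,$$

so the identity holds for any value of C, while the original equation additionally requires C = 0 (for some choice of antiderivatives).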

