This topic is now archived and is closed to further replies.

GaulerTheGoat

The truth about dy/dx notation.

Somebody posted on the meaning of dy/dx. So many people don't get this - including me, maybe... Here is the truth, as I now understand it. This is mathematically advanced and has nothing to do with programming, BTW.

The reason you are confused is the bad notation going on here. To understand this, erase from your brain the interpretation of dy/dx as the derivative of the function y(x) with respect to x. Let y'(x) = lim h->0 [y(x+h) - y(x)]/h be the usual definition of the derivative, but write it as y' instead of dy/dx.

Now, if y is a function of x, let dy be a function of x and an arbitrary number Δx, such that dy = y'(x)Δx (call this equation (*)). Δx can be any number, not just an infinitesimal, and likewise with dy. Now consider the case y(x) = x (the identity function). Then dx = x'Δx = Δx by direct substitution into equation (*) (since x' = 1). Hence, since dx = Δx, we can write dy = y'(x)dx.

In this definition, dy and dx are arbitrary numbers (of course, dy depends on dx). They are not infinitely small, they are not going to any limit; they are just two numbers whose quotient is y'(x). Therefore the quotient dy/dx IS a fraction and can be treated as such. It DOES NOT equal 0/0; in fact, 0/0 is simply a notation for an indeterminate form - it has no numerical value or meaning. The problem is that some books and teachers define dy/dx as the limit of Δy/Δx as Δx -> 0 because it is kind of traditional to do so, but it is definitely confusing - right?

Also, the dx at the end of an integral HAS NO VALUE!!! It tells you which variable of the integrand is to be integrated. It cannot cancel out with anything that looks like it, because it HAS NO VALUE. It would be like canceling two happy faces or something. The reason it looks like you can cancel them
quote:
f(y) dy/dx = g(x)
Integrate both sides with respect to x:
I( f(y) dy/dx dx ) = I( g(x) dx )
I( f(y) dy dx/dx ) = I( g(x) dx )
now the rate of change of x with respect to x is obviously 1, so
I( f(y) dy ) = I( g(x) dx )
in this problem is that the LHS is originally being integrated with respect to x (hence the dx), but vivi obviously wanted to integrate it with respect to y. Hence he wants to make a change of variable. To do this, he must apply the chain rule (or the Jacobian in higher dimensions). The Jacobian in this case is dx/dy, and by multiplying THIS factor onto the integrand, IT cancels the dy/dx, leaving only f(y), now integrated with respect to y (hence the dy). That is it! Bad notation. Hope this helps people!
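The change-of-variable step can be sanity-checked numerically. A minimal sketch (the example functions y(x) = x^2 and f(u) = 3u^2 are my own choice, not from the thread): integrating f(y(x))·y'(x) with respect to x over [0, 1] should give the same value as integrating f(u) with respect to u over [y(0), y(1)].

```python
# Numeric check of the change of variable (u-substitution).
# Example functions are my own: y(x) = x^2, f(u) = 3u^2.
# Then I( f(y) dy/dx dx, x = 0..1 ) should equal I( f(u) du, u = 0..1 ) = 1.

def riemann(g, a, b, n=10000):
    """Midpoint Riemann sum of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

y  = lambda x: x * x          # y(x)
dy = lambda x: 2 * x          # y'(x)
f  = lambda u: 3 * u * u      # integrand in the new variable

lhs = riemann(lambda x: f(y(x)) * dy(x), 0.0, 1.0)  # integrate w.r.t. x
rhs = riemann(f, y(0.0), y(1.0))                    # integrate w.r.t. u = y

print(lhs, rhs)   # both close to 1.0
```

The two sums agree because the Jacobian factor dy/dx is built into the left-hand integrand, exactly as in the quoted derivation.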

/me digs out his 2 cents.

The dy/dx notation was introduced by Leibniz in his original papers on calculus. It makes various properties of the derivative like
    dy/dz = (dy/dx)*(dx/dz) - chain rule
and
    (dy/dx)*(dx/dy) = 1
transparent.

These properties do not hold *because* dy/dx behaves like a fraction; it is the other way around. You do not prove the chain rule by cancelling the dx on the right-hand side. However, once properties like these have been demonstrated, the fractional notation becomes quite natural.

The same notation is natural to express relations between different differentials dx and dy in an integral like
    dy = (dy/dx) dx

Once again, the fraction notation reflects properties of differentials; the properties aren't determined by the notation.
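Concretely, with the definition from the first post (dy = y'(x)·dx for an arbitrary finite dx), the quotient dy/dx is an honest fraction for any nonzero dx. A trivial sketch, with an example function of my own (y = x^3):

```python
# dy defined as y'(x) * dx for an ARBITRARY finite dx - no limit involved.
# Example function is mine: y(x) = x^3, so y'(x) = 3x^2.

yprime = lambda x: 3 * x * x

def dy(x, dx):
    """The differential dy = y'(x) * dx; dx can be any number."""
    return yprime(x) * dx

x0 = 2.0
for dx in (10.0, 0.5, -3.0):       # nothing infinitesimal about these
    print(dy(x0, dx) / dx)         # always exactly y'(2.0) = 12.0
```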

And I wouldn't say the differential dx has no value. It is meant to correspond to the segment width Δx in a Riemann sum.
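That correspondence can be made concrete: in a Riemann sum, the dx literally plays the role of the strip width Δx. A quick sketch (the example integral, x^2 over [0, 1], is my own):

```python
# The dx in an integral corresponds to the strip width Δx in a Riemann sum.
# Example integral is mine: I( x^2 dx, 0..1 ) = 1/3.

f = lambda x: x * x
a, b, n = 0.0, 1.0, 1000
dx = (b - a) / n                  # the finite Δx that dx stands for

total = sum(f(a + i * dx) * dx for i in range(n))   # left Riemann sum
print(total)   # approaches 1/3 as n grows
```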

True, the Leibniz notation is convenient, but it loses its appeal in higher dimensions, where, say, for three variables related by f(x, y, z) = 0 you get (∂x/∂y)(∂y/∂z)(∂z/∂x) = -1, so the partials cannot be cancelled like fractions. Mathematicians couldn't decide which notation to adopt at first (who was better, Newton or Leibniz - I think we all know the answer to that!), and ultimately used both. At least, I remember reading something like this; I'm not sure exactly how accurate a story it is. As my knowledgeable teacher, Yoda, pointed out: if we use the symbol D for the derivative, then it would be most natural to define D^-1 for the antiderivative. E.g., if D F(x) = f(x), then D^-1 f(x) = F(x) + C.
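The D / D^-1 idea can be sketched numerically too: apply D (a finite difference) to F, then D^-1 (a running Riemann sum) to the result, and you recover F up to the additive constant C. The example function F = sin is my own choice:

```python
import math

# D: numeric derivative; D^-1: numeric antiderivative (running sum).
# If D F = f, then D^-1 f = F + C. Example F is mine: F(x) = sin(x).

F = math.sin
h = 1e-4
Df = lambda x: (F(x + h) - F(x - h)) / (2 * h)   # D F, approximately cos

def antideriv(f, a, x, n=2000):
    """D^-1: midpoint Riemann sum of f from a to x."""
    dx = (x - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

x0 = 1.2
recovered = antideriv(Df, 0.0, x0)   # should equal F(x0) - F(0), i.e. F + C
print(recovered, F(x0) - F(0.0))
```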
