Quote:Original post by Oluseyi
Quote:Original post by Polymorphic OOP
Quote:Original post by Oluseyi
Dot product is not distributive, ... Neither is it associative, given the fact that it can not be chained indefinitely.
Mathworld's definition. Take note that their definition even explicitly states that the dot product is commutative, distributive, and associative, which you are arguing against. Please don't force me to prove this when it's well-documented in any definition.
A cursory examination of their definition shows that it is only distributive and associative across other operations that yield vectors of the same dimension. Its distributiveness and associativity are not analogous to those of scalar multiplication, which is my point and part of my argument against overloading operator*.
Again, Oluseyi, I'm sorry but you are incorrect. In every manner in which the dot-product equivalent of multiplication of scalars is distributive, associative, and commutative, so is the dot product in other dimensions. In fact, when you tried to disprove the distributive property in your last reply, you didn't even use the distributive property! You used multiplication inside the parentheses instead of addition, which is not the distributive property at all. Look back at your example:
Quote:A · (B · C) != (A · B) · (A · C)
Firstly, you state that it exemplifies the distributive property, which it does not. Secondly, you claim that the right-hand side is not defined, which it is. Thirdly, that relationship does not even hold true for the multiplication of two scalars (one side would be a*b*c while the other would be a*b*a*c)!
Note: In retrospect, however, while you are incorrect about the distributive property, I will agree that one should not call the dot product associative, despite how many resources claim that it is (including MathWorld) and despite how I personally was taught. So, I agree that you should not consider the dot product associative. Perhaps not surprisingly, though, by the same logic multiplication in 1 dimension is sometimes technically not associative either, for exactly the same reason (namely, when that multiplication is the dot product in 1 dimension).

Quote:I'm not trying to change your mind, though. My objective is to present a well-founded argument against the practice to serve as a consideration for others who may come across this page in the future. It struck me in a whole new way, recently, that these forums are both a discussion and an archive, so it is imperative that we attempt to present as many sides of an argument as possible, especially when the final decision comes down to a value judgement.
I agree wholeheartedly with the idea that people should hear both sides of an opinion, and if I am presenting an opinion in these forums I always make sure to state it as such. However, please don't argue against facts.
Quote:Original post by Jingo
Or you could just overload the DotProduct function for scalars, then your code is just as generic, and more typesafe. You cannot, for instance, provide another type into your generic algorithm that has an overloaded * operator, but for which the dot product makes no sense.
My judgement: use a named friend function.
Again, I agree that that would be a valid reason to use a non-member function, and I have stated that if you make a dotproduct function you should overload it for scalars to perform regular multiplication. However, that can still indirectly bring about problems, which I alluded to earlier. Generic algorithms properly written for 1-dimensional values should always logically work for points, vectors, and scalars in multiple dimensions, since the mathematical logic can always theoretically carry over. If an algorithm in another library, for example, is properly templated for 1-dimensional values, you should be able to use the same function for multidimensional values. Since that function has no knowledge of your vector type, how would it know to use your dotproduct function? Both in real life and in programming, we use the same operator in regular, old 1-dimensional math for both a scaling multiplication and a dot-product multiplication; it's just that most people are never taught to discern between the two (though they are logically different).
So, why do we not discern between the two in our everyday math without multiple dimensions, and why don't we run into problems when we fail to?
Firstly, a scaling operation and a dot-product multiplication in 1 dimension are the same in terms of implementation. You can see this by applying a scaling operation on an n-dimensional vector in 1 dimension and applying the dot product of two vectors in 1 dimension: both resolve, in terms of implementation, to multiplying two values together. Still, they are logically different.
When working in 1 dimension, the separate constructs of scalars, vectors, and points still exist; it's just that people don't distinguish between them. Since a lot of people don't think about 1-dimensional math in terms of those separate constructs, I'll give a quick example of them in an everyday situation:
Imagine a thermometer which reads the temperature in degrees Celsius. At the beginning of the day it reads 0 degrees. By mid-day, the temperature has gone up by 2 degrees. Finally, from the beginning of the day to the end of the day, the temperature has gone up a total of three times the amount that it went up by mid-day. What is the temperature at the end of the day?
Most people will not think of this as a mathematical problem involving scalars, points, and vectors, and most people will certainly not believe it to be extendable to multiple dimensions. As a matter of fact, both are the case, much like any other problem you can come up with!
First, let's just write out the equation as someone normally would:
TempAtBegin + 3 * TempChangeToMidDay = TempAtEnd
Look, no difference between vectors, scalars, and points! Right?
Actually, you are using them, and using them appropriately -- it's just that most people do not think about it as such and never label them differently, since scalars, vectors, and points in 1 dimension all contain just 1 component and operations are never ambiguous. In 2 dimensions, 3 dimensions, 4 dimensions, or any other number greater than 1, you do not have that luxury, since scalars always have 1 component while vectors and points have n components.
How do you recognize an n-dimensional scalar, then? The key is that no matter how many dimensions a scalar is in, it always has 1 component, and usually that component is unitless. The first property is seen by thinking about the problem in a more abstract sense (if you are just scaling a vector by a coefficient, that coefficient is always going to be 1 component no matter the number of dimensions). Fortunately, the unit difference is more easily observable in 1 dimension. So, we'll start by labelling units:
TempAtBegin is in celsius units
3 has no units -- probably a scalar in any number of dimensions
TempChangeToMidDay is in celsius units
TempAtEnd is in celsius units

Okay, so now we think the 3 is a scalar in n dimensions: no matter how complex the space we are in, it has one component, and it's the one with no units, 3.
There is also the concept of points and vectors in this example. Before going into which are points and which are vectors, remember that a point represents an absolute location and a vector represents a direction with magnitude. As well, you cannot add points together, though you can subtract them to get a vector describing how to get from point A to point B. You can also add vectors together to get another vector.
So, firstly, what is the temperature at the beginning of the day? Does it ever make sense to add two absolute temperatures together? While at first you may think yes, the answer is actually no. What meaning would you get out of adding the beginning temperature and the end temperature of the day? You can add the two numbers together, but the result is in no way meaningful, just like adding together two points in space. However, what if you subtract 16 degrees from 34 degrees to get a "translation" which tells you how to get from 16 degrees to 34 degrees? There's meaning in that. So an absolute temperature is a point in 1-dimensional space.
Expanding on that, we know the result of that operation, a point minus a point, is a vector. We also know that we can scale a vector by a scalar to adjust its magnitude, and the result of that operation is just another vector. So let's plug it in:
TempAtBegin is a point
3 is a scalar
TempChangeToMidDay is a vector
TempAtEnd is a point

So, how does this look now:
Point + Scalar * Vector = Point
We know this equation is valid, in terms of whether the operations are defined, when the 3 is looked at as a scalar in any number of dimensions. Had we said the 3 was a vector, we would get
Point + Vector * Vector = Point
which is not a mathematically defined operation
and if we looked at the 3 as a point, we would have
Point + Point * Vector = Point
which again is not mathematically defined.
Edit: Back, finishing up
So, even when working in 1 dimension, the concepts of vectors, points, and scalars still exist, whether you personally choose to think about them that way or not. How should that impact design? It means that if you want a truly modular design then, in any way reasonably possible, your algorithms should be applicable to any number of dimensions, which is surprisingly simple.
So now, imagine an abstract templated algorithm that can work in n dimensions (which, again, is much simpler than it sounds). This could be, for a quick example, an algorithm meant to determine whether two vectors are pointing in a "similar" direction (the angle between them is less than pi/2 radians). In terms of scalars, this has the effect of checking whether the signs of the values are the same. The abstract solution is to check the sign of the dot product of the two vectors. This will work in any number of dimensions, including 1.
Acknowledging this relationship between the dimensions, as the programmer of the library do you make a dot product function that people must overload or use Koenig lookup with in order to take advantage of it, or do you use operator*? What are the benefits of each?
If you take the overloading/named-function Koenig approach, it means that for every type introduced by a user of the library -- whether vector types, new numeric types, etc. -- you would have to create a version of that function. Moreover, if another library also needs a dot-product style operation, which function would it use? It would be great if it used the same name that you used, but if it didn't, then a person using both libraries would have to create yet another version of that function so that the type works in their library as well. The benefit is that you use a different function name depending on whether you are doing a scaling operation, a dot product, or possibly a 3-dimensional cross product.
If you take the operator overloading approach, you are using the same syntax for the dot product as in 1 dimension, where it is already established to be a simple multiply. This means that any scalar type you are working with, if it already defines scalar multiplication properly, will also work appropriately in your library without the user ever having to define extra functions. The dot product of scalars is already established to be equivalent in implementation to multiplication of scalars, and no ambiguity can result, since the concepts of vectors, points, and scalars are logically different. Believe it or not, Oluseyi actually recognized this reasoning of allowable operator re-use himself when he stated that it is okay to use binary operator+ for string concatenation since the domain is different. Here, the domain is different as well, since scalars, vectors, and points are all logically different constructs.
So the operator overloading approach requires no extra coding for new scalar types which can be multiplied together, whereas the named-function approach does, but may avoid confusion. Operator overloading might cause confusion to some, though never ambiguity.
It would be great if in everyday math people thought of vectors and the differences between constructs even in 1 dimension, and it would also be nice if that were reflected in programming, but neither is generally the case. There isn't an established standard in C++ for differentiating the dot product from a scaling operation for scalars, since most people do not break scalars into theoretical vectors, points, and scalars (unless you count the inner_product function in the C++ standard library, which requires knowledge of the implementation details of the types as well as requiring them to be in a componentized form). There is, however, already a standard for applying the dot product to scalars, both in programming and outside of it, even though most people don't think about it: just use the multiplication operator. Since that is already done and can't cause any problems, in my opinion it is best to use it.
[Edited by - Polymorphic OOP on March 5, 2005 2:09:12 PM]