The const operator

Started by
15 comments, last by Doc 19 years, 1 month ago
Quote:Original post by Polymorphic OOP
Because in the minds of many mathematicians, dot product is the vector equivalent of multiplication extended into multiple dimensions. When written down on paper, dot product is even sometimes represented in the same manner as regular multiplication ...
I'd have to dispute the preceding. The dot product actually has a distinct notational symbol, ·; keep in mind that multiplication is represented in print media as ×, not *.

Quote:Before going into exactly why I believe multiplication is an application of dot product, observe some of the likenesses -- in terms of math, dot-product is commutative, distributive, associative, and it has several common logical operations which parallel those of multiplication of scalars, such as, most-notably, the fact that the dot product is the multiplication of the magnitudes of the vectors when projected along one of their associated lines and the related fact that a vector dot-producted by itself gives you the square of its magnitude.
Dot product is not distributive, as the result of a dot product is not a vector. A · (B · C) != (A · B) · (A · C). The rhs is not even defined.

Neither is it associative, given the fact that it can not be chained indefinitely.

Quote:Aside from that, it makes much more logical sense to use operator * for multiplication than it would, for example, to have the addition operator concatenate strings or the left shift operator to output text to a stream, although I personally will not argue against either one. If you argue against operator* for dot product since it's not multiplication, I'd expect you to have even bigger gripes with the latter two examples.
It comes down to domain. For non-mathematical types and operations, misappropriation of mathematical operators (of which the left shift operator is not exactly one, except in computer programming) is more tolerable. The use of the addition operator to represent the concatenation - the "addition" - of two strings is logical. The left shift operator is not, but it has somehow passed into common parlance, perhaps because certain shifts do, in fact, insert a value - zero.

Quote:I don't expect to change your opinion, neither do I want to, as it is mostly subjective unless you wish to take advantage of the generalizations I've mentioned. You should, however, know that overloading operator* for dot-product is not as obscure as you might think.
You're right, it's not obscure. It's very common, albeit arguably very bad practice. YMMV.
The problem is not that the dot product is not exactly the same as scalar multiplication; the problem is that you also have the cross product to think about. In the context of computer graphics, the cross product and the dot product are about equally used (the cross product more for geometry, the dot product more for rendering). So you're stuck with assigning one to the * operator and one to a named function (I've heard of people using ^ for cross product, but this is YIKES ugly and also has the wrong precedence). Either way you choose to do it--and I've seen people choose to do it both ways--you decrease readability, which is precisely the opposite of what operator overloading was intended to do.
Quote:Original post by Oluseyi
I'd have to dispute the preceding. The dot product actually has a distinct notational symbol, ·; keep in mind that multiplication is represented in print media as ×, not *.

That is why I specifically said "sometimes." While many texts use a slightly different operator (not-so-surprisingly a dot), some use the same exact symbol for multiplication of scalars.

Quote:Original post by Oluseyi
Dot product is not distributive, as the result of a dot product is not a vector. A · (B · C) != (A · B) · (A · C). The rhs is not even defined.

Neither is it associative, given the fact that it can not be chained indefinitely.

Mathworld's definition. Take note that their definition even explicitly states that the dot product is commutative, distributive, and associative, which you are arguing against. Please don't force me to prove this when it's well-documented in any definition.

Quote:You're right, it's not obscure. It's very common, albeit arguably very bad practice. YMMV.

While you call it bad practice, many people consider it very good practice. As I have stated, scalar multiplication is the dot product in 1-dimensional space. Since they are the same exact operation, you gain by giving the operation the same syntax. Doing so allows you to use vectors and scalars, for example, interchangeably in generic algorithms where the abstract concept of the operation makes sense. It's the same reason why iterators use the same operators regardless of the container type they are iterating over. The operation is logically the same whether you are iterating over lists or deques or vectors, so the same syntax should be used for all. If you didn't use the same syntax, you wouldn't have the numerous algorithms which work on the abstraction of iterators. The same reasoning holds true for the dot product and multiplication, since the multiplication of two scalars is just the dot product in 1 dimension.

Given that you understand that, if you still don't want to use operator* for fear of confusion, that's perfectly fine; however, I would still suggest at least overloading your dot product function for fundamental types so that it simply does multiplication (or, more likely, relying on Koenig lookup).

Edit:

Quote:Original post by Sneftel
The problem is not that the dot product is not exactly the same as scalar multiplication; the problem is that you also have the cross product to think about. In the context of computer graphics, the cross product and the dot product are about equally used (the cross product more for geometry, the dot product more for rendering). So you're stuck with assigning one to the * operator and one to a named function (I've heard of people using ^ for cross product, but this is YIKES ugly and also has the wrong precedence). Either way you choose to do it--and I've seen people choose to do it both ways--you decrease readability, which is precisely the opposite of what operator overloading was intended to do.

Agreed completely, which I also already mentioned. Confusion between dot product and cross product is understandable, so a non-member dot product makes sense too; in my opinion that is one of the only truly valid reasons not to overload operator* for dot product. In taking that approach, however, I would suggest that one also make a dot product function that works on scalars and ultimately just does standard multiplication, for all of the reasons I just mentioned. Also, in that case, any algorithms working with the abstract concept of multiplication via operator* become useless for vectors even though the dot product may logically make sense.

As a side note, I also agree that you should not use operator^ for cross product, most notably because it is merely a special case that the cross product is even a binary operation in 3 dimensions (it takes n-1 parameters, where n is the number of dimensions). If you were to use an operator, it would have to be one that can take any number of parameters, including none, which isn't possible in C++. Because of that, one would probably always be better off using a non-member function for cross product which takes a different number of parameters depending on the number of dimensions the vector resides in (or a function which takes an iterator range or a single argument representing a group of n-1 vectors, which would obviously be hell for the average programmer though a delight for a template metaprogrammer). Also, note the standard library's inner_product algorithm in the <numeric> header, which can be used to work with the form of abstraction I have been talking about.

[Edited by - Polymorphic OOP on March 4, 2005 3:09:01 AM]
Quote:Original post by Polymorphic OOP
Quote:Original post by Oluseyi
Dot product is not distributive, ... Neither is it associative, given the fact that it can not be chained indefinitely.
Mathworld's definition. Take note that their definition even explicitly states that the dot product is commutative, distributive, and associative, which you are arguing against. Please don't force me to prove this when it's well-documented in any definition.
A cursory examination of their definition shows that it is only distributive and associative across other operations that yield vectors of the same dimension. Its distributiveness and associativity are not analogous to those of scalar multiplication, which is my point and part of my argument against overloading operator*.

I'm not trying to change your mind, though. My objective is to present a well-founded argument against the practice to serve as a consideration for others who may come across this page in the future. It struck me in a whole new way, recently, that these forums are both a discussion and an archive, so it is imperative that we attempt to present as many sides of an argument as possible, especially when the final decision comes down to a value judgement.

My judgement: use a named friend function.
Quote:Original post by Polymorphic OOP
Quote:You're right, it's not obscure. It's very common, albeit arguably very bad practice. YMMV.

While you call it bad practice, many people consider it very good practice. As I have stated, scalar multiplication is the dot-product in 1 dimensional space. Since they are the same exact operation, you serve to gain by making the operation the same syntax. Doing so allows you to use vectors and scalars, for example, interchangeably in generic algorithms where the abstract concept of the operation makes sense.



Or you could just overload the DotProduct function for scalars; then your code is just as generic, and more typesafe. You cannot, for instance, accidentally pass a type into your generic algorithm that has an overloaded * operator but for which the dot product makes no sense.
Quote:Original post by Oluseyi
Quote:Original post by Polymorphic OOP
Quote:Original post by Oluseyi
Dot product is not distributive, ... Neither is it associative, given the fact that it can not be chained indefinitely.
Mathworld's definition. Take note that their definition even explicitly states that the dot product is commutative, distributive, and associative, which you are arguing against. Please don't force me to prove this when it's well-documented in any definition.
A cursory examination of their definition shows that it is only distributive and associative across other operations that yield vectors of the same dimension. Its distributiveness and associativity are not analogous to those of scalar multiplication, which is my point and part of my argument against overloading operator*.

Again, Oluseyi, I'm sorry but you are incorrect. In every manner that the dot-product equivalent of multiplication of scalars is distributive, associative, and commutative, so is the dot product in other dimensions. In fact, when you were trying to disprove the distributive property in your last reply, you didn't even use the distributive property! You used multiplication inside the parentheses instead of addition, which is not the distributive property at all. Look back at your example:

Quote:A · (B · C) != (A · B) · (A · C)

Firstly, you state that it would exemplify the distributive property, which it does not. Secondly, you claim that the right hand side is not defined, which it is. Thirdly, that relationship does not even hold true for the multiplication of two scalars (one side would be a*b*c while the other would be a*b*a*c with scalars)!

Note: In retrospect, however, while you are incorrect about the distributive property, I am going to agree that one should not call the dot product associative, despite how many resources claim that it is (including Mathworld) and how I personally was taught. So, I am going to agree that you should not consider the dot product associative. However, perhaps not surprisingly, by the same logic certain multiplication in 1 dimension is technically not associative either, for exactly the same reason (and that case is exactly when the multiplication is the dot product in 1 dimension).

Quote:I'm not trying to change your mind, though. My objective is to present a well-founded argument against the practice to serve as a consideration for others who may come across this page in the future. It struck me in a whole new way, recently, that these forums are both a discussion and an archive, so it is imperative that we attempt to present as many sides of an argument as possible, especially when the final decision comes down to a value judgement.

I agree wholeheartedly with the idea that people should hear both sides of an opinion and I always make sure that if I am presenting an opinion in the forums that I state it as such, however, please don't argue against facts.

Quote:Original post by Jingo
Or you could just overload the DotProduct function for scalars, then your code is just as generic, and more typesafe. You cannot, for instance, provide another type into your generic algorithm that has an overloaded * operator, but for which the dot product makes no sense.

My judgement: use a named friend function.

Again, I agree that that would be a valid reason to use a non-member function, and I have stated that if you make a dot product function you should overload it for scalars to perform regular multiplication; however, that can still indirectly bring about problems, which I alluded to earlier. Generic algorithms properly written for 1-dimensional values should always logically work for points, vectors, and scalars in multiple dimensions, since the mathematical logic can always theoretically carry over. If an algorithm in another library, for example, is properly written as a template for 1-dimensional values, you should be able to use the same function for multidimensional values. Since that function has no knowledge of your vector type, how would it know to use your dot product function? In real life and in the programming world, we use the same operator in regular old 1-dimensional math for multiplication, which holds true for both a scaling multiplication and a dot-product multiplication; it's just that most people are never even taught to discern between the two (though they are logically different).

So, why do we not discern between the two in our every-day math without multiple dimensions, and as well, why don't we run into problems when doing so?

Firstly, a scaling operation and a dot-product multiplication in 1 dimension are the same in terms of implementation. You can see this by restricting a scaling operation on an n-dimensional vector to 1 dimension, and by taking the dot product of two vectors in 1 dimension: both, in terms of implementation, resolve to multiplying two values together. Still, they are logically different.

When working in 1 dimension, the separate constructs of scalars, vectors, and points still exist; it's just that people don't distinguish between them. Since a lot of people don't think about 1-dimensional math in terms of those separate constructs, I'll give a quick example of them in an everyday situation:

Imagine a thermometer which tells the temperature in degrees Celsius. At the start of the day it reads 0 degrees. By mid-day, the temperature has gone up by 2 degrees. Finally, from the beginning of the day to the end of the day, the temperature has gone up a total of three times the amount it went up by mid-day. What is the temperature at the end of the day?

Most people will not think of this as a mathematical problem that can use scalars, points, and vectors, and most people will certainly not believe it to be extendable to multiple dimensions. As a matter of fact, both are the case, much like with any other problem you can come up with!

First, let's just write out the equation as someone normally would:

TempAtBegin + 3 * TempChangeToMidDay = TempAtEnd

Look, no difference between vectors, scalars, and points! Right?

Actually, you are using them, and using them appropriately -- it's just that most people do not think about it as such and never label them differently, since scalars, vectors, and points in 1 dimension all contain just one component and operations are never ambiguous. In 2, 3, 4, or any other number of dimensions greater than 1, you do not have that luxury, since scalars always have one component while vectors and points have n components.

How do you recognize an n-dimensional scalar, then? The key is that no matter how many dimensions a scalar lives in, it always has one component, and usually that component is unitless. The first property can be seen by thinking about the problem more abstractly (if you are just scaling a vector by a coefficient, that coefficient is always going to have one component no matter the number of dimensions). Fortunately, the unit difference is easily observable in 1 dimension. So, we'll start by labelling units:

TempAtBegin is in celsius units

3 has no units (probably a scalar in any number of dimensions)

TempChangeToMidDay is in celsius units

TempAtEnd is in celsius units

Okay, so now we suspect the 3 is a scalar in n dimensions: no matter how complex the space we are in, it has one component, and it's the term with no units.

There is also the concept of points and vectors in this example. Before going into which are points and which are vectors, remember that a point represents an absolute location and a vector represents a direction with magnitude. As well, you cannot add points together though you can subtract them to get a vector describing how to get from point a to point b. You can also add vectors together to get another vector.

So, firstly, consider the temperature at the beginning of the day. Does it ever make sense to add two absolute temperatures together? While at first you may think yes, the answer is actually no. What meaning would you get out of adding the beginning temperature and the end temperature of the day? You can add the two numbers together, but the result is in no way meaningful, just like adding together two points in space. However, what if you subtract 16 degrees from 34 degrees to get a "translation" which tells you how to get from 16 degrees to 34 degrees? There's meaning in that. So an absolute temperature is a point in 1-dimensional space.

Expanding on that, the result of that operation, a point minus a point, we know is a vector. We can also know that we can scale a vector by a scalar to adjust its magnitude. The result of this operation is just another vector. So let's plug it in:

TempAtBegin is a point

3 is a scalar

TempChangeToMidDay is a vector

TempAtEnd is a point

So, how does this look now:

Point + Scalar * Vector = Point

We know this equation is valid, in terms of whether the operations are defined, when the 3 is treated as a scalar in any number of dimensions. Had we said the 3 was a vector, we would end up with

Point + Vector * Vector = Point

which is not a mathematically defined operation

and if we looked at the 3 as a point, we would have

Point + Point * Vector = Point

which again is not mathematically defined.

Edit: Back, finishing up

So, even when working in 1 dimension, the concepts of vectors, points, and scalars still exist, whether you personally choose to think about them that way or not. How should that impact design? It means that if you want a truly modular design then, wherever reasonably possible, your algorithms should be applicable to any number of dimensions, which is surprisingly simple.

So now, imagine an abstract templated algorithm that can work in n dimensions (which, again, is much simpler than it sounds). This could be, for a quick example, an algorithm meant to determine whether two vectors are pointing in a "similar" direction (the angle between them is less than pi/2 radians). In terms of scalars, this has the effect of checking whether the signs of the values are the same. The abstract solution is to check the sign of the dot product of the two vectors. This works in any number of dimensions, including 1.

Acknowledging that relationship between the dimensions, as the programmer of the library, do you make a dot product function that people must overload (or reach via Koenig lookup) in order to take advantage of it, or do you use operator*? What are the benefits of each?

If you take the overloading or named-function Koenig approach, it means that for every type introduced by a user of the library -- whether vector types or new numeric types, etc. -- you would have to create a version of that function. Moreover, if another library also needs a dot-product-style operation, which function would it use? It would be great if it used the same name that you used, but if it didn't, then a person using both libraries would have to create yet another version of that function so that the type works in their library as well. The benefit would be that you use a different function name depending on whether you are doing a scaling operation, a dot product, or possibly a 3-dimensional cross product.

If you take the operator overloading approach, you are using the same syntax for the dot product in 1 dimension, since it is already established to be a simple multiply. This means that any scalar type you work with, provided it already defines multiplication properly, will also work appropriately in your library without the user ever having to define extra functions. The dot product of scalars is already established to be equivalent in implementation to the multiplication of scalars, and no ambiguity can result, since the concepts of vectors, points, and scalars are logically different. Believe it or not, Oluseyi actually recognized this reasoning of allowable operator re-use when he stated that it is okay to use binary operator+ for string concatenation since the domain is different. Here, the domain is different as well, since scalars, vectors, and points are all logically different constructs.

So the operator overloading approach requires no extra coding for new scalar types which can be multiplied together, whereas the named function approach does but may avoid confusion. Operator overloading might cause confusion to some, though never ambiguity.

It would be great if in everyday math people thought of vectors and the differences between constructs even in 1 dimension, and it would also be nice if that was reflected in programming, but neither is generally the case. There isn't an established standard for differentiating the dot product from a scaling operation on scalars in C++, since most people do not break scalars into theoretical vectors, points, and scalars (unless you count the inner_product function in the C++ standard library, which would require knowledge of the implementation details of the types as well as require them to be in a componentized form). There is, however, already a standard for applying the dot product to scalars, both in programming and outside of it, even though most people don't think about it: just using the multiplication operator. Since that is already done and can't cause any problems, in my opinion it is best to use it.

[Edited by - Polymorphic OOP on March 5, 2005 2:09:12 PM]
Wow.

