
## Recommended Posts

Consider the following C++ code:

```cpp
#include <iostream>

class A {};
class B : public A {};

void Frobnicate(const A&) { std::cout << "A"; }
void Frobnicate(const B&) { std::cout << "B"; }

A* a = new B();
Frobnicate(*a);
delete a;
```

This displays "A", and not "B". Why did the language designers make this choice? Implementation issues notwithstanding, what other reasons could they have had for making this choice? While this approach is consistent with itself, I'm having trouble wrapping my mind around why compile-time binding by default (using the static type of variables) is necessary in a language.

##### Share on other sites
The compiler is just being self-consistent in this case.

Actually, this is not a polymorphic call: in C++, polymorphic dispatch only happens through virtual member functions (functions declared inside the class).

Search the MSDN help on overloaded function resolution to see how the compiler chooses among overloaded functions.

In this case, the overload taking const A& is an exact match, because the static type of *a is A.

Although I can't say whether they thought about it or not, in my own evaluation (if I were the designer) this is a case where the more general design must be taken.

The other behavior may be achieved through virtual member methods.
```cpp
#include <iostream>

class A {
public:
    virtual void Frobnicate() const { std::cout << "A"; }
    virtual ~A() {}
};

class B : public A {
public:
    void Frobnicate() const { std::cout << "B"; }
};

void Frobnicate(const A& a) { a.Frobnicate(); }

A* a = new B();
Frobnicate(*a); // displays "B"
delete a;
```

If they took the other way, I don't know how they could achieve the same generality.

##### Share on other sites
To rephrase my question better: why perform overloading based on the static type of the object reference, instead of polymorphic dispatch based on the dynamic type of the object itself? What purpose does the type of the object reference serve, that makes it worthy of being the overloading decision factor? Assuming that the language did behave as I proposed, what frequent construct would have been broken by the new behaviour?

How often does one need to make actions depend on the type of a reference, as opposed to the type of an object?

##### Share on other sites
All overloads are resolved at compile time, based on the parameters' static types; surely it would be non-obvious and unintuitive if this were not the case and *some* (but not all) overloads were instead deferred to run-time?

virtual member functions are clearly marked as being "special". It is of course arguable whether the language should also support virtual non-member functions (but then we should probably support multiple dispatch too, for completeness' sake).

##### Share on other sites
Quote:
 Original post by Sharlin
All overloads are resolved at compile time, based on the parameters' static types; surely it would be non-obvious and unintuitive if this were not the case and *some* (but not all) overloads were instead deferred to run-time?

Why "some"? I'm arguing against the use of static reference types in deciding what should be done. In my perfect world, the static type of objects is not used, only the dynamic type is. And I'm arguing that by doing this, no useful feature of the language is lost.

Quote:
 virtual member functions are clearly marked as being "special". It is of course arguable whether the language should also support virtual non-member functions

What's so special about virtual member functions? In an object-oriented context, I would find on the contrary that it is the non-virtual member functions that are "special".

Quote:
 (but then we should probably support multiple dispatch too, for completeness' sake).

And the compiler would probably do a much better job than anyone else as far as multiple dispatch is concerned.

##### Share on other sites
Quote:
 Original post by ToohrVyk
To rephrase my question better: why perform overloading based on the static type of the object reference, instead of polymorphic dispatch based on the dynamic type of the object itself? What purpose does the type of the object reference serve, that makes it worthy of being the overloading decision factor?

How do you propose the compiler handle this?

```cpp
// Translation unit 1
#include <iostream>

class A {};
class B : public A {};
void Frobnicate(const B&) { std::cout << "B"; }

// Translation unit 2
class A;
class B;
void Frobnicate(const B&);

void func(A* a)
{
    Frobnicate(*a); // only Frobnicate(const B&) is visible here
}
```

##### Share on other sites
Quote:
 Original post by ToohrVyk
Why "some"? I'm arguing against the use of static reference types in deciding what should be done. In my perfect world, the static type of objects is not used, only the dynamic type is. And I'm arguing that by doing this, no useful feature of the language is lost.

Static typing is the "default" in C++ because of its history as "C with classes" and its "you don't pay for what you don't use" paradigm — dynamic dispatch is more of a "special" feature brought in to support OO. The simple rule of thumb is that overloading is "fast", dynamic dispatch "slow".

Besides, generalizing dynamic typing in the manner you describe would probably necessitate generating RTTI for every single class in the program, and a pointer to some kind of a generalized vtable structure in every single object ever created. This is clearly an unacceptable breach of C++'s core philosophy.

Quote:
 And the compiler would probably do a much better job than anyone else as far as multiple dispatch is concerned.

Agreed.

##### Share on other sites
Quote:
Original post by joanusdmentia
Quote:
 Original post by ToohrVyk
To rephrase my question better: why perform overloading based on the static type of the object reference, instead of polymorphic dispatch based on the dynamic type of the object itself? What purpose does the type of the object reference serve, that makes it worthy of being the overloading decision factor?

How do you propose the compiler handle this?

*** Source Snippet Removed ***

It depends on the language philosophy. A language like C++ prevents instantiation of abstract classes and implicit base-to-derived pointer conversions, because allowing these might lead to incorrect behaviour (calling a pure virtual function, causing a type mismatch). In this philosophy, since func might be called with an A* that points to something other than a B, for which no overload is known to exist (a sibling or ancestor of B), such a call would be rejected.

In a more flexible language, such as a slightly more lenient version of C# (or even PHP if it had such things as user-defined types and overloads), the compiler would let the function be called, and throw an incorrect type exception if, at runtime, no correct overload is found.

My preferred level of safety on this issue is to consider such functions part of the class's signature. If Frobnicate were a member function instead of a free-standing one, the compiler could rightfully refuse the call, because an object of type A has no such function. The translation-unit problem (where a Frobnicate(const A&) might exist in a third, unmentioned translation unit) is solved by deferring the check to the linking stage: the compiler assumes that an overload of Frobnicate exists for A or a supertype of A until the linker proves it wrong, at which point it is allowed to complain. Java classes can call each other without knowing each other's signatures at compile time, and C# handles partial classes pretty well; both are very similar to this proposal, with the sole exception that I am considering a free-standing function instead of a member.

##### Share on other sites
Quote:
 Original post by Sharlin
Besides, generalizing dynamic typing in the manner you describe would probably necessitate generating RTTI for every single class in the program, and a pointer to some kind of a generalized vtable structure in every single object ever created. This is clearly an unacceptable breach of C++'s core philosophy.

Not necessarily. I think we both agree that if dynamic dispatch is wanted, then it is philosophically consistent to create a vtable (or enlarge the existing vtable). Where we disagree is the situation where dynamic dispatch is not needed. To summarize the situation:

```cpp
B b;
Frobnicate(b);  // I want this to display "B"
A* a = &b;
Frobnicate(*a); // I want this to display "A"
```

My first observation is that I cannot imagine a situation where this behaviour would be wanted. My second is that if this behaviour really is necessary, why not simply express it using types for which dispatch is impossible (because subtyping does not exist)?

```cpp
void pFrobnicate(B*) { std::cout << "B"; }
void pFrobnicate(A*) { std::cout << "A"; }

B b;
pFrobnicate(&b); // displays "B"
A* a = &b;
pFrobnicate(a);  // displays "A"
```

The code is not much more complex than it was before (although the pass-by-pointer is somewhat ugly), and since such code is rare the cost should not matter much. Conversely, it removes the need to implement functions which DO need dynamic dispatch through an extra layer of member functions.

Or, more simply, use the approach currently used for marking "virtual" functions. By marking "non-virtual" functions instead, it would be possible to use dynamic dispatch only when required, thereby causing a much smaller breach of the philosophy (the only problem with this being that it's opt-out instead of opt-in, but then again, so are exceptions on most compilers).

##### Share on other sites
After working on this question some more, I have isolated two potential problems.

The first problem here is that such functions interact badly with namespaces. Consider for instance a function on Base defined in namespace Foo, and an overload on Derived defined in namespace Bar. Which overload should be called if both namespaces are used? If only Foo is used? If only Bar is used?

This problem can be solved by requiring that all overloads of a function be part of the same namespace (while still allowing a function to be inserted into a namespace under this approach). So Foo::message can only be overloaded by another Foo::message, and Bar::message is a different function altogether.

The second problem is that of type-checking. I had mistakenly assumed that types would form a lattice, which is clearly not the case. As mentioned in another thread, checking that a set of overloads is unambiguous is quite time-consuming.

A third problem: someone once told me of an article claiming to prove that this kind of overloading causes computability problems in certain areas. I haven't been able to pin down the exact problem or find the article yet.
