Private virtual functions, public virtual destructors, and protected non-virtual destructors

12 comments, last by Hodgman 10 years ago

EDIT: Splitting this off into a new topic because it was too disruptive where it was. Earlier discussion here.


"Public and virtual" -> Unnecessary if not inherited, necessary if inherited.
"Protected and non-virtual" -> What protection does that give?

Something seems to be going over my head here. Would you mind elaborating?

(Perhaps my question + your explanation should be split into its own thread in the General Programming subforum if the answer is technical)

Herb Sutter (the chair of the C++ committee) has described it well several times over the years. The reasons to use virtual functions, and the proper place for them, are worth revisiting frequently.

Most experienced C++ programmers should be familiar with all four of these detailed guidelines.

Item 4 on the list covers this specific case. Seeing all the protected functions means the class is meant for inheritance, therefore it seems public and virtual is the rule.

Item 1 on the list is especially painful, and I try to educate game programmers when I see it. Back on one of my DS titles (remember: 66MHz, or about 1 million cycles per frame) one of the leads decided to give some of the major classes public virtual interfaces. Since he was fairly new to C++, coming from a C background, and since the code sometimes did rely on polymorphic behavior, he was hard to convince otherwise. It seemed to be the C++ way of doing things: public virtual functions were found in a few libraries (not the C++ standard libraries, of course!), so he was convinced to use them.

Down the road a few weeks we were looking at profiles of the rapidly growing game. A large percentage (we couldn't be certain how large yet) of our CPU time was being consumed by virtual dispatch in final-optimized builds. Evidence in hand, it was easy to test and verify. In my test I refactored only two or three of the simplest virtual functions (the ones with no variation in leaf classes) and recovered several percent of each frame's compute cycles.

It wasn't too difficult to change once he saw the problem. We removed the virtual functions from the public interface of several major base classes, sometimes creating a member variable that was modified during construction, other times creating functions for the bits that needed to be dynamically overridden, and we powered through the major offenders in the code base in about a day. IIRC we recovered about 8% of our CPU time with that simple change.

Overuse of virtual functions can quickly ruin game performance. An obvious symptom (if you are actually concerned about it) is a large number of zero-byte or four-byte functions in the symbol table. Usually these are virtual functions that do nothing; in many cases it is possible to completely eliminate the function call overhead.

As for the complications, you can call "delete pFoo" when the object has a private destructor. What you cannot do is call the destructor directly, which can be an issue with classes like shared_ptr or custom memory managers. The solution there is to provide a deleter function, an optional parameter on shared_ptr and on many custom memory managers; it defaults to simply calling delete, but can be replaced with whatever you need.
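For illustration, a minimal sketch of the deleter approach with std::shared_ptr (names made up; a static member function is one way to give the deleter access to the non-public destructor):

#include <memory>

class Foo
{
public:
	Foo() {}
	static void Destroy(Foo* p) { delete p; } // a member, so it may access ~Foo
private:
	~Foo() {} // non-public: plain "delete pFoo" won't compile outside Foo
};

void test()
{
	std::shared_ptr<Foo> sp(new Foo, &Foo::Destroy); // custom deleter
} // sp's deleter runs Foo::Destroy, which can reach the private destructor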


Most experienced C++ programmers should be familiar with all four of these detailed guidelines.

Thank you for the link; that helped explain it a lot.

Item 4 on the list covers this specific case. Seeing all the protected functions means the class is meant for inheritance, therefore it seems public and virtual is the rule.

Ah, I glanced at the class and didn't see any virtual functions (and Ctrl+F'd for 'abstract', 'inheritance', 'derived', etc... and didn't find any), so I thought you were saying that concrete classes not intended to be inherited from should have virtual destructors anyway.

What you and the article seem to be saying is: (correct me if I'm wrong)
A) Concrete classes should almost never be derived from.
B) Interfaces shouldn't be virtual, only implementations.
C) Abstract classes should always either have a protected destructor, or a public virtual destructor (which is what you initially said, and what the article says).

I'm still a bit confused about (C) though - I get why destructors need to be virtual if the class is to be derived from, but I don't get why (and when) they should be protected if non-virtual.

Part of my confusion was that I never realized you could delete objects with private/protected destructors even from outside the class, and that private virtual functions can still be overridden (just not called) by derived classes.
Two interesting twists on class member accessibility that I have to learn better!

So is it good practice for concrete classes to enforce their concreteness by being declared 'final'?
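(Something like this minimal sketch, with made-up names:)

class Widget final // C++11: nothing is allowed to derive from Widget
{
public:
	~Widget() {} // public and non-virtual is safe here, since derivation is impossible
};

// class Gadget : public Widget {}; // error: cannot derive from a 'final' class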

And how does making the destructor non-virtual and protected actually help avoid the crashes here:
Base* b = new Derived;
delete b; // Base::~Base() had better be virtual!
...if you can still call delete on classes (such as base classes) with protected/private destructors?

What benefit does a protected destructor actually give (other than the inability to manually call Base::~Base())?

Overuse of virtual functions can quickly ruin game performance. An obvious symptom (if you are actually concerned about it) is a large number of zero-byte or four-byte functions in the symbol table. Usually these are virtual functions that do nothing; in many cases it is possible to completely eliminate the function call overhead.

It seems that's more of an issue of, "Let's make everything virtual, just in case we later want to override it", which is definitely a poor way of doing things!

I haven't learned the other lessons yet: No deriving from concretes, and non-virtual public interfaces with virtual (only when needed) implementations. I'll have to bear that in mind - I dislike changing coding styles mid-project, but it makes sense to me (or at least is starting to), so I'll try to apply this in my own code.

Public and virtual, or protected and non-virtual: those are the options for C++ destructors.

Guideline #4: A base class destructor should be either public and virtual, or protected and nonvirtual

The "base class" part is very important there.

If the class isn't meant to ever be inherited from, then public and nonvirtual destructors are the norm.
Polymorphism / inheritance is rare; therefore, public-virtual and protected-nonvirtual destructors should also be rare...
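In other words, something like this (a minimal sketch, class names made up):

// Option 1: public and virtual -- safe to delete through a base pointer.
class PolymorphicBase
{
public:
	virtual ~PolymorphicBase() {}
};

// Option 2: protected and non-virtual -- "delete basePtr" won't even compile
// (see the example below), and the destructor adds no vtable.
class MixinBase
{
protected:
	~MixinBase() {}
};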

As for the complications, you can call "delete pFoo" when the object has a private destructor. What you cannot do is call the destructor directly

Not on my compiler:


class Foo { ~Foo() {} };
void test()
{
	Foo* f = new Foo;
	delete f; // generates error: 'Foo::~Foo' : cannot access private member declared in class 'Foo'
}

And how does making the destructor non-virtual and protected actually help avoid the crashes here:


Base* b = new Derived;
delete b; // Base::~Base() had better be virtual!
...if you can still call delete on classes (such as base classes) with protected/private destructors?

You can't. It avoids them at compile time, forcing you not to use delete on a base pointer.


class Base { protected: ~Base() {} };
class Derived : public Base { public: ~Derived() {} };
void test()
{
	Base* b = new Derived;
	delete b; // generates error: 'Base::~Base' : cannot access protected member declared in class 'Base'
	delete (Derived*)b; // works
}

What you and the article seem to be saying is: (correct me if I'm wrong)
A) Concrete classes should almost never be derived from.
B) Interfaces shouldn't be virtual, only implementations.
C) Abstract classes should always either have a protected destructor, or a public virtual destructor (which is what you initially said, and what the article says).

B is backwards. Base classes should be at least partially virtual; some people argue they should be pure virtual (aka abstract).

C is not about abstract classes, but about all base classes.

The reason for the protected non-virtual and public virtual destructor is normally pretty clear:

Base *p = new Derived();
delete p;

If the base destructor is non-virtual, the wrong destructor will be called. Hence the guideline that if a class is designed for inheritance then the destructor should be virtual. On the flip side, by making the destructor non-virtual and protected you make it so others cannot derive from the class.
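To make the failure concrete (a minimal sketch, names made up):

class Base { public: ~Base() {} }; // public and non-virtual: the trap
class Derived : public Base
{
public:
	~Derived() { /* frees resources */ }
};

void test()
{
	Base* p = new Derived;
	delete p; // undefined behaviour: Derived::~Derived never runs, so its resources leak
}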


Overuse of virtual functions can quickly ruin game performance. An obvious symptom (if you are actually concerned about it) is a large number of zero-byte or four-byte functions in the symbol table. Usually these are virtual functions that do nothing; in many cases it is possible to completely eliminate the function call overhead.

It seems that's more of an issue of, "Let's make everything virtual, just in case we later want to override it", which is definitely a poor way of doing things!

No. We really were using polymorphic behavior. In that case it was a pattern of PreUpdate(), Update() and PostUpdate(). Nearly every class left the Pre and Post functions empty, and about half provided Update functions. This meant every frame we had about 20,000 empty function calls that could be eliminated.

I've seen the same mistake made on many games and some smaller engines.

Instead, make a private virtual function in the base class, and a flag that indicates it should be used. If you want to force derived classes to implement it, make it pure virtual:

public:
	bool PreUpdate()  { /* lots of other code in base class */ ... if (mNeedsPreUpdate)  { OnPreUpdate();  } ... }
	bool Update()     { /* lots of other code in base class */ ... if (mNeedsUpdate)     { OnUpdate();     } ... }
	bool PostUpdate() { /* lots of other code in base class */ ... if (mNeedsPostUpdate) { OnPostUpdate(); } ... }

protected:
	bool mNeedsPreUpdate;
	bool mNeedsUpdate;
	bool mNeedsPostUpdate;

private:
	virtual void OnPreUpdate()  { mNeedsPreUpdate = false; }
	virtual void OnUpdate()     { mNeedsUpdate = false; }
	virtual void OnPostUpdate() { mNeedsPostUpdate = false; }

	virtual void AllClassesMustWriteThis() = 0;

I'm guessing, from your previous comments, that this pattern may seem odd. Consider how much work it saves when you have 10,000 game objects in the world. The neat trick is that derived classes cannot call the private virtual functions, but they can override them. Similarly, a private pure virtual function means derived classes must override it, but they cannot call it.
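To illustrate both halves of that (a minimal self-contained sketch, names made up):

class Widget
{
public:
	Widget() : mNeedsUpdate(true) {}
	void Update() { if (mNeedsUpdate) { OnUpdate(); } }
protected:
	bool mNeedsUpdate;
private:
	virtual void OnUpdate() { mNeedsUpdate = false; } // default: turn the hook off
	virtual void MustImplement() = 0; // private pure virtual
};

class Button : public Widget
{
private:
	virtual void OnUpdate() // overriding Widget's private virtual is fine
	{
		// Widget::OnUpdate(); // error: 'Widget::OnUpdate' is private
	}
	virtual void MustImplement() {} // must be overridden, yet Button can't call Widget's version
};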

On the flip side, by making the destructor non-virtual and protected you make it so others cannot derive from the class.

Making the destructor private makes it so others can't derive from the class. Making the destructor protected is giving them explicit permission to do so.

Also following up on why public virtual is a problematic thing, imagine this:

class Base {
	...
	virtual void DoStuff() { /* Do assorted things */ }
	...
};

class Derived : public Base {
	...
	virtual void DoStuff() { /* Do different assorted things */ }
	...
};

Three months later you write this:

Base *p = SomeAccessorMethod();
assert(p != NULL);
if (p != NULL) {
	p->DoStuff();
}

And to your horror, DoStuff does NOT do what you told it to do. Perhaps it was meant to clear the screen to green, and some other programmer made it clear the screen to blue. So you dig through the code and find that DoStuff has been overridden by 97 different child classes, and half of them have bastardized the meaning so much that the call is nearly unusable. Half change the screen color to various choices, four will open a dialog box, and one of them even ends the game. ... Okay, that has been taken to absurdity, but hopefully the point is clear.

Provide concrete functions in your public base classes. You can still provide virtual behaviors for child classes to override when needed; doing so prevents the class from being misused too egregiously, and it allows you to modify the base behavior without rewriting all the derived classes.

On the flip side, by making the destructor non-virtual and protected you make it so others cannot derive from the class.

Making the destructor private makes it so others can't derive from the class. Making the destructor protected is giving them explicit permission to do so.

That's what I get for replying after midnight. With that kind of mistake, time for bed.

Also, sorry about not including more of the previous conversation in the split.

It is about a base class and derived class pair, where the original poster (not Servant) had built some base classes with non-virtual public destructors, and the question was about the reasoning behind the rules for moving to public virtual or non-public non-virtual, plus some other small goodies for that base class.

This post pretty much became me reasoning with myself, where I sort of end up agreeing with the linked article... but it would be nice to hear if I went wrong somewhere. :)


And to your horror, DoStuff does NOT do what you told it to do. Perhaps it was meant to clear the screen to green, and some other programmer made it clear the screen to blue. So you dig through the code and find that DoStuff has been overridden by 97 different child classes, and half of them have bastardized the meaning so much that the call is nearly unusable. Half change the screen color to various choices, four will open a dialog box, and one of them even ends the game. ... Okay, that has been taken to absurdity, but hopefully the point is clear.

This isn't specific to public virtual functions. If that function delegated the work to an internal class, you could have 97 different internal classes instead.

I'm not arguing against the 4 points, just that this problem isn't solved by them (though they do tend to lessen the expected severity).

Also, the private virtual idiom is exactly the same as PIMPL but written in a different (a bit nicer) way.

This idiom also removes pure virtual "interfaces" from C++, which may be good or bad depending on who you ask. When using such constructs I have seen performance problems like the ones frob mentioned, but I still use them for a lot of things as they're so easy to understand and use. If nothing else, I like that Intellisense doesn't list a bunch of private members on the auto-complete menu.

The difference is of course small, but say for example:


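// The first way: a pure virtual "interface".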
class Reader {
   public:
     virtual size_t readData(void *destination, size_t maxBytes) = 0;
};


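// The second way: a non-virtual public function delegating to a private virtual.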
class Reader {
   public:
     size_t readData(void *destination, size_t maxBytes) {
        return _readData(destination, maxBytes);
     }

   private:
      virtual size_t _readData(void *destination, size_t maxBytes) = 0;
};


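// The third way: a separate implementation class (pimpl-style).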
class ReaderImpl {
    friend class Reader;
    private:
      virtual size_t _readData(void *destination, size_t maxBytes) = 0;
};
class Reader {
   public:
     Reader(ReaderImpl *impl) : myImpl(impl) {}
     size_t readData(void *destination, size_t maxBytes) {
         return myImpl->_readData(destination, maxBytes);
     }
   private:
     ReaderImpl *myImpl;
};

The second way is a faster way to write the third way.

The first way... I understand the points of the article, but it's somewhat appealing to not have to implement any methods when writing an "interface".

In a sense I feel the "two jobs" from the article (http://www.gotw.ca/publications/mill18.htm) is somewhat arbitrarily chosen:

The problem is that "simultaneously" part, because each virtual function is doing two jobs: It's specifying interface because it's public and therefore directly part of the interface Widget presents to the rest of the world; and it's specifying implementation detail, namely the internally customizable behavior, because it's virtual and therefore provides a hook for derived classes to replace the base implementation of that function (if any).

The second version already has two jobs. It specifies the interface and how the internal implementation is referenced.

You could write it like this instead:


class ReaderInterface {
  public:
    virtual size_t readData(...) = 0;
};

class Reader : public ReaderInterface {
  public:
    virtual size_t readData(...) {
        return _readData(...);
    }

  private:
    virtual size_t _readData(...) = 0;
};

I would agree with setting the arbitrary cut-off at the previous point rather than at the example I just gave, though... but I don't see how that is necessarily always "better".

As C++ doesn't have an 'interface' keyword, I guess this might simply be a case where the language designers have chosen for us: C++ doesn't need and shouldn't have 'interfaces' in that sense. Therefore example 2 from above (matching the article) is the sweet spot and should always(?) be chosen.

I personally have a tendency to abuse virtual interfaces sometimes... which it might be time to change.

I have 3 questions regarding the virtual keyword. The code speaks for itself, I have inserted the 3 questions as comments:


#include <iostream>

class Base
{
public:
    Base() {}
    void doThing() { this->doThingImpl(); }
protected:
    ~Base() {}
private:
    virtual void doThingImpl() {}
};

class Derived : public Base
{
public:
    Derived() {}
    ~Derived() {}  // #1 Why does this have to be virtual if it is not to be inherited from?
private:
    virtual void doThingImpl() {} // #2 Why do people make derived member functions virtual?
};

class Derived2 : public Base
{
public:
    Derived2() {}
    virtual ~Derived2() {}
private:
    virtual void doThingImpl() final {} // #3 Being a final member function, should this now be virtual or not?
};

int main()
{
    Derived* test = new Derived();
    delete test; // warning: deleting object of polymorphic class type 'Derived' which has
                 // non-virtual destructor might cause undefined behaviour [-Wdelete-non-virtual-dtor]

    Derived2* test2 = new Derived2();
    delete test2;
}
"I would try to find halo source code by bungie best fps engine ever created, u see why call of duty loses speed due to its detail." -- GettingNifty

