
Private virtual functions, public virtual destructors, and protected non-virtual destructors



#1 frob   Moderators   -  Reputation: 22226


Posted 19 March 2014 - 10:16 PM

EDIT: Splitting this topic off to a new topic because it is too disruptive where it is. Earlier discussion here

 


"Public and virtual" -> Unnecessary if not inherited, necessary if inherited.
"Protected and non-virtual" -> What protection does that give?
 
Something seems to be going over my head here.  Would you mind elaborating?

(Perhaps my question + your explanation should be split into its own thread in the General Programming subforum if the answer is technical)

 

 

Herb Sutter (the chair of the C++ standards committee) has described it well several times over the years. The reasons to use virtual functions, and the proper place for them, are worth revisiting frequently.

 

Most experienced C++ programmers should be familiar with all four of these detailed guidelines.

 

Item 4 on the list covers this specific case. Seeing all the protected functions means the class is meant for inheritance, so it seems public and virtual is the rule here.

 

 

 

Item 1 on the list is especially painful, and I try to educate game programmers when I see it.  Back on one of my DS titles (remember: 66MHz, or about 1 million cycles per frame) one of the leads decided to give some of the major classes public virtual interfaces. Since he was fairly new to C++, coming from a C background, and since the code sometimes did rely on polymorphic behavior, it was hard to convince him otherwise. It seemed to be the C++ way of doing things: public virtual functions showed up in a few libraries (not the C++ standard library, of course!), so he was convinced to use them.

 

A few weeks down the road we were looking at profiles of the rapidly growing game.  A large percentage (we couldn't yet be certain how much) of our CPU time was being consumed by virtual dispatch in final-optimized builds. Evidence in hand, it was easy to test and verify. In my test I refactored only two or three of the simplest ones (the ones with no variation in the leaf classes) and recovered several percent of each frame's compute cycles.

 

It wasn't too difficult to change once he saw the problem. We removed the virtual functions from the public interface of several major base classes, sometimes creating a member variable that was set during construction, other times creating functions for the bits that needed to be dynamically overridden, and we powered through the major offenders in the code base in about a day. IIRC we recovered about 8% of our CPU time with that simple change.

 

Overuse of virtual functions can quickly ruin game performance. An obvious symptom (if you are actually concerned about it) is a large number of zero-byte or four-byte functions in the symbol table. Usually these are virtual functions that do nothing; in many cases it is possible to completely eliminate the function call overhead.
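To make the symptom concrete, here is a rough sketch of the shape of code being described (the class and function names are illustrative, borrowed from the Pre/Update/Post pattern discussed later in this thread):

// Every object pays three virtual calls per frame, even though most of
// the overrides are empty "do nothing" functions.
class GameObject
{
public:
    virtual void PreUpdate()  {}   // empty in nearly every derived class
    virtual void Update()     {}   // overridden in roughly half of them
    virtual void PostUpdate() {}   // empty in nearly every derived class
    virtual ~GameObject()     {}
};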

 

 

As for the complications, you can call "delete pFoo" when the object has a private destructor. What you cannot do is call the destructor directly, which can be an issue with classes like shared_ptr or custom memory managers. The solution there is to provide a deleter function, which is an optional parameter on shared_ptr and many custom memory managers; by default it simply invokes Foo::~Foo, but it can be customized with whatever you need.
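For illustration, a minimal sketch of the shared_ptr-with-deleter approach might look like this (the Create factory function and the lambda deleter are my own illustration, not from any particular codebase):

#include <memory>

class Foo
{
public:
    static std::shared_ptr<Foo> Create()
    {
        // The lambda is defined inside a member of Foo, so it may access the
        // private destructor; outside code still cannot call delete directly.
        return std::shared_ptr<Foo>(new Foo, [](Foo* p) { delete p; });
    }
private:
    Foo() {}
    ~Foo() {}
};

// Usage: std::shared_ptr<Foo> f = Foo::Create(); // destroyed via the deleter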


Edited by frob, 20 March 2014 - 12:13 AM.
Adding notes at the beginning about splitting the topic.




#2 Servant of the Lord   Crossbones+   -  Reputation: 20296


Posted 19 March 2014 - 11:26 PM

Most experienced C++ programmers should be familiar with all four of these detailed guidelines.

Thank you for the link, that helped explain it a lot.
 

Item 4 on the list covers this specific case. Seeing all the protected functions means the class is meant for inheritance, therefore it seems public and virtual is the rule.

Ah, I glanced at the article and didn't see any virtual functions (and Ctrl+F'd for 'abstract', 'inheritance', 'derived', etc... and didn't find any), so I thought you were saying that concrete classes not intended to be inherited should have virtual destructors anyway.

What you and the article seem to be saying is: (correct me if I'm wrong)
A) Concrete classes should almost never be derived from.
B) Interfaces shouldn't be virtual, only implementations.
C) Abstract classes should always either have a protected destructor, or a public virtual destructor (which is what you initially said, and what the article says).

I'm still a bit confused about (C) though - I get why destructors need to be virtual if they are to be derived, but I don't get why (and when) they should be protected if non-virtual.

Part of my confusion was that I never realized you could delete objects with private/protected destructors even from outside the class, and that private virtual functions can still be overridden (just not called) by derived classes.
Two interesting twists on class member accessibility that I have to learn better!

So is it good practice for concrete classes to enforce their concreteness by declaring the class 'final'?
 
And how does making the destructor non-virtual protected actually help avoid the crashes here:
Base* b = new Derived;
delete b; // Base::~Base() had better be virtual!
...if you can still call delete on classes (such as base classes) with protected/private destructors?

What benefit does a protected destructor actually give (other than the inability to manually call Base::~Base())?
 

Overuse of virtual functions can quickly ruin game performance. An obvious symptom (if you are actually concerned about it) are a large number of zero-byte or four-byte functions in the symbol table. Usually these are virtual functions that do nothing; in many cases it is possible to completely eliminate the function call overhead.

It seems that's more of an issue of, "Let's make everything virtual, just in case we later want to override it", which is definitely a poor way of doing things! :)
 
I haven't learned the other lessons yet: No deriving from concretes, and non-virtual public interfaces with virtual (only when needed) implementations. I'll have to bear that in mind - I dislike changing coding styles mid-project, but it makes sense to me (or at least is starting to), so I'll try to apply this in my own code.

Edited by Servant of the Lord, 19 March 2014 - 11:38 PM.



#3 Hodgman   Moderators   -  Reputation: 30964


Posted 20 March 2014 - 12:26 AM

Public and virtual, or protected and non-virtual, those are the options for c++ destructors.

Guideline #4: A base class destructor should be either public and virtual, or protected and nonvirtual

The emphasized bit ("base class") is very important there.

If the class isn't meant to ever be inherited from, then public and nonvirtual destructors are the norm.
Polymorphism / inheritance is rare; therefore, public-virtual and protected-nonvirtual destructors should also be rare...
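As a quick illustration of the three cases (my own sketch, not from the article):

class Value   { public:    ~Value() {} };            // not designed for inheritance: public, non-virtual (the common case)
class IWidget { public:    virtual ~IWidget() {} };  // base class deleted through base pointers: public, virtual
class Mixin   { protected: ~Mixin() {} };            // base class never deleted through base pointers: protected, non-virtual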
 

As for the complications, you can call "delete pFoo" when the object has a private destructor. What you cannot do is call the destructor directly

Not on my compiler:

class Foo { ~Foo() {} };
void test()
{
	Foo* f = new Foo;
	delete f; // generates error: Foo::~Foo' : cannot access private member declared in class 'Foo'
}

And how does making the destructor non-virtual protected actually help avoid the crashes here:

Base* b = new Derived;
delete b; // Base::~Base() had better be virtual!
...if you can still call delete on classes (such as base classes) with protected/private destructors?

 

You can't. It avoids them at compile time, forcing you not to use delete on a base pointer.

class Base { protected: ~Base() {} };
class Derived : public Base { public: ~Derived() {} };
void test()
{
	Base* b = new Derived;
	delete b; // generates error: 'Base::~Base' : cannot access protected member declared in class 'Base'
	delete (Derived*)b; // works
}

Edited by Hodgman, 20 March 2014 - 12:44 AM.


#4 frob   Moderators   -  Reputation: 22226


Posted 20 March 2014 - 12:37 AM


What you and the article seem to be saying is: (correct me if I'm wrong)
A) Concrete classes should almost never be derived from.
B) Interfaces shouldn't be virtual, only implementations.
C) Abstract classes should always either have a protected destructor, or a public virtual destructor (which is what you initially said, and what the article says).

 

B is backwards. Base classes should be at least partially virtual; some people argue they should be pure virtual (aka abstract).

C is not about abstract classes, but about all base classes.

 

 

The reason for the protected non-virtual or public virtual destructor is normally pretty clear: Base *p = new Derived(); delete p;  If the base destructor is non-virtual, the wrong destructor will be called. Hence the guideline that if a class is designed for inheritance then the destructor should be virtual. On the flip side, by making the destructor non-virtual and protected you make it so others cannot derive from the class.
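A sketch of that failure mode (illustrative):

class Base    { public: ~Base() {} };   // public but non-virtual
class Derived : public Base { public: ~Derived() { /* release resources */ } };

void test()
{
    Base* p = new Derived;
    delete p;   // undefined behaviour: in practice ~Derived() is usually never run; 'virtual ~Base()' fixes it
}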

 

 

 


 Overuse of virtual functions can quickly ruin game performance. An obvious symptom (if you are actually concerned about it) is a large number of zero-byte or four-byte functions in the symbol table. Usually these are virtual functions that do nothing; in many cases it is possible to completely eliminate the function call overhead.

It seems that's more of an issue of, "Let's make everything virtual, just in case we later want to override it", which is definitely a poor way of doing things! 

No.  We really were using polymorphic behavior.  In that case it was a pattern of PreUpdate(), Update() and PostUpdate(). Nearly every class left the Pre and Post functions empty, and about half provided Update functions. This meant every frame we had about 20,000 empty function calls that could be eliminated.

 

I've seen the same mistake made on many games and some smaller engines.

 

Instead, make a private virtual function in the base class, and a flag that indicates it should be used. If you want to force derived classes to implement it, make it pure virtual.

 

public:
  bool PreUpdate()  { /* lots of other code in base class */ ... if (mNeedsPreUpdate)  { OnPreUpdate();  } ... }
  bool Update()     { /* lots of other code in base class */ ... if (mNeedsUpdate)     { OnUpdate();     } ... }
  bool PostUpdate() { /* lots of other code in base class */ ... if (mNeedsPostUpdate) { OnPostUpdate(); } ... }

protected:
  bool mNeedsPreUpdate;
  bool mNeedsUpdate;
  bool mNeedsPostUpdate;

private:
  virtual void OnPreUpdate()  { mNeedsPreUpdate = false; }
  virtual void OnUpdate()     { mNeedsUpdate = false; }
  virtual void OnPostUpdate() { mNeedsPostUpdate = false; }

  virtual void AllClassesMustWriteThis() = 0;

 

I'm guessing, given your previous comments, that pattern may seem odd.  Consider how much work it saves when you have 10,000 game objects in the world.  The neat trick is that derived classes cannot call the private virtual functions, but they can override them. Similarly, private pure virtual functions mean the derived classes must override the function, but they cannot call it.
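As a sketch of how a derived class uses that pattern (assuming the base-class members above belong to a class called GameObject; the turret details are made up):

class Turret : public GameObject
{
public:
    Turret() { mNeedsUpdate = true; }   // opt in to the per-frame OnUpdate() call
private:
    virtual void OnUpdate()                { /* turn to face the current target */ }
    virtual void AllClassesMustWriteThis() { /* required: the base declares it pure virtual */ }
};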




#5 Hodgman   Moderators   -  Reputation: 30964


Posted 20 March 2014 - 12:42 AM

On the flip side, by making the destructor non-virtual and protected you make it so others cannot derive from the class.

Making the destructor private makes it so others can't derive from the class. Making the destructor protected is giving them explicit permission to do so.
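A small sketch of the difference (illustrative):

class NotABase { private:   ~NotABase() {} };
class Base     { protected: ~Base() {} };

class A : public NotABase {};
class B : public Base {};

void test()
{
    B b;   // fine: B's implicit destructor may call the protected ~Base()
    A a;   // error: A's destructor is deleted because ~NotABase() is inaccessible
}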



#6 frob   Moderators   -  Reputation: 22226


Posted 20 March 2014 - 12:47 AM

Also following up on why public virtual is a problematic thing, imagine this:

 

class Base {

...

   virtual void DoStuff() { /* Do assorted things */ }

...

};

 

class Derived : public Base {

...

  virtual void DoStuff() { /* Do different assorted things */ }

...

};

 

Three months later you write this:

 

Base *p = SomeAccessorMethod();

assert (p != NULL);

if (p != NULL) {

  p->DoStuff(); 

}

 

And to your horror, DoStuff does NOT do what you told it to do.  Perhaps it was meant to clear the screen to green, and some other programmer made it clear the screen to blue.  So you dig through the code and find that DoStuff has been overridden by 97 different child classes, and half of them have bastardized the meaning so much that the call is nearly unusable.  Half change the screen color to various choices, four will open a dialog box, and one of them even ends the game... Okay, that has been taken to absurdity, but hopefully the point is clear.

 

Give your base classes concrete public functions. You can provide virtual behaviors for child classes to override when needed; doing so prevents the class from being misused too egregiously and lets you modify the base behavior without rewriting all the derived classes.




#7 frob   Moderators   -  Reputation: 22226


Posted 20 March 2014 - 12:48 AM

 

On the flip side, by making the destructor non-virtual and protected you make it so others cannot derive from the class.

Making the destructor private makes it so others can't derive from the class. Making the destructor protected is giving them explicit permission to do so.

 

 

That's what I get for replying after midnight. With that kind of mistake, time for bed. 




#8 frob   Moderators   -  Reputation: 22226


Posted 20 March 2014 - 12:59 AM

Also, sorry about not including more of the previous conversation in the split.

 

It is about a base class and derived class pair, where the original poster (not Servant) had built some base classes with non-virtual public destructors, and the question was about the reasoning behind the rules for moving to public virtual or non-public non-virtual, along with some other small goodies for that base class.




#9 Erik Rufelt   Crossbones+   -  Reputation: 3517


Posted 20 March 2014 - 05:57 AM

This post pretty much became me reasoning with myself, where I sort of end up agreeing with the linked article... but it would be nice to hear if I went wrong somewhere. :)

 


And to your horror, DoStuff does NOT do what you told it to do. Perhaps it was meant to clear the screen to green, and some other programmer made it clear the screen to blue. So you dig through the code and find that DoStuff has been overridden by 97 different child classes, and half of them have bastardized the meaning so much that the call is nearly unusable. Half change the screen color to various choices, four will open a dialog box, and one of them even ends the game... Okay, that has been taken to absurdity, but hopefully the point is clear.

 

This isn't specific to public virtual functions. If the function delegates the work to an internal class, there could be 97 different internal classes instead.

I'm not arguing against the 4 points, just that this problem isn't solved by them (though they do tend to lessen the expected severity).

 

Also, the private virtual idiom is exactly the same as PIMPL but written in a different (a bit nicer) way.

 

This idiom also removes pure virtual "interfaces" from C++, which may be good or bad depending on who you ask. When using such constructs I have seen performance problems like frob mentioned, but I still use them for a lot of things as they're so easy to understand and use. If nothing else I like that Intellisense doesn't list a bunch of private members on the auto-complete menu.

The difference is of course small, but say for example:

class Reader {
   public:
     virtual size_t readData(void *destination, size_t maxBytes) = 0;
};


class Reader {
   public:
     size_t readData(void *destination, size_t maxBytes) {
        return _readData(destination, maxBytes);
     }

   private:
      virtual size_t _readData(void *destination, size_t maxBytes) = 0;
};


class ReaderImpl {
    friend class Reader;
    private:
      virtual size_t _readData(void *destination, size_t maxBytes) = 0;
};
class Reader {
   public:
     Reader(ReaderImpl *impl) : myImpl(impl) {}
     size_t readData(void *destination, size_t maxBytes) {
         return myImpl->_readData(destination, maxBytes);
     }
   private:
     ReaderImpl *myImpl;
};

The second way is a faster way to write the third way.

The first way... I understand the points of the article... but it's somewhat appealing to not have to implement any methods when writing an "interface".

 

In a sense I feel the "two jobs" from the article (http://www.gotw.ca/publications/mill18.htm) is somewhat arbitrarily chosen:

 

The problem is that "simultaneously" part, because each virtual function is doing two jobs: It's specifying interface because it's public and therefore directly part of the interface Widget presents to the rest of the world; and it's specifying implementation detail, namely the internally customizable behavior, because it's virtual and therefore provides a hook for derived classes to replace the base implementation of that function (if any).

 

The second version already has two jobs. It specifies the interface and how the internal implementation is referenced.

You could write it like this instead:

class ReaderInterface {
  public:
    virtual size_t readData(...) = 0;
};

class Reader : public ReaderInterface {
   public:
    virtual size_t readData(...) {
       return _readData(...);
   }

   private:
     virtual size_t _readData(...) = 0;
};

I would agree with setting the arbitrary choice at the previous point rather than following the example I just gave though... but I don't see how that is necessarily always "better".

 

As C++ doesn't have an 'interface' keyword, I guess this might simply be a case where the language designers have chosen for us: C++ doesn't need and shouldn't have 'interfaces' in that sense. Therefore example 2 from above (matching the article) is the sweet spot and should always(?) be chosen.

 

 

I personally have a tendency to abuse virtual interfaces sometimes... which it might be time to change...



#10 TheComet   Crossbones+   -  Reputation: 1619


Posted 20 March 2014 - 07:02 AM

I have 3 questions regarding the virtual keyword. The code speaks for itself; I have inserted the 3 questions as comments:

#include <iostream>

class Base
{
public:
    Base() {}
    void doThing() { this->doThingImpl(); }
protected:
    ~Base() {}
private:
    virtual void doThingImpl() {}
};

class Derived : public Base
{
public:
    Derived() {}
    ~Derived() {}  // #1 Why does this have to be virtual if it is not to be inherited from?
private:
    virtual void doThingImpl() {} // #2 Why do people make derived member functions virtual?
};

class Derived2 : public Base
{
public:
    Derived2() {}
    virtual ~Derived2() {}
private:
    virtual void doThingImpl() final {} // #3 Being a final member function, should this now be virtual or not?
};

int main()
{
    Derived* test = new Derived();
    delete test; // warning: deleting object of polymorphic class type 'Derived' which has
                 // non-virtual destructor might cause undefined behaviour [-Wdelete-non-virtual-dtor]

    Derived2* test2 = new Derived2();
    delete test2;
}


#11 King Mir   Members   -  Reputation: 2031


Posted 20 March 2014 - 08:35 AM

 

I have 3 questions regarding the virtual keyword. The code speaks for itself; I have inserted the 3 questions as comments:


 

1: It doesn't, but there's nothing in the code saying that it can't be inherited from. The warning protects you from mistakes, but it can have false positives. 

 

2: It is a virtual function, whether marked so or not, and in C++03 you're making that explicit. In C++11 you can mark the function as "override" for better documentation of intent and error protection. See also below:

 

3: Final implies virtual, so it is redundant information. However, having virtual there makes it easy to use search functions to find virtual methods. So it comes down to style.
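A small sketch of those C++11 keywords (my example, not from the code above):

class Base
{
    virtual void doThingImpl();
};

class Derived : public Base
{
    void doThingImpl() override;   // 'virtual' is implicit; 'override' is checked by the compiler
};

class Derived2 : public Base
{
    void doThingImpl() final;      // 'final' also implies virtual; overriding it further is an error
};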



#12 frob   Moderators   -  Reputation: 22226


Posted 20 March 2014 - 09:06 AM

Also...

 

#1) The function signature became virtual in the base class. Anything derived from Base has the entry in the vtable, so the base class will always look the function up even if you mark it as final. Writing the keyword 'virtual' in front of it at this point is optional; it retains the virtual-ness from the base class. Explicitly marking it as virtual can be useful to other programmers so they don't need to look up the details elsewhere.

 

You can now remove virtual-ness, since C++11 introduced the "final" keyword to stop making things virtual below that level, but you probably don't want to do that to a destructor unless you want partially-destroyed objects. The benefit of losing virtual-ness is that you can drop the extra vtable lookup and branch directly to the function, instead of dereferencing a pointer and branching there. If you want to stop the inheritance chain, mark the class as final instead of the destructor.
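For example (illustrative, using the turret names from later in this post), stopping further derivation at the class level rather than fiddling with the destructor:

class LaserTurret final : public SelfAimingTurret { /* ... */ };
// class PlasmaTurret : public LaserTurret {};   // error: LaserTurret is marked 'final'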

 

#2) You make them virtual so you can modify the behavior in a child class.

 

One very frequent example of this is given in the OnPreUpdate, OnUpdate, and OnPostUpdate example above. Imagine you have a bunch of game objects, and one of them is a turret. You probably want an update function to turn to face the target, so you override OnUpdate() and put your behavior in there. If you look through common frameworks you will see a lot of adjustable behavior with OnXxx-style virtual functions.

 

If a derived class is also intended to be derived from (for example, a base "SelfAimingTurret" class designed as the base for "LaserTurret", "MissileTurret", and "SlimeTurret"), you can add new virtual functions that are used to specialize those further-derived classes.

 

#3) Final is new in C++11. When you use it the function continues to be in the vtable but no longer needs to be looked up in some cases.

 

The function is still virtual in some senses. The class and every derived class will still have a vtable entry for it, because the base functionality requires it; marking the function final does not remove that entry, and it continues to point at that function. This is required so that when code inside Base calls doThingImpl() it can still look the function up.

 

The function stops being virtual in other senses. That class and its derived classes can call the function directly instead of looking up the address in the vtable. So if you call doThingImpl() inside Derived2, the function does not need to be looked up; the compiler can branch to it directly.


Edited by frob, 20 March 2014 - 09:10 AM.



#13 Servant of the Lord   Crossbones+   -  Reputation: 20296


Posted 20 March 2014 - 01:15 PM

B) Interfaces shouldn't be virtual, only implementations.

B is backwards. Base classes should be at least partially virtual, some people argue they should be pure virtual (aka abstract).

That article you linked to was advocating never having the public interface virtual (except the destructor), only the protected/private interface. The private interface can still be pure-virtual. "Guideline #1: Prefer to make interfaces nonvirtual, using Template Method."
 

The reason for the protected non-virtual and public virtual destructor is normally pretty clear.  Base *p = new Derived(); delete p;  If the base destructor is non-virtual the wrong destructor will be called. Hence the guideline that if a class is designed for inheritance then the destructor should be virtual.

Yep, I already do that part.
It's this part:

On the flip side, by making the destructor non-virtual and protected you make it so others cannot derive from the class.

...that was confusing me, because protected doesn't stop inheritance.
 

Making the destructor private makes it so others can't derive from the class. Making the destructor protected is giving them explicit permission to do so.


But what good is a class with a private destructor? If it's private, you can't call delete on it - so it's no good when using it directly. If it's private, you can't derive from it - so it's no good as a base class. A class with a private destructor seems unusable. What am I missing?

Both Frob and Herb Sutter's article are saying, "Guideline #4: A base class destructor should be either public and virtual, or protected and nonvirtual.".
The need for virtual destructors I understand fine. What benefit does protected access give to non-virtual destructors?


#14 Hodgman   Moderators   -  Reputation: 30964


Posted 20 March 2014 - 06:50 PM

@Erik Rufelt
I think the "two jobs" issue only applies to concrete base classes that can be extended by overriding virtual functions, i.e. implementation inheritance.

Abstract base classes (i.e. interface inheritance) don't have this problem.

The equivalent of the interface/implements keyword in C++ is to make an abstract base class, and inherit from it using public-virtual inheritance (or just public inheritance if you trust you're not going to get into any multiple-inheritance induced traps). Any other kind of inheritance is equivalent to the 'extends' keyword, and should be pretty rare.

@Servant - private destructors are great for all sorts of alternative interface/implementation schemes. E.g. with COM, you delete an object by calling a virtual Release function. Internally, it is capable of calling delete this, while at the same time you can be sure that clients are not allowed to.
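A rough sketch of that Release-style arrangement (illustrative, not real COM code):

class Resource
{
public:
    void AddRef()  { ++mRefs; }
    void Release() { if (--mRefs == 0) delete this; }  // only the class itself may delete
private:
    ~Resource() {}      // clients cannot write 'delete pResource;'
    int mRefs = 1;      // starts owned by the creator
};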

I also see a common implementation-hiding pattern where there is only ever one derived type (such as in cross-platform libs)
struct Foo : NoCreate { int Get(); };
struct Bar : NoCreate { Foo* Stuff(); };
//cpp
struct FooImp { int i; };
struct BarImp { FooImp f; };
int Foo::Get() { return ((FooImp*)this)->i; }
Foo* Bar::Stuff() { return (Foo*)&((BarImp*)this)->f; }
Where "NoCreate" had private constructors/destructors/assignment.
This makes the implementation ugly, but keeps the interface very clean.


@TheComet
Your base class destructor would normally be virtual in that case, so it's trying to be helpful and warn you. Since you're using new/delete on the same type, it's safe to ignore/disable that warning.



