# Inheritance


## Recommended Posts

Hi, I am trying to decide the "best" route for design and am second-guessing my reason for not using inheritance. (Please forgive the code.)
[SOURCE]
int WINAPI WinMain(...)
{
    CWinApp WinApp(...);
    if (!WinApp.CreateAppWindow(...))
        return -1;

    // ...

    CEngine Engine(...);
    Engine.Initialize();

    // ...
}
[/SOURCE]
In the above snippet, I am not using inheritance. I am simply creating a window and then invoking the "Engine".
[SOURCE]
class CWinApp
{
// Do some windows application stuff here.
};

class CEngine : public CWinApp
{
// Do some OpenGL or DirectX graphics stuff here.
};
[/SOURCE]
In the above example, I am using inheritance. I'm also very comfortable with the concept of inheritance, but for some reason I'm fighting with myself on which approach to take. There are advantages as well as disadvantages to either design. I'd like some feedback on which design is "appropriate", or perhaps some questions that I can or should be asking myself when approaching this problem. Thanks, Sabrina.

##### Share on other sites
I like to minimize inheriting from classes that have significant implementations and then building more implementation on top of them.

If two classes need to talk to each other, a pattern I like using is:

[SOURCE]
// pure-virtual interface classes:
class Interface1 { /*...*/ };
class Interface2 { /*...*/ };

// implementations of the above interfaces:
class Implementation1 :
    public virtual Interface1,
    private virtual Interface2   // can be public, if you like
{ /*...*/ };

class Implementation2 :
    public virtual Interface2,
    private virtual Interface1
{ /*...*/ };

class Final :
    public virtual Implementation1,
    public virtual Implementation2
{ };
[/SOURCE]

And now each class can see and access each other's interface, but can't get caught up in the other classes' implementation details.

The problem with monolithic classes is that they encourage code non-locality. Things get tangled up and tied in knots.

##### Share on other sites
Actually, there's nothing wrong with aggregation. Inheritance is a useful tool, but sometimes simple containment is a much better solution (and private inheritance can often be reduced to aggregation).
[SOURCE]
// abstraction layer for apps
class IApp { /*...*/ };
// implementation
class CWinApp : public IApp { /*...*/ };

// abstraction interface
class IEngine { /*...*/ };
class CEngine : public IEngine { /*...*/ };
[/SOURCE]

The advantage of this approach is that classes can aggregate the interfaces without depending on the actual implementation.

##### Share on other sites
Inheritance is the tool used to model the "Is A" relationship between objects.

I wouldn't say that an engine is an application, but rather that an application might have an engine. "Has A" is modeled with aggregation, i.e. a (member) variable.

So in this case I would stick to your first example.
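To make the Has-A version concrete, here is a minimal sketch (all names are invented for illustration, and the integer "window handle" is a stand-in for a real HWND/HDC): the application owns the engine as a plain member and hands it exactly the data it needs, with no inheritance involved.

```cpp
#include <cassert>

// Hypothetical minimal types, invented for illustration only.
class CEngine {
public:
    // The engine receives just the data it needs (here, a stand-in window handle).
    explicit CEngine(int windowHandle) : windowHandle_(windowHandle) {}
    int handle() const { return windowHandle_; }
private:
    int windowHandle_;
};

class CWinApp {
public:
    CWinApp() : engine_(createWindow()) {}   // the app *has an* engine
    CEngine& engine() { return engine_; }
private:
    static int createWindow() { return 42; } // stand-in for real window creation
    CEngine engine_;                         // Has-A: a plain member variable
};
```

Because the engine is constructed from data the application passes in, nothing in CEngine needs to know that CWinApp even exists.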

##### Share on other sites

Hi guys,

Thanks for the info. I certainly do understand the concept of the "Is A" and "Has A" relationships between objects, as Anonymous Poster pointed out; however, it seems to me that sometimes it can get pretty 'sticky'. Although the purpose of inheritance is to model relationships between objects, I can't help but think that inheritance is also used just to make a design simpler. Here's what I mean: if some function in CEngine needs a window handle (i.e. an "HDC"), I have to access the "GetHDC()" function via a global pointer. But if I used an inheritance design, I wouldn't have to worry about that. Make sense?

Follow-up question: in the second snippet example, is there a 'term' for what I am doing? darookie mentioned "aggregation"; is that the proper term?

Lastly, darookie mentioned an "Interface" class. I understand how to implement one, and I also understand the concept of a "pure class"/"pure methods", but I have not understood WHY you would want to have one. I only know one reason, and it's never really been a good one. Can you guys clear up the "why" and perhaps the advantage(s) of having one?

Thanks again guys.

Sabrina.

##### Share on other sites
Quote:
 Original post by SMcNeil
Lastly, darookie mentioned an "Interface" class. I understand how to implement one, and I also understand the concept of a "pure class"/"pure methods", but I have not understood WHY you would want to have one. I only know one reason, and it's never really been a good one. Can you guys clear up the "why" and perhaps the advantage(s) of having one?

I'll start with this question, since I think that'll explain most things.
In object oriented design (OOD), there are five basic principles (in no particular order):

• The open-closed principle (OCP), which states that each class should be closed for modification and open to extension.

• The Liskov Substitution Principle (LSP), declaring that every function that takes a reference to a base class must be able to be passed a derived class without knowing it.

• The Dependency Inversion Principle (DIP), which says that high-level modules should not depend on low-level modules; both should instead depend solely on abstractions. It also says that abstractions should be agnostic to details and that details should depend on abstractions.

• The Interface Segregation Principle (ISP), which basically says that objects should not depend on interfaces they don't use, e.g. one interface per client type instead of a bloated "do-it-all" interface.

• Granularity and Packaging, which is actually a combination of rules for packaging classes together, like avoiding cyclic dependencies.

Now the reason for using interfaces is both the DIP and the ISP. When a client needs to access the window system, you provide a reference to IWindowSystem. Likewise, a client that needs access to, say, the object database aggregated (read: contained) by CEngine receives an IEngine reference.

Now you have not only satisfied the DIP, you have also reduced the dependencies by separating the interfaces according to the ISP. In practice this means that from now on you are able to change CEngine however you like, without needing to recompile CEngineClient, as long as you don't touch IEngine and stick to the rules [smile]. This reduces compile time and will help you reduce dependencies.

Now for the private inheritance part. Private inheritance will, in most cases, simply violate the LSP: if a function takes an ISomeInterface reference and CSomeClass privately inherits from ISomeInterface, you still cannot pass a CSomeClass, even though it implements ISomeInterface. If you hide the Is-A relationship that way, you may as well simply aggregate the object, using object composition to model a Has-A relationship. This will satisfy the LSP and clearly show what you actually want to achieve.
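A small sketch of that LSP point, with names invented for illustration: the publicly-inheriting class can be substituted where the interface is expected, while the privately-inheriting one cannot, even though it implements the same methods.

```cpp
#include <cassert>
#include <string>

// Invented names, purely to illustrate the paragraph above.
class ISomeInterface {
public:
    virtual ~ISomeInterface() = default;
    virtual std::string name() const = 0;
};

// Public inheritance: the Is-A is visible, so substitution works.
class CPublicly : public ISomeInterface {
public:
    std::string name() const override { return "publicly"; }
};

// Private inheritance: the Is-A is hidden from outsiders.
class CPrivately : private ISomeInterface {
public:
    std::string name() const override { return "privately"; }
};

std::string describe(const ISomeInterface& obj) { return obj.name(); }

// describe(CPublicly{});   // fine
// describe(CPrivately{});  // does NOT compile: inaccessible base class
```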

Pat.

PS: Please correct me if I mixed something up here (which is likely since it's kinda late over here and I just got back from running appr. 10km - excuses, excuses, I know [lol]).

##### Share on other sites
In plain English...

To understand any one small bit of code, you have to understand -- to some extent -- the context that code exists in. Abstraction and interfaces are attempts to reduce the size of each bit of code's context.

To understand the impact of the state of any one chunk of data, you have to understand -- to some extent -- every bit of code that modifies and reads that chunk of data.

Abstraction and interfaces shield data from being modified willy-nilly, and can enforce contracts about what a chunk of data means and how it can be modified.

So the pattern of 'abstract interface' does two good things:
1> They reduce the amount of effort to understand code that depends on the interface.
2> They reduce the amount of effort to understand data that the interface wraps.

Together, they allow larger and more complex programs to be put together without making human programmers' brains explode.

Now, you can often make do without understanding all of the dependencies of a piece of code or a piece of data. But the cost is often random bugs (as an assumption made in one spot is violated somewhere else) and, more importantly, hard-to-find bugs (because to understand how and why the data and code got into a certain state, you have to unroll a ballooning tree of dependencies).

For a physical analogy -- this is why government food standards & company trademarks are good.

You could have no food standards and no trademarks. In that case, you'd have to examine each piece of food, determine who grew it and where it was shipped, examine the lead content, and determine whether any infectious diseases are likely present in it. Or you could just trust food blindly, and likely end up dead.

Trademarks and food standards mean, however, that in a developed nation you can simply buy food and it is very unlikely to be immediately deadly. The food-quality problem has been abstracted away: you don't have to care about the details of how your food is shipped/produced/stored, because "buy it at the supermarket and eat it" works and isn't very likely to kill you immediately.

##### Share on other sites

Hi,

Thanks for the links, darookie; lots of info. I will have to study the docs some more, since there's a lot to absorb!

Thanks NotAYakk for the info also. I think what you're saying is that interface classes "force a standard" among a team of programmers. I had this in mind before asking, but wanted to know if there was any other reason for doing it. Here is something I've seen that doesn't make much sense:

[SOURCE]
class Isomeclass
{
    // ... interface/abstract class
};

class myClass : public Isomeclass
{
    // ...
};

class myOtherclass : public Isomeclass
{
    // ...
};

shared_ptr<Isomeclass> m_something;
[/SOURCE]

In the example above, I have an abstract class called "Isomeclass" and two classes that inherit from "Isomeclass". How do I know which class ("myClass" or "myOtherclass") is being called through "shared_ptr<Isomeclass> m_something"? The only explanation I've heard is that "the compiler uses a 'v-table' and is smart enough to know what you're talking about."

Sabrina

##### Share on other sites
Quote:
 Original post by SMcNeil
In the example above, I have an abstract class called "Isomeclass" and two classes that inherit from "Isomeclass". How do I know which class ("myClass" or "myOtherclass") is being called through "shared_ptr<Isomeclass> m_something"? The only explanation I've heard is that "the compiler uses a 'v-table' and is smart enough to know what you're talking about."

Ah - the mysteries of C++ [smile]. The important thing to note about that code snippet is that it shouldn't matter which of the classes is actually being used.
An interface is a kind of contract between those who use the services provided by the interface (the "clients") and the programmer who creates the class that implements the interface. The rationale is that the implementor is free to choose how exactly the interface is implemented, while the client treats the inner workings as a black box that is none of his/her business.
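To answer the "how do I know which class is being called" question directly, here is a sketch reusing the names from the snippet (the factory function is invented for illustration): the declaration `shared_ptr<Isomeclass> m_something;` calls nothing at all. Which implementation runs is fixed at the moment you construct an object and store it in the pointer; the virtual dispatch (the "v-table") then routes each call to that object's override.

```cpp
#include <cassert>
#include <memory>
#include <string>

// Hypothetical stand-ins for Isomeclass, myClass and myOtherclass.
class Isomeclass {
public:
    virtual ~Isomeclass() = default;
    virtual std::string whoAmI() const = 0;
};

class myClass : public Isomeclass {
public:
    std::string whoAmI() const override { return "myClass"; }
};

class myOtherclass : public Isomeclass {
public:
    std::string whoAmI() const override { return "myOtherclass"; }
};

// The pointer type decides nothing by itself; the object you construct
// and store in it determines which override runs at call time.
std::shared_ptr<Isomeclass> makeSomething(bool other) {
    if (other)
        return std::make_shared<myOtherclass>();
    return std::make_shared<myClass>();
}
```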

Once it matters to the client whether "Isomeclass" is implemented by myClass or by myOtherclass, you have done something wrong in your design. Let's be a little more specific here. Suppose we have a system that controls a display device.
A primitive interface could look like this:
[SOURCE]
class IDisplay {
public:
    virtual ~IDisplay() { }
    virtual void clear() = 0;
    virtual void printCharacter( char ) = 0;
    virtual void setCursorPosition( int, int ) = 0;
    virtual void setForegroundColour( Colour const & ) = 0;
    virtual void setBackgroundColour( Colour const & ) = 0;
    virtual void getCursorPosition( int &, int & ) const = 0;
    virtual void getDimensions( int &, int & ) const = 0;
};
[/SOURCE]

Now let's implement the interface for different devices:
[SOURCE]
// some primitive ANSI terminal (untested - might not be working at all :)
class AnsiTerminalDisplay : public IDisplay {
    static std::string const EscapeSequence;
    int dimensionX;
    int dimensionY;
    int col, row;

    std::string ToAnsiColour( Colour const &, bool foreground ) const;

public:
    AnsiTerminalDisplay( int dimensionX = 80, int dimensionY = 40 ) :
        dimensionX( dimensionX ), dimensionY( dimensionY ), col( 1 ), row( 1 )
    { }

    virtual void clear() {
        std::cout << EscapeSequence << "J";
    }

    virtual void printCharacter( char character ) {
        std::cout << character;
    }

    virtual void setCursorPosition( int col, int row ) {
        assert( col > 0 && col <= dimensionX );
        assert( row > 0 && row <= dimensionY );
        this->row = row;
        this->col = col;
        std::cout << EscapeSequence << row << ";" << col << "f";
    }

    virtual void setForegroundColour( Colour const & colour ) {
        std::cout << EscapeSequence << ToAnsiColour( colour, true );
    }

    virtual void setBackgroundColour( Colour const & colour ) {
        std::cout << EscapeSequence << ToAnsiColour( colour, false );
    }

    virtual void getCursorPosition( int & col, int & row ) const {
        col = this->col;
        row = this->row;
    }

    virtual void getDimensions( int & dimensionX, int & dimensionY ) const {
        dimensionX = this->dimensionX;
        dimensionY = this->dimensionY;
    }
};

std::string const AnsiTerminalDisplay::EscapeSequence = "\033[";

// this class uses some touch-screen device
class MyTouchScreenDisplay : public IDisplay {
    ITouchScreenDevice & device;

public:
    MyTouchScreenDisplay( ITouchScreenDevice & device ) : device( device ) {
    }

    virtual void clear() {
        device.clear();
    }

    virtual void printCharacter( char character ) {
        device.plot( character );
    }

    virtual void setCursorPosition( int col, int row ) {
        // suppose the device is addressed by means of pixels
        device.moveTo( col * device.getCharWidth(), row * device.getCharHeight() );
    }

    virtual void setForegroundColour( Colour const & colour ) {
        device.setSecondaryColour( colour );
    }

    virtual void setBackgroundColour( Colour const & colour ) {
        device.setPrimaryColour( colour );
    }

    virtual void getCursorPosition( int & col, int & row ) const {
        // suppose the device is addressed by means of pixels
        col = device.getPositionX() / device.getCharWidth();
        row = device.getPositionY() / device.getCharHeight();
    }

    virtual void getDimensions( int & width, int & height ) const {
        // suppose the device is addressed by means of pixels
        width = device.getDisplayWidth() / device.getCharWidth();
        height = device.getDisplayHeight() / device.getCharHeight();
    }
};
[/SOURCE]

The million-dollar question is: should any client that uses IDisplay care whether it actually outputs to an ANSI terminal or to some touch-screen device?
Right! It shouldn't matter, because these are implementation details; the business logic that uses IDisplay should work on both devices and never care about the black box that performs the grunt work beneath that layer of abstraction.

Now, while this example looks over-simplified and artificial, it sufficiently illustrates the intention (at least I hope so [wink]). Observe how both constructors take different, implementation-specific arguments and how MyTouchScreenDisplay uses yet another level of abstraction (the ITouchScreenDevice) to perform its work.
The real fun starts if we were to add yet another display device - say, one that instead of plotting stuff redirects the commands to some remote interface for debugging. If IDisplay clients had to know about the underlying implementation, we'd have to refactor the whole thing again, which would be both time-consuming and error-prone.

I hope that makes some sense; with time you'll stumble across many such issues and things will get clearer. Good design is not founded on theory alone; it also needs experience and a good deal of intuition (which is why I'm not particularly good at it [lol]).

Cheers,
Pat.

##### Share on other sites

Thanks. There's a lot of info there, esp. for my level of experience. I'm still trying to get my head around some of the topics you've mentioned. I guess I'm somewhat lost as to what you're saying, but I'll keep reading the docs in your link and what you stated in your posts. Hopefully it will sink in. :)

I would imagine that in an ideal world this would be something to strive for, but from my preliminary reading, it is something that is extremely hard to achieve, or even impractical, in some projects. I'd have to say that the ideas of OCP, LSP, DIP, and ISP discussed in the links seem (trying to find an appropriate word here) very, very "nit-picky", to the point of being obsessive-compulsive. (There is one word that comes to mind, but I'm trying to avoid using it.)
I can, however, appreciate the careful consideration of all components that will be introduced into a project at development time, but certainly one can get carried away nit-picking at every single detail of one's design. No doubt fights have abruptly broken out in meeting rooms where pens, note-pads, staplers, "Post-its" and sharp utensils have been thrown over this very idea of dissecting every single component of a project.

Are the topics of the OCP, LSP, DIP, and ISP docs all 'academic', or are these ideas actually considered and seriously implemented in the development life cycle?

Sabrina

##### Share on other sites
Quote:
 Original post by SMcNeil
Are the topics of the OCP, LSP, DIP, and ISP docs all 'academic', or are these ideas actually considered and seriously implemented in the development life cycle?

Let's look at them:
# The open-closed principle (OCP), which states that each class should be closed for modification and open to extension.

Your goal when writing a class is often to produce a "working" class that you can use later. So it is a goal to have a working, bug-free class. If you have a working, bug-free class and you change it, every bit of code that used that class now has its behaviour changed. This is often very dangerous.

# The Liskov Substitution Principle (LSP), declaring that every function that takes a reference to a base class must be able to be passed a derived class without knowing it.

Most definitely this needs to be true. If you inherit from a class, you had better be able to be passed as that class, because you will be passed as that class by some git at some point, and if you aren't capable of pretending to be that class, some really annoying bugs will happen.

If you need your parent's implementation, but you don't support the interfaces, inherit privately.

# The Dependency Inversion Principle (DIP), which says that high-level modules should not depend on low-level modules; both should instead depend solely on abstractions. It also says that abstractions should be agnostic to details and that details should depend on abstractions.

You want to follow the above if you ever hope to change your low-level implementations. If your high-level modules depend on low-level details, then whenever the low-level details change, your high-level modules have to change as well; in other words, you have to re-architect your program whenever your low-level details change on you!
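A small sketch of the inversion (ILogger and StringLogger are invented names for illustration): the high-level report routine depends only on the abstraction, so a low-level sink can be swapped without touching it.

```cpp
#include <cassert>
#include <string>

// Invented names for illustration.
class ILogger {
public:
    virtual ~ILogger() = default;
    virtual void write(const std::string& line) = 0;
};

// One low-level detail; others (file, network, console) could be added freely.
class StringLogger : public ILogger {
public:
    void write(const std::string& line) override { buffer += line; }
    std::string buffer;
};

// High-level module: knows only the abstraction, never the concrete sink.
void runReport(ILogger& log) {
    log.write("report done");
}
```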

# The Interface Segregation Principle (ISP), which basically says that objects should not depend on interfaces they don't use, e.g. one interface per client type instead of a bloated "do-it-all" interface.

The fewer interfaces a bit of code depends on, the easier it is to understand that bit of code. Every interface you have access to increases the amount of "state" that the code exists within.

# Granularity and Packaging, which is actually a combination of rules for packaging classes together, like avoiding cyclic dependencies.

Don't know this one well enough to comment on it.

Like all rules, you will break the above rules. And following them has a cost: it is easier to just toss code together willy-nilly.

But the above rules are designed to make it easier to change something, to find and fix bugs, and in general to "maintain" code. Even during a medium-scale project, you spend a large amount of time maintaining previously written code, possibly just while you are writing other code that uses it; the amount of time code spends "written and in use" is longer than the time it spends "being written".

##### Share on other sites
The "five basic principles" as I've heard them don't mention "Granularity and Packaging", but instead a fifth acronym - SRP, the Single Responsibility Principle. Which I suppose amounts to more or less the same thing.

Proper design is difficult. But you do need to be a certain amount of - certain word :) - just to get "close enough". And there *are* real benefits to be reaped. The principles are really about designing interfaces and designing *to* interfaces, i.e. creating and using abstractions.

OCP: Make an interface. Make sure the interface is complete enough to allow people to do what they want through the interface. Making proper use of the interface should not require any change to what's behind.

LSP: Making proper use of the interface should not require any *knowledge of* what's behind. All things which implement an interface should be treatable as "thing-which-implements-this-interface", without extra mental load. Make the interface represent an abstraction.

SRP: Narrow the interface. Make sure the abstraction presented by the interface is logically coherent. But beyond that, model at a fine-grained level: responsibilities, rather than conceptual "things". Connection and transmission, not "the modem". (Getting the second part of this right is probably the hardest thing to do out of all of this, or at least the least often done, out of laziness.)

DIP: Use interface layers to isolate things that communicate with each other. Communicate with an interface, not an object. (In cases where only a single implementation exists, this is rarely done in the real world, even if the principles say it should be)

ISP: If an object has several capabilities, it has several interfaces, not a single interface. (This is required to honour SRP when modelling real-world objects, and I have difficulty thinking of it as a separate principle as a result.)
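The ISP point above can be sketched as follows (invented names, echoing the "connection and transmission, not 'the modem'" idea): one object implements two narrow interfaces, and each client takes only the one it actually uses.

```cpp
#include <cassert>
#include <string>

// Two narrow, single-responsibility interfaces.
class IConnector {
public:
    virtual ~IConnector() = default;
    virtual void connect() = 0;
};

class ITransmitter {
public:
    virtual ~ITransmitter() = default;
    virtual std::string send(const std::string& data) = 0;
};

// One object, several capabilities, several interfaces.
class Modem : public IConnector, public ITransmitter {
public:
    void connect() override { connected_ = true; }
    std::string send(const std::string& data) override {
        return connected_ ? "sent:" + data : "offline";
    }
private:
    bool connected_ = false;
};

// This client sees only ITransmitter; connect() is invisible to it.
std::string transmit(ITransmitter& t, const std::string& data) {
    return t.send(data);
}
```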

The idea seems to be that you write some bit of code once and never have to worry about it again; you just use it through its interface. But of course that's the exact opposite of what happens: to get good code, you have to change stuff behind the interfaces all the time, through the process of refactoring. But with the interfaces, things are better *organized*, and properly designed interfaces are more likely to be *stable* interfaces, meaning you change the implementations much more often than the interfaces themselves. (Of course, if analyzing things from the perspective of SRP etc. shows that you *need* an interface change, do it now rather than later.) It's not that you really expect not to have to change something; it's that changes are isolated from each other so that you don't have to think about *other* required changes while making the current one - because ideally, there won't be any.

In particular, in the case where there is no polymorphism, people often don't properly address the DIP because it contradicts YAGNI. Sucks, doesn't it?

Note that C++ has *special considerations* in this regard: the only construct that lets you specify the interface of a class (its public member functions) requires you to specify at least part of the implementation (the private data members) at the same time. Creating a proper "interface" in C++ therefore involves patterns like Bridge or PImpl. Note that free functions don't have this problem; you can put global data upon which they depend into the implementation file while just declaring them in the header. And since you can model 'member functions' by passing a "this pointer" as the first parameter, it's quite possible to do "good OO" in C style. Ultimately, object orientation isn't about classes at all! Classes are simply a means towards the end of defining the interfaces of types and creating objects with particular types.
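A hedged PImpl sketch of that point (Widget is an invented name; in a real project the Impl definition and the member-function bodies below the marker would live in widget.cpp): the header-visible class exposes only the interface, while the private data hides behind an opaque pointer, so clients never recompile when the data members change.

```cpp
#include <cassert>
#include <memory>
#include <string>

// Hypothetical PImpl sketch, names invented for illustration.
class Widget {
public:
    Widget();
    ~Widget();
    std::string label() const;
private:
    struct Impl;                   // opaque: clients never see the data members
    std::unique_ptr<Impl> impl_;
};

// --- would normally be in widget.cpp ---
struct Widget::Impl {
    std::string label = "hidden";  // private data, invisible to clients
};

Widget::Widget() : impl_(std::make_unique<Impl>()) {}
Widget::~Widget() = default;       // defined where Impl is complete
std::string Widget::label() const { return impl_->label; }
```

Note that the destructor must be defined where Impl is a complete type, which is why it is only declared in the class body.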

But yeah. Such is software engineering. You don't know the future. You're paid to *think*, and make good educated guesses. Future-proofing is often a good idea, but also often creates huge amounts of useless boilerplate.