Hiding private methods and variables in a class

Started by
13 comments, last by Hodgman 10 years, 6 months ago

I am working on a multiplatform rendering API with implementations for DirectX 9 and DirectX 11.

Requirements:

- efficiency: no virtual calls

- minimal code duplication: declare the public API once only

- compile-time selection of the implementation

Nice to have:

- exposing only the public API to client code, and not the private methods and member variables of the actual implementation.

I have been exploring different approaches (e.g. http://www.altdevblogaday.com/2011/06/27/platform-abstraction-with-cpp-templates/ , http://www.altdevblogaday.com/2011/02/01/the-virtual-and-no-virtual/) and then stumbled upon this solution which seems to fit my requirements pretty well:

http://www.crickettechnology.com/blog/?p=116

I quite like this solution and I am considering using it for other systems. I strongly suspect FMOD uses it too, as the FMOD API classes contain only public methods. This solution puts some constraints on the class usage (it seals it and allows manipulation through pointers only). Moreover, the actual .cpp implementation file feels a little hacky and less pretty with all the additional downcasts. I kinda wish C++ supported partial class declarations as part of the language, for cases like mine.

What is your opinion of this approach?

Thanks


It's called the pimpl idiom, it works, but then so do virtual functions.

The pimpl idiom isn't really going to help you with your quest to eradicate virtual functions though (unless you're intending to statically link the rendering API and build two binaries?). If you're intending to dynamically link the rendering lib, then you are going to need virtual functions (or an ungodly nightmare for the branch predictor in a different form, i.e. an if/else in every function). Virtual function calls into a rendering API are a trivial overhead compared to what the rendering API itself has to do. I fear you may be trying to optimise away a performance problem that simply doesn't exist.

Edit:

Just reread your post; I'd missed the bit about compile-time linking. If you're statically linking, then why the hell do you need the pimpl idiom? Just write two libs using classes in the normal fashion, and then link to either of the static libs. Alternatively, #ifdef D3D9 #else D3D11 #endif. It will give you the same result, will be more lightweight/efficient than pimpl, and will help keep you sane in the process. The only need to hide data in that way arises when you either need to reduce build times or need to minimise a class interface to simplify dynamic linking.
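For what it's worth, a rough sketch of that arrangement, with one shared header and one cpp per backend compiled into its own static lib (RenderDevice and the D3D9/D3D11 defines here are just placeholder names):

// RenderDevice.h -- the one shared declaration; the members differ per backend
#if defined( D3D9 )
  #include <d3d9.h>
#else
  #include <d3d11.h>
#endif

class RenderDevice
{
public:
  void Clear();
  void Present();
private:
#if defined( D3D9 )
  IDirect3DDevice9* m_device;
#else
  ID3D11DeviceContext* m_context;
#endif
};

// renderdevice_d3d9.cpp  -> compiled into the D3D9 static lib
// renderdevice_d3d11.cpp -> compiled into the D3D11 static lib
// The application then links whichever lib matches its build configuration.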

Not sure what FMOD does in this regard, but you could keep a portion of memory allocated just before the memory you allocate for the class, and put either some bookkeeping or the data you want private in there.
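A rough sketch of that idea, assuming you control the allocation (Sound, PrivateData and the layout are all invented for illustration, and alignment is hand-waved):

#include <cstdlib> // malloc/free
#include <new>     // placement new

// Hidden state, invisible to anything that only sees the Sound class below.
struct PrivateData
{
  int   refCount;
  void* apiHandle;
};

class Sound // the public API: methods only, no data members exposed
{
public:
  static Sound* Create()
  {
    // Over-allocate: the private block goes first, the public object right after it.
    char* block = (char*)std::malloc(sizeof(PrivateData) + sizeof(Sound));
    new (block) PrivateData{ 1, nullptr };
    return new (block + sizeof(PrivateData)) Sound;
  }

  void Release()
  {
    PrivateData* priv = &Priv();
    this->~Sound();
    priv->~PrivateData();
    std::free(priv); // priv is the start of the original allocation
  }

private:
  Sound() {}
  // Step back over the hidden block to reach the private data.
  PrivateData& Priv() { return *(PrivateData*)((char*)this - sizeof(PrivateData)); }
};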

It's called the pimpl idiom, it works, but then so do virtual functions.

Not quite. PIMPL is:
// header
class Implementation; // forward declaration; the real class is defined in the cpp
class Interface
{
public:
  Interface();
  void DoStuff();
private:
  Implementation* pimpl;
};
//cpp
Interface::Interface() : pimpl(new Implementation) {}
void Interface::DoStuff() { pimpl->DoStuff(); }

The article suggests:
class Interface
{
public:
  static Interface* Create();
  void DoStuff();
};
//cpp
Interface* Interface::Create() { return (Interface*)new Implementation; }
void Interface::DoStuff() { ((Implementation*)this)->DoStuff(); }
It's very similar, but it uses the 'this' pointer itself as the pimpl variable.

I make heavy use of this idiom in my engine, even for things that don't switch implementations -- just to keep header files free of non-client details!

I use a bunch of macros to hide the ugly casting, but it means I have to write one line of macro to describe each function.
e.g.
struct Normal // a regular C++ class
{
  Normal( int x ) { printf("New Normal %d\n", x); }
  void DoStuff()  { printf("Normal::DoStuff\n" ); }
};

struct Interface // one of my 'PIMPL'-ish classes
{
  Interface( int x );
  void DoStuff();
private:
  ~Interface();//forces people to use my eiNewInterface macro, instead of new
};

//cpp
struct Implementation // an implementation of the above interface
{
  Implementation( int x ) { printf("New Implementation %d\n", x); }
  void DoStuff()          { printf("Implementation::DoStuff\n" ); }
};
// magic macros to bind this to the interface
eiImplementInterface( Interface, Implementation );
eiInterfaceConstructor( Interface, (x), int x );
eiInterfaceFunction( void, Interface, DoStuff, () );

//usage 
Scope a( ... );//my allocator
Normal* x = eiNew(a, Normal)( 42 ); // create a normal class
Interface* y = eiNewInterface(a, Interface)( 42 );//create an implementation of an interface

x->DoStuff();
y->DoStuff();
Output:
New Normal 42
New Implementation 42
Normal::DoStuff
Implementation::DoStuff
After the compiler is done with this, there's no overhead. Both virtual and Pimpl cause a double-indirection in the final code :/

Thank you all for your opinions.

I was going to reply to RobTheBloke, but Hodgman beat me to it with a well-written clarification of the advantages of this approach compared to others. I like the suggestion of using macros to beautify the implementation code. The only thing I dislike is the use of a static Create function in the Interface class. I'd rather let the application take care of creating an actual implementation (based on #ifdefs) at a higher level; that would be the only code aware of the actual implementation, while the rest of the code would just see the interface API.
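Roughly what I have in mind (GfxDevice, CreateGfxDevice and the USE_* defines are just example names):

// GfxDevice.h -- all that the rest of the codebase ever sees
class GfxDevice
{
public:
  void Draw();
};
GfxDevice* CreateGfxDevice(); // free factory function instead of a static Create()

// CreateGfxDevice.cpp -- the only translation unit that knows the concrete types
#if defined( USE_DX9 )
  #include "GfxDeviceDX9_impl.h"
  GfxDevice* CreateGfxDevice() { return (GfxDevice*)new GfxDeviceDX9; }
#elif defined( USE_DX11 )
  #include "GfxDeviceDX11_impl.h"
  GfxDevice* CreateGfxDevice() { return (GfxDevice*)new GfxDeviceDX11; }
#else
  #error no renderer selected
#endif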

I actually don't like any of the code in Hodgman's example (sorry! I usually wholeheartedly agree with your posts :) ). It seems completely needless, solving a non-existent problem. The definition of the implementation must still be included in order for the calls to be forwarded; you could put the call forwarding inside the cpp source file, but then you've just added a level of indirection that cannot be inlined, and the header must still be included in the project in order for the cpp to compile. So all you've added is extra compile time with no 'true' benefit.

What are you trying to gain? Simply to hide the non-public member variables from IntelliSense? Is it really worth the extra complexity? It would make more sense if you had public functions you didn't want exposed (which is solved by inheritance), or wanted different implementations (which is solved by polymorphism, aka virtual functions).

Perhaps I'm missing something here? :)

In the example given within the linked blog entry, he shows a diamond hierarchy as an example of where inheritance goes wrong. But I would disagree with the example and say he should have rethought the hierarchy if that became necessary.

If you have different, non-virtual implementations selected at compile time, I would suggest using a simple typedef, e.g.:

typedef MyImplementation Implementation;

You can then switch the implementation based on compile-time defines:


#if defined( DX9 )
  typedef DX9Implementation Implementation;
#elif defined( DX11 )
  typedef DX11Implementation Implementation;
#else
   #error type undefined
#endif

Or use virtual methods in the front end and non-virtual in the back end. Virtual functions are not going to be a noticeable performance hit unless you use them everywhere.

Either way, doing the previous things just to hide member variables and private functions seems counterproductive, to my mind at least :)

n!

edit: Please excuse any spurious 'i' characters; my keyboard seems to have decided to throw them around unexpectedly as I type, which is infuriating!

nfactorial: this approach does not add a level of indirection as it simply casts the "this" pointer from the interface to its implementation; that should have zero overhead. Also, contrary to what you said, this approach improves compilation times and code decoupling: whenever the class declaration of an implementation changes, only a single cpp file has to recompile instead of all the code that includes the interface header file.

The naive typedef-based approach you suggest has two problems: first, all client code now depends directly on the DirectX API; second, the public API is duplicated in both the DX9Implementation and DX11Implementation class declarations. You could perhaps put that in a separate file and include it as part of the class declaration:

class DX9Implementation
{
public:
  #include "device.h"
private:
};

but that's uglier in my opinion. Also it does not solve the first point.

Hope that's more clear.

The definition of the implementation must still be included in order for the calls to be forwarded; you could put the call forwarding inside the cpp source file.

The bit after "//cpp" is supposed to be in the cpp file, hidden from the user. The user just sees a completely clean header, with zero implementation details.

I usually define the Implementation class either in a cpp (completely hidden from the world), or in a different header called foo_impl.h if multiple implementation cpp files need to know about it. These implementation cpp files can include foo_impl.h (which might in turn include API-specific headers), while the user cpp files include just the clean foo.h.
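In other words, a layout roughly like this (names invented, with D3D11 just as an example):

// foo.h -- the only header user code includes; no API-specific types
class GpuDevice
{
public:
  void Submit();
};

// foo_impl.h -- included only by the implementation cpp files
#include "foo.h"
#include <d3d11.h>
class GpuDeviceD3D11
{
public:
  void Submit() { /* the actual D3D11 work, using context */ }
  ID3D11DeviceContext* context;
};

// foo_d3d11.cpp -- forwards the public API onto the hidden implementation
#include "foo_impl.h"
void GpuDevice::Submit() { ((GpuDeviceD3D11*)this)->Submit(); }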

So all you've added is extra compile time with no 'true' benefit.

Simplifying headers can greatly reduce C++ compile times.

I don't recommend doing this on everything. E.g. something like a templated container can't be hidden behind an interface (due to the templates), and would likely benefit from inlining.

With something like a GPU-device, or an OS-window, or a physics-scene though, there's a lot of API-specific types used in the implementation that the user doesn't need to know about. In these cases, I hide the implementation using this technique to keep the headers clean. Usually the loss of inlining in these cases isn't a big deal either, and in any case, you can get that kind of inlining back by enabling Link Time Code Generation.

this approach does not add a level of indirection as it simply casts the "this" pointer from the interface to its implementation; that should have zero overhead.

Hiya, it cannot have zero overhead if it is not inlined. If the code is in the cpp file (as you say) then it cannot be inlined by the compiler, as the compiler doesn't have the code to inline (unless we're talking about link-time optimisations here, which I think are fairly compiler-dependent to rely on). That's why I thought this implementation wouldn't hide anything.

Of course, all you've added is the call to the function that invokes your actual implementation, which is not a huge addition (and yes, smaller than a virtual function call), but it is the extra level of indirection I was referring to :)

And yes, simplifying headers is something to aim for, but this code does it at the cost of making the code clunkier to use and understand, which I would weight more heavily. If we were really worried about compile times, then I'd suggest looking at a unity-build include structure, which still provides the fastest C++ compile times I've seen (though I don't like the features it breaks).
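(For anyone unfamiliar, a unity build just means something like this, with the file names invented:)

// unity.cpp -- the only file handed to the compiler; everything else is
// pulled in as one big translation unit, so shared headers are parsed once.
#include "renderer.cpp"
#include "mesh.cpp"
#include "texture.cpp"
#include "sound.cpp"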

I wasn't recommending the typedef btw, just showing that I don't see any gain in the examples over simply typedef'ing it. And I still don't unfortunately.

Anyway, in the scheme of things I guess if it works great for you then that's fine. There's just a lot I don't like, but we don't all have to agree on it :)

n!

this approach does not add a level of indirection as it simply casts the "this" pointer from the interface to its implementation; that should have zero overhead.

Assuming you don't do something silly like use multiple or virtual inheritance.
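Right: the trick relies on the interface pointer and the implementation object having the exact same address, and multiple (or virtual) inheritance is precisely the sort of thing that breaks that pointer identity. A contrived sketch (types invented for illustration):

#include <cstdio>

struct BaseA { int a; };
struct BaseB { int b; };
struct Multi : BaseA, BaseB { int c; };

int main()
{
  Multi m;
  BaseB* asB = &m; // the implicit upcast adjusts the pointer past BaseA
  std::printf("%p vs %p\n", (void*)&m, (void*)asB); // two different addresses
  // The ((Implementation*)this) trick assumes the interface pointer and the
  // implementation object share one address, so any pointer adjustment like
  // this inside the implementation class breaks that assumption.
  return 0;
}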

This topic is closed to new replies.
