functions in .h

Started by
9 comments, last by Aardvajk 13 years, 2 months ago
I wish to keep code portable between Java and C++.

For each C++ class, I plan to keep the function definitions inside its .h file. Thus the respective .cpp file would usually be empty.

How slow is this compared to function definitions in the .cpp?

I compile with both MSVC and gcc from the command line. I don't know if they behave differently here.


I know MSVC has pre-compiled headers. I always found it annoying to have to include the extra stdafx.h, so I usually turn the feature off. Am I missing out on compilation time I would have saved?

Thanks
I'm not sure about the optimizing that MSVC does, but you may be getting code bloat by putting function definitions in the headers. That is, the function code itself may be inserted into your program everywhere the function call is made. In addition, with pre-compiled headers turned off, you're compiling the function code in every file in which you include that header. In general, if you want something inlined, use the inline specifier.
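For illustration, a minimal sketch of the inline rule (Twice and Widget are made-up names, not anything from the OP's code):

```cpp
// Hypothetical example. A free function defined in a header must be marked
// inline, or including the header from two .cpp files breaks the
// one-definition rule at link time. Member functions defined inside the
// class body are implicitly inline already.
inline int Twice(int x) { return x * 2; }    // explicit inline

struct Widget
{
    int value;
    int Doubled() const { return Twice(value); }  // implicitly inline
};
```

Whether the compiler actually inlines either call is still up to it; the keyword mainly keeps the linker happy when the header is included in several translation units.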

In general, you're missing the intent of header files. Their primary use is to provide declarations for functions which may be complex, functions which may need to be revised often during debugging, or functions whose implementations require information from other headers ( <string>, <vector>, etc. ) that wouldn't otherwise need to be included.

With regard to pre-compiled headers, the "annoying" inclusion of stdafx.h takes maybe 4 seconds to type in. If your program is complex, with mixtures of headers in many files, your compilation time can get quite long without it. However, you don't have to use it if you don't want to. There are some libraries (some template libraries, for example) that recommend or require that you disable pre-compilation of headers.

In your overall method, consider the example of a class A which doesn't rely on <vector>. It needs to include a call to a class B function, for example classB::LoadInformation(). If the implementation (but not the declaration) of classB::LoadInformation() requires <vector>, <fstream>, <iomanip>, etc., etc., you'll end up including a lot of information just to compile the class A code which has nothing at all to do with class A.
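A hypothetical sketch of what that looks like (ClassB and LoadInformation are just the illustrative names from above), written as the header and source file concatenated; the heavy includes stay in the .cpp, so files that only include classB.h never see them:

```cpp
// classB.h -- declaration only; no <vector> or <fstream> needed here
class ClassB
{
public:
    ClassB() : m_count(0) {}
    void LoadInformation(const char *path);
    unsigned Count() const { return m_count; }
private:
    unsigned m_count;
};

// classB.cpp -- the implementation-only includes live here, invisible
// to any file that merely includes classB.h
#include <vector>
#include <fstream>

void ClassB::LoadInformation(const char *path)
{
    std::vector<double> values;   // implementation detail, hidden from callers
    std::ifstream in(path);
    double v;
    while (in >> v)
        values.push_back(v);
    m_count = static_cast<unsigned>(values.size());
}
```

Class A's translation unit can now call LoadInformation() after including only classB.h, without ever pulling in <vector> or <fstream>.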

Please don't PM me with questions. Post them in the forums for everyone's benefit, and I can embarrass myself publicly.

You don't forget how to play when you grow old; you grow old when you forget how to play.

I'm pretty certain it won't have any impact on performance...inline definitions are a hint to the compiler, but it's free to ignore them, and it probably will when it suits it.

But the "proper" way to organize things is in fact to have (long) function definitions in the .cpp files.
I don't see how this is going to help keep code portable between C++ and Java.

I don't see how this is going to help keep code portable between C++ and Java.


This.


C++ and Java are very different languages. You will not be able to port code between them easily, because C++ has a fundamentally different storage philosophy than Java (value versus reference semantics, no garbage collection, deterministic destruction, RAII, etc. etc.).

Porting code requires changing the implementation to suit the underlying language. This means much, much more than just copying and pasting between a .cpp and a .java file.

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

Depends on the nature of your application and the API it uses.

There are parts of code that are very language neutral. I.e., they don't depend on platform-specific API imports or includes, nor do they allocate memory.

There are significant syntax similarities between the languages that help keep code portable and easy to compare. I.e., being able to intelligibly diff the C++ and Java versions of the same class is useful, especially if one must support multiple languages.




[quote name='rip-off' timestamp='1297959372' post='4775432']
I don't see how this is going to help keep code portable between C++ and Java.


This.


C++ and Java are very different languages. You will not be able to port code between them easily, because C++ has a fundamentally different storage philosophy than Java (value versus reference semantics, no garbage collection, deterministic destruction, RAII, etc. etc.).

Porting code requires changing the implementation to suit the underlying language. This means much, much more than just copying and pasting between a .cpp and a .java file.
[/quote]
Don't stick implementation details in your headers if you can avoid it. That includes function definitions.

There are a lot of reasons not to do this, but perhaps the biggest is this: when developing software you will frequently find yourself in a position where you've created a contract and must iterate on the implementation of said contract. Constantly changing your function definitions will require all files that include that header to be rebuilt. However, if you've properly segregated your definitions from your declarations, then adjusting the implementation (provided it has no effect on the public declaration) will only require the one source file that changed to be rebuilt.

There are exceptions to this rule, such as when dealing with templates, but in the general non-template case headers should be restricted to declarations only.
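To illustrate the template exception: the compiler needs a template's full definition at every point of instantiation, so a definition like this (a made-up example) has to stay in the header:

```cpp
// maxof.h -- a template's body must be visible wherever it is
// instantiated, so unlike an ordinary function it cannot be moved
// off to a single .cpp file for arbitrary instantiations.
template <typename T>
T MaxOf(const T &a, const T &b)
{
    return (a < b) ? b : a;
}
```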

In time the project grows, the ignorance of its devs it shows, with many a convoluted function, it plunges into deep compunction, the price of failure is high, Washu's mirth is nigh.

Thanks. So combining both of your viewpoints, the essentially inlined function definitions in the header file may or may not contribute to bloat?

So stdafx.h essentially preprocesses the dozens of .h files a .cpp file includes into a single large .h file somewhere? Or is it a precompilation process that compiles all the symbols/function declarations/function definitions into a header version of an object file?

And if just one of the dozen .h files is changed, I assume the precompiled header feature is smart enough not to have to reprocess/recompile all the other .h files?


I'm fairly certain that if anything in a PCH is changed, the whole PCH is recompiled. You would certainly need to recompile everything after the changed file as well, since they could be dependent, but the entire point of a PCH is to include headers which should not be changing frequently (stdio, windows, DirectX, boost, etc.).


As for inlining, Herb Sutter says it better than I.
-- gekko
PCH files tend to store the state of the compiler at a specific point (i.e. after processing a bunch of header files). That's why the include for stdafx.h has to be the first thing in your .cpp file. It means that when compiling several .cpp files it doesn't have to repeat all that identical work for each one. If anything included via stdafx.h changes, it has to recompile everything, so you should put in there only system includes, and project includes which are either included everywhere or rarely changed.
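As a sketch of that rule (the commented-out project header is made up), a typical stdafx.h might look like:

```cpp
// stdafx.h -- hypothetical example; only big, rarely-changing headers
// belong here, because editing anything it includes forces the whole
// PCH, and every .cpp using it, to be rebuilt.
#pragma once

#include <cstdio>       // standard library headers: effectively frozen
#include <string>
#include <vector>
//#include <windows.h>  // platform headers, if the project uses them
//#include "Config.h"   // project headers only if they almost never change
```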

In addition to the other problems mentioned above, not putting functions in .cpp files is also likely to lead to dependency issues in C++. Code like the contrived example I just put together can't be compiled in C++ without separating the class definitions from the function definitions.


// A.h
#include <cstdio> // for printf

class B; // This forward declaration would be enough if the function bodies were defined outside the classes

class A
{
private: B *m_B;
public: void foo() { m_B->bar(); } // Error: B is an incomplete type here
public: void bar() { printf("A"); }
};


// B.h

class A; // Likewise, this would suffice with out-of-class definitions

class B
{
private: A *m_A;
public: void foo() { m_A->bar(); }
public: void bar() { printf("B"); }
};
