template .h/.cpp cannot be separate files??


Recommended Posts

Good morning folks. I have a question about C++ templates. It's an area of C++ that I avoided for a long time, but having seen more and more examples of them on the web, I've decided to buckle down and learn how to use them.

There's a pretty good tutorial I found: http://www.cplusplus.com/doc/tutorial/tut5-1.html But one of the statements it makes is:

"The macro-like functionality of templates forces a restriction for multi-file projects: the implementation (definition) of a template class or function must be in the same file as the declaration. That means we cannot separate the interface into a separate header file, and we must include both interface and implementation in any file that uses the templates."

Is this really true? I KNOW that I have seen code where the .h file and the .cpp file for a templated class are separated. Was this an older standard than is used currently in modern compilers?

Thanks for your time!
Mike

The template itself must be fully contained (interface and implementation) in the file that defines it, so that it is visible wherever it's instantiated. A class that merely uses that template can still have its interface in a .h and its implementation in a .cpp.

One thing you can do is use what are known as "inline" files. These are header files that pretend to be source files. They have a lot of the same drawbacks as header files, but if you like to put your function bodies in a different file from your class definition, they'll work well for you.

Watch:


// MyClass.h
#ifndef MYCLASS_H
#define MYCLASS_H

template<typename T>
class MyClass
{
public:
    MyClass();
};

#include "MyClass.inl"
#endif


// MyClass.inl
template<typename T>
MyClass<T>::MyClass()
{
    // constructor body goes here
}

Definitions for templated functions, and for methods of templated classes, must be visible to the compiler (i.e. present in the translation unit) everywhere they are referenced. So it's best to just include the implementations in the header file where you declare them. Alternatively, some people make a separate .cpp file for the definitions and include it where necessary, but this is, IMO, a very bad way to do things.

Now, the reason it has to be in the same translation unit is that, because of the way templates are handled, no code is generated for them unless they are used. When you do use them, code is generated on the fly for that specific template parameter (or combination of parameters). So the function definitions need to be available to the compiler to generate that code from. It took me a while to understand templates to start with, but once I got that gem of information, it all became simple. And I realized why they were called templates [grin]

If that was not understandable, please tell me and I'll try to explain better, because I fear that explanation is rather badly written.

Ok, but check out the code at:

http://www.geometrictools.com/Intersection.html

All of the intersection classes defined here are templated, but the definitions and implementations are separated just like any normal class.

So are you supposed to #include the .cpp instead of the .h if you use a template laid out in this way?

Mike

There are a few technicalities.

A template must be contained in one translation unit, a limitation of compiler technology (and probably of the language specification). Header files do not comprise translation units; source files do (header files are not independently compiled). The thing about templates is that you typically provide the complete definition and then let the consumer code instantiate the template, at compile time, for an arbitrary type that conforms to the template's requirements. If you do not wish to do this, however, and instead provide only a fixed number of type instantiations (thus potentially lessening the value of your template to your consumer: imagine not being able to instantiate std::vector for your own types), then you can explicitly instantiate your template within the implementation unit. This is what the Geometric Tools Intersections library does.

Look at it this way: with a "traditional" template, the entire template is in a header file, and the compilation unit consists of the consumer code, the template definition from the header, and any other included headers. The compiler generates instantiations at compile time, and everything's hunky-dory. In the Intersections library templates, the template declaration is in a header, its definition is in an implementation file, and explicit instantiations are at the bottom of said implementation file. You cannot apply those templates to your own arbitrary types, or to types for which explicit instantiations were not built into the vendor-supplied library, without modifying and recompiling the original vendor source.

So, it's not the same. If your template is designed to provide genericity for an open set of types, then it all has to be placed in the header. If it simply reduces redundant coding for a fixed set of type dependencies, then you can break it into header/implementation, so long as you explicitly instantiate it in the implementation file.

Happy hacking.

Quote:
Original post by Oluseyi
There are a few technicalities.

A template must be contained in one translation unit, a limitation of compiler technology (and probably of the language specification).

Not part of the language specification. "Separate compilation" is possible through the export keyword as specified in the C++ standard. Of course, that's not a widely implemented part of the standard. And technically, a translation unit is more like a preprocessed source file, which incorporates the contents of the headers that it includes.

Quote:
Original post by Michael Kron
So are you supposed to #include the .cpp instead of the .h if you use a template laid out in this way?


#including .cpp files is a definite no-no IMO - you'll end up with that file's definitions in two or more separate translation units (one for the .cpp itself, one for each .cpp that #included it), leading to multiple definition errors.

Yet, a _lot_ of (arguably stupid or misguided) people choose to lay out their templates just like that, so yes, you're "supposed to" do just that.

I prefer the method used by my standard library: .hpp for C++ headers (these are #included by stub files like "iostream"), .cpp for C++ source files, and .tcc (or .tpp or similar) for template sources (which can be #included by a header or a source file, depending on whether you want the automatic-but-duplicating or manual-but-efficient options provided by most compilers).



If you're working with someone else's code, however, it may be simpler to just tolerate their wayward ways.

First of all.

HOLY FREAKING CRAP.


Second. I think I understand what you are saying Oluseyi. I was banging my head against the Geometric Tools code and I noticed those "explicit instantiations" for floats and doubles. Having never seen anything like that before, I was confused, but I imagined that it only allowed those types. Mind you, I have never seen syntax like this in my life, and I still don't really understand the flow of events that occurs when the user attempts to create an Intersector instance. Templates made sense when I thought of them as big macro wrappers around unknown types, but "explicit instantiation" boggles me.


Third. So if the template definition is not in the same file as the template declaration, the compiler will ASSUME you intend to explicitly instantiate it, and compile those instantiations on the spot? If you never had the explicit instantiations, would it just bark at you?


And lastly: if all they're doing is making code that performs similar operations on floats or doubles, why not use polymorphism? If they had structs for vectors, lines, etc. that inherit from a 'number' base class or interface (then subclassing 'NumberVec3' with 'FloatVec3', 'DoubleVec3'), wouldn't this be a lot more intuitive and easy to read? It just seems like they're using a tool that is supposed to be a really generic 'catchall' for a couple of cases. Maybe that's not the point, or maybe I'm just not used to templates enough.

By the way, thanks so much for all your help with this.
Mike

They're going for speed. Using base classes like that would be slower than using explicit template instantiations.

How come? Aren't classes/structs just a bunch of fields that index into a Vtable of functions anyway? How is using templated functions/classes with a few floats/doubles as datatypes any different?

Mike

No, not really. Classes and structs are implemented as sequential memory addresses that are interpreted in different ways. When you have a virtual function, the general implementation is to take one of those chunks of memory and interpret it as a pointer to a table of functions that can act on those memory addresses. In contrast, a non-virtual function does not need to go through the vtable in order to act on the memory. Thus using non-virtual functions saves you the time of the lookup in the vtable, saves you the space in the class/struct that you don't need for the vptr, and gives you better cache performance since you don't need to access the vtable. This is the same for any situation where you have unnecessary virtual functions, not just templates.

Quote:
Original post by Michael Kron
How come? Aren't classes/structs just a bunch of fields that index into a Vtable of functions anyway? How is using templated functions/classes with a few floats/doubles as datatypes any different?

It can be helpful to think of templates as compile-time polymorphism and traditional, virtual-based polymorphism as runtime polymorphism in C++. This helps explain the tradeoff in terms of size[1] and time/speed[2].

It is an open question, to a certain extent, whether the translation unit limitation for templates is a by-product of C++'s forward declaration requirement, itself a consequence of an anachronistic single-pass compilation approach.



1. Templates generate an actual class for each instantiation. This is why you can't stick a T<U> and a T<V> into a container together unless you define the container to be of T<X> where U and V derive from X.

2. Runtime polymorphism involves a small execution overhead of a virtual table lookup and pointer dereference in most implementations (note: this is not guaranteed; compiler implementers are free to implement the details of virtual functions as they please, and I have heard of some doing so as function objects).

Hmm. Clearly I'm more confused than I realized. If classes are just sequential blocks of memory, then does that mean that the functions for that class reside in this same block of memory? I.e., for every instance of class A, do I have separate copies of all of the functions for class A? Surely that can't be right. I was under the impression that each object maintained a reference back into a table of functions which represent the class methods. But you're saying that's only true for virtual functions? How are normal functions represented in memory then?

Mike

Functions are series of bytes located in the code segment of your executable image. When you call a non-virtual member function of a class, the compiler generates a call/jump to the address of the function in the code segment. Objects do not store pointers to functions, or the functions themselves, in their own storage. If you declare:

struct Point {
    int x;
    int y;
};

all it stores is the x and the y, no matter how many member functions Point has.

Ok, so there IS a jump into a code segment for member functions. How is that different than a virtual function? You're still jumping to code somewhere. Is it because your class code must maintain references to the virtual functions, which is just extra space? Is it because you have to jump once into the class code, and then again into the virtual function?

Mike

The call to the code is known at compile time for non-virtual functions. For virtual functions in order to get the address of the function to call you need to dereference the vptr to get the vtable and read the address from there.

So, by virtue of compiling a class, you are given hard locations in memory where you can find non-virtual functions when you wish to jump into them and execute their instructions.

But because virtual functions are set up at link time, you just have a pointer to a vTable, which gets filled in after linkage. Thereafter, you can dereference that vTable pointer, index into the function you desire, and use that as an address to jump to. This is an extra layer of indirection, and extra space for the pointer.

Have I got it right?

Close. Function locations for both non-virtual and virtual functions are determined at link time. That is how you can call non-virtual functions from source files where you don't have the definition of the function: the compiler emits a symbol saying "call this function", and the actual address gets patched in at link time. The difference is that for a virtual function you don't know which function you are going to call until runtime, since you have a pointer to something which could be of the pointer's type or of a derived type. So, because the compiler can't know at compile time which function to call, the generated code needs to dereference the vptr at runtime.

Quote:
Original post by Michael Kron

But because virtual functions are set up at link time, you just have a pointer to a vTable, which gets filled in after linkage. Thereafter, you can dereference that vTable pointer, index into the function you desire, and use that as an address to jump to. This is an extra layer of indirection, and extra space for the pointer.

Have I got it right?


The vTable gets filled when the class is instantiated (at run time, not link time). The vTable makes each instance of a class take up more space... pointers to classes are still the same size. You lose speed because the CPU has to look up the address of the virtual function from the vTable and call that, instead of just calling it directly because the linker hard-coded the address of the function.

Quote:
Original post by dyerseve
The vTable gets filled when the class is instantiated (at run time, not link time).

No, vtables are filled at link time. vptrs are assigned at runtime. Classes with virtual functions get a pointer to a vtable, not an actual copy of the vtable. (At least on every C++ compiler I've ever seen.)

Quote:
Original post by dyerseve
You lose speed because the CPU has to look up the address of the virtual function from the vTable and call that, instead of just calling it directly because the linker hard-coded the address of the function.


Of course, any alternative achieving the same effect will likely have a similar performance overhead, just in a different place, so you're not really "losing" it per se when it's used where appropriate. And this hit only occurs when the compiler can't be certain of the object's type at compile time. The performance of these two calls will be on par with each other:
type value;
value.virtual_function();
value.non_virtual_function();

SiCrane has already corrected you on your other misleading point.

Ahh, gotcha. At run time the pointer is filled in, because any object of superclass A might actually be subclass B or C, you don't know what to expect.

So each instance has to keep around a vTable pointer, and each virtual function call requires an extra jump. Does all this really add up to a big speed hit?

Mike

If you're interested, I wrote a (horribly ugly) virtual function implementation as a demonstration in this thread. It's probably quite difficult to follow (as it involves a lot of nasty casting), but you may be able to follow the basics (how each class has a static vtable, each object has a member vtable pointer, and functions are dynamically dispatched through the vtable).

Quote:
Original post by MaulingMonkey
The performance of these two calls will be on par with each other:

The performance should be identical too [wink]

Enigma
