Does adding a method force dependent objects to be recompiled?



Some of my classes trigger lots of recompiles when changed. In order to speed up the change-test-refactor cycle (i.e. unit testing), I've started breaking some classes into smaller classes so that I can reduce dependencies and dilute the number of includes per source file. Even though this means a full recompile will probably take longer, incremental builds should become faster. One way to achieve this is the Pimpl idiom, but doing it for many small classes feels like overkill (and lots of boilerplate code).

I've read that adding virtual methods, adding the first non-default constructor, or changing/reordering the member variables of a class will cause objects dependent on that class to be fully recompiled as well. But I couldn't find anything about simply adding methods (mostly protected, but occasionally public). Some tests seem to indicate that they don't, but I'd like some pointers to confirm whether this is reliable or only an illusion. Does anyone have any hints?

To keep things simple, assume it is guaranteed that each header and source file covers a single class.


Sometimes there are also many more dependencies than you might think when including headers that already include other headers. For example:

//types.h
#include <platform.h>

//header1.h
#include <types.h>

//header2.h
#include <types.h>
#include <header1.h>


In header2, the compiler includes types first and then header1, which also includes types. So when types or platform changes, both header1 and header2 have to be recompiled, even though header2 only needed to include header1 at this point.

It's not much, but it also saves some time.


Do you know about forward declarations? In many cases you can move a header include from the primary .h file of a translation unit into the .cpp file and just forward declare the class in the header. This can cut down on a lot of meaningless inter-dependencies that are caused by including header files in other header files, though it's not possible to do in all cases.

I've been using a lot of forward declarations, though I did not know about the destructor constraint. This really explains some error messages about incomplete types!

Sometimes there are also many more dependencies than you might think when including headers that already include headers.

Good thinking, I'll check if I can refactor my includes in a way that reduces dependencies.


I did not know about the destructor constraint. This really explains some error messages about incomplete types!

I should clarify that the destructor definition can't be inline; that's why the destructor has to be declared explicitly. The actual constraint is that the destructor is only declared in the header and defined in the cpp file, where the pointed-to type is complete, which means you may need to define an empty one there.

derp.h
#pragma once

class Derp {
public:
    int herp;
    void say();
};

derp.cpp
#include "derp.h"
#include <iostream>

void Derp::say() {
    std::cout << herp << "\n";
}

foo.h
#pragma once
#include <memory>

class Derp;

class Foo {
public:
    Foo();
    ~Foo();
    void say();
    std::unique_ptr<Derp> val;
};

foo.cpp
#include "foo.h"
#include "derp.h"

Foo::Foo() : val(std::make_unique<Derp>()) {
    val->herp = 7;
}

Foo::~Foo() {
    // nothing
}

void Foo::say() {
    val->say();
}

main.cpp
#include <iostream>
#include "foo.h"

int main() {
    Foo foo;
    foo.say();

    std::cin.get();
}

If you comment out the destructor declaration on Foo, you'll get a template error: the compiler generates a default destructor, which is inline, so every file that uses Foo ends up instantiating the smart pointer's destructor, which needs the complete Derp type and causes the whole cluster-bang of fail.