I'm wondering whether I should use containers of pointers or plain containers of objects when the objects aren't extremely small.
An example case: an object X contains a vector that is filled with data by X's constructor; after that the vector is left unchanged for the entire lifetime of X, but it is read frequently. The exact ratio between creation and read operations varies a lot.
Example of using "vector of objects":
class X {
...
std::vector<Foo> v;
};
X::X(some_parameters...) {
v.reserve(n); // reserve, not resize: resize + push_back would produce 2*n elements
for(int i=0; i<n; ++i) {
v.push_back(Foo(some_parameters...));
}
}
const Foo& X::getData(some_parameters...) const {
return v[some_index...];
}
Example of using "vector of pointers":
class X {
...
std::vector<Foo *> v;
};
X::X(some_parameters...) {
v.reserve(n); // reserve, not resize: resize would fill the vector with n null pointers first
for(int i=0; i<n; ++i) {
v.push_back(new Foo(some_parameters...));
}
}
const Foo& X::getData(some_parameters...) const {
return *v[some_index...];
}
The question is whether, in the first case, the compiler will be able to optimize away the copy-constructor and destructor (or assignment-operator) calls for the temporary passed to push_back, even when Foo has a custom-written copy constructor, destructor, and assignment operator. If so, the first version also saves the new and delete calls for each Foo object, and should beat the second version in ALL cases. Or does the second way of writing it become faster once the Foo objects get larger and more complex?