# Does this avoid memory hits?


## Recommended Posts

Does my code avoid memory hits?
```cpp
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

int main()
{
    vector<int> myVec;

    cout << "How many numbers would you like to add? ";
    int numbers;
    cin >> numbers;

    for (int i = 0; i < numbers; i++)
    {
        if (myVec.size() == myVec.capacity())
        {
            myVec.reserve(myVec.capacity() + 1);
            cout << "More memory allocated! Capacity = " << myVec.capacity() << "\n";
        }

        int num;
        cin >> num;
        myVec.push_back(num);
    }

    system("PAUSE");

    return 0;
}
```


##### Share on other sites
std::vector resizes when it needs to; you don't need to tell it to. Also, just create it at the size you need at the beginning, i.e.:

```cpp
std::vector<int> myvector(10);
```

##### Share on other sites
Since you know the size immediately after you ask the user but before you do any processing, you can reserve once and push_back, or resize (as Dave pointed out) and use indexing.

Your attempt to manually adjust the vector's allocated storage is most likely less efficient than simply doing nothing (e.g., never reserving at all). You always reserve one more than the current capacity, which means you are potentially reserving more memory every iteration of the loop. Most vector implementations double the reserved size when they grow, resulting in fewer actual allocations and copies.

Just reserve() up front, though; it's the ideal solution.

##### Share on other sites

Quote:
 Original post by jpetrie
 Your attempt to manually adjust the vector's allocated storage is most likely less efficient than simply doing nothing (e.g., never reserving at all). You always reserve one more than the current capacity, which means you are potentially reserving more memory every iteration of the loop. Most vector implementations double the reserved size when they grow, resulting in fewer actual allocations and copies. Just reserve() up front, though, it's the most ideal solution.

Why is it better not to do anything?

I am increasing only by one, so that I don't have useless capacity on the vector, and I reserve memory only when needed. (I've found that VC++ 2005 EE increases the capacity by 1.5 times) -> Isn't it better to have the same size as capacity, so less memory will be used? Please explain.

##### Share on other sites
Quote:
 Why is it better not to do anything?
 I am increasing only by one, so that I don't have useless capacity on the vector, and I reserve memory only when needed. (I've found that VC++ 2005 EE increases the capacity by 1.5 times) -> Isn't it better to have the same size as capacity, so less memory will be used? Please explain.

It depends. In this trivial case, you're talking about a couple of bytes of "extra" memory. The performance impact of the extra allocation and copy operations will outweigh the "loss" of that piddling puddle of bytes -- although both are pretty much ignorable in terms of performance.

If you are concerned about speed, your method is very inefficient, because each iteration of the loop reallocates storage and copies the vector contents. Reallocation isn't necessarily cheap, as the runtime must walk through the heap trying to locate a block of the appropriate size to allocate. Then the vector needs to copy its contents over to the new block and destroy the existing storage. This can also lead to fragmentation, further impacting performance later on.

If you're optimizing for memory size, your method is still inefficient because it is slower and provides the same results as simply reserving or resizing up-front (which involves less allocation and copying). In cases where you don't know the size a priori, but still want to optimize for size, you should allow the vector to grow itself naturally (providing optimal memory allocation patterns), and then compact the storage once, after you've finalized the item count.

Either way, the performance implications aren't very noticeable here, but your code is still poor because you're manually doing things the vector does for you -- and does better.

##### Share on other sites
Quote:
Original post by sheep19

Quote:
 Original post by jpetrie
 Your attempt to manually adjust the vector's allocated storage is most likely less efficient than simply doing nothing (e.g., never reserving at all). You always reserve one more than the current capacity, which means you are potentially reserving more memory every iteration of the loop. Most vector implementations double the reserved size when they grow, resulting in fewer actual allocations and copies. Just reserve() up front, though, it's the most ideal solution.

Why is it better not to do anything?

I am increasing only by one, so that I don't have useless capacity on the vector, and I reserve memory only when needed. (I've found that VC++ 2005 EE increases the capacity by 1.5 times) -> Isn't it better to have the same size as capacity, so less memory will be used? Please explain.

Resizing the vector is relatively slow (compared to not resizing it), so you want to do it as infrequently as possible; using a little extra memory in order to speed things up is generally a good idea.

In your case however you know the exact size you need so you only need to reserve memory once.

```cpp
#include <cstdlib>
#include <iostream>
#include <vector>
using namespace std;

int main()
{
    vector<int> myVec;

    cout << "How many numbers would you like to add? ";
    int numbers;
    cin >> numbers;

    myVec.reserve(numbers);
    for (int i = 0; i < numbers; i++)
    {
        int num;
        cin >> num;
        myVec.push_back(num);
    }

    system("PAUSE");
    return 0;
}
```

##### Share on other sites
Quote:
Original post by sheep19

Quote:
 Original post by jpetrie
 Your attempt to manually adjust the vector's allocated storage is most likely less efficient than simply doing nothing (e.g., never reserving at all). You always reserve one more than the current capacity, which means you are potentially reserving more memory every iteration of the loop. Most vector implementations double the reserved size when they grow, resulting in fewer actual allocations and copies. Just reserve() up front, though, it's the most ideal solution.

Why is it better not to do anything?

I am increasing only by one, so that I don't have useless capacity on the vector, and I reserve memory only when needed. (I've found that VC++ 2005 EE increases the capacity by 1.5 times) -> Isn't it better to have the same size as capacity, so less memory will be used? Please explain.

Resizing the array means:
- allocating new memory
- copying old data into new memory
- de-allocating (possibly also destroying) old memory

Doing nothing means:
-

Which would you rather have?

On i-th resize, you need to copy i elements. Doing this each time you add an element, to add n elements, you need to do the following:
Copy: 1 + 2 + 3 + 4 + 5 + 6 + ... + n elements
Allocate: n arrays
De-allocate: n-1 arrays

Doing nothing means you will perform exactly one allocation and never perform any copying.

The fastest operation is always the one not performed.

The cost of memory in your problem is a moot point - in the end, you will need to allocate enough memory to store n values anyway, so saving memory in the process doesn't really save anything.

Even worse: by constantly reallocating it, you're fragmenting the heap, something which leads to big problems with long-running applications.

##### Share on other sites
Quote:
 Original post by sheep19Why is it better not to do anything?

Because otherwise you defeat the purpose of using someone else's code, which is to save you time and effort optimizing something which is already optimized.

Quote:
 I am increasing only by one, so that I don't have useless capacity on the vector, and I reserve memory only when needed. (I've found that VC++ 2005 EE increases the capacity by 1.5 times) -> Isn't it better to have the same size as capacity, so less memory will be used? Please explain.

The capacity increase works that way for a good reason: reallocation is slow - especially because everything has to be *copied* into the new allocation (this problem is fundamental; there really isn't a way around it that also gives you a contiguous storage space). Note that this system keeps the amount of "wasted" memory bounded to a constant percentage, while minimizing the number of times a reallocation happens (O(lg N) in the number of elements) and the average number of times an object is copied when you iteratively .push_back .

The .reserve function is provided for situations where you know how many things you want to add; the system is designed to cover your behind when you don't. Basically, when you use .reserve, you say "make sure there is room for this many elements without reallocating" - but that may require that reallocation occurs immediately, so that there is room. If you reserve one more element each time, you simply say "reallocate if necessary to make sure there is room for one more element" - but .push_back() does that anyway, so you are duplicating the work.

Now, there are two options for how the reallocation works. One is that the implementation trusts you, and allocates only the minimum that you ask for. This means you will reallocate every time, and while you potentially save memory, you waste a lot of time. The other (actually, I'm not sure this one is legal) is that the implementation trusts its size-increase scheme, and for example, has some pre-computed table of "allowed allocation sizes" and picks the smallest one that accommodates your request. In that case, you still slow things down (because of redundant work) and can't save any memory, although the same number of reallocations occurs (because the implementation is triggering them on its own "schedule"). In neither case are you particularly better off.

So, how can it be useful at all? Simple: do one reserve up front, when you do know how many things you want.

When you don't know how many things you want, you can still (attempt to?) save memory after doing your push_backs, at the (time) expense of doing one more reallocation:

```cpp
std::vector<int>(myVec).swap(myVec);
```

(Note the temporary goes on the left: swap() takes a non-const reference, so in C++03 you can't pass the temporary as the argument.)

We initialize a new, temporary vector with the same contents as the existing one (but because it's being created from scratch, the implementation now knows how much memory is needed, and may? will? only allocate that much in creating the new vector), and swap the two vectors (so their contents basically switch places). Finally, the temporary (which now "owns" the needlessly-large allocation) is implicitly destructed (cleaning up that memory), and we're left with basically the same thing (but hopefully in a "trimmed" memory space).