hotdogsayhi

The best dynamic array? (C++)


std::vector is much slower than a normal dynamic array (one created with "new") when it is used to handle a large amount of data. But a normal dynamic array cannot change its size and retain its data as quickly and efficiently as a vector can. Is there another type of array that is better than both? Thanks

Quote:
Original post by hotdogsayhi
std::vector is much slower than a normal dynamic array (one created with "new") when it is used to handle a large amount of data.
No, it isn't. If you have found it to be so, you are most likely using it wrong. In what situations have you found it to be slower? Are you testing with full optimizations? If you are using Visual Studio 2005 or later, are you disabling the Secure SCL and iterator debugging?
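For reference, a minimal sketch of how those checks are disabled on VS2005/VS2008 (these are the real MSVC macros; the point is that they must be defined before any standard header is included, and identically in every translation unit):

[source=c++]
// Define before including any standard library header,
// and consistently across the whole project (VS2005/VS2008).
#define _SECURE_SCL 0              // turn off checked iterators (Secure SCL)
#define _HAS_ITERATOR_DEBUGGING 0  // turn off iterator debugging

#include <vector>
[/source]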

Quote:
Original post by Sneftel
Quote:
Original post by hotdogsayhi
std::vector is much slower than a normal dynamic array (one created with "new") when it is used to handle a large amount of data.
No, it isn't. If you have found it to be so, you are most likely using it wrong. In what situations have you found it to be slower? Are you testing with full optimizations? If you are using Visual Studio 2005 or later, are you disabling the Secure SCL and iterator debugging?

I push 100,000 integers into a vector:

std::vector<int> a;
for (int i = 0; i < 100000; i++)
    a.push_back(i);

for (int i = 0; i < (int)a.size(); i++)
{
    // process each a[i]
}

const int b_size = 100000;
int *b = new int[b_size];
for (int i = 0; i < b_size; i++)
{
    // process each b[i]
}

I find b is faster than a.

Don't we go through this about once a week, always with the same results, caused by exactly the same problems (the benchmark code is incorrect, debug iterators are enabled, and profiling is done in debug mode)?

I know I answered exactly the same question just a few days back, showing the whole process and why/where the delays accumulate. Also, a vector is exactly as fast as an array.

Edit: Effects of MSVC debugging features on performance, and scaffolding for reliable benchmarks.
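As a minimal sketch of that kind of scaffolding (the timing method and element count here are illustrative, not the ones from that earlier thread): time only the loop under test, build in release mode, and use the result so the optimizer can't throw the work away:

[source=c++]
#include <cstdio>
#include <ctime>
#include <vector>

int main()
{
    const int count = 100000;
    std::vector<int> a(count, 1);

    std::clock_t start = std::clock();
    long sum = 0;
    for (int i = 0; i < count; i++)
        sum += a[i];
    std::clock_t ticks = std::clock() - start;

    // Printing the sum keeps the optimizer from removing the loop entirely.
    std::printf("sum=%ld ticks=%ld\n", sum, (long)ticks);
    return 0;
}
[/source]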

In Game Programming Gems 4 there is an article on how to make ordinary arrays dynamic without copying around data (Windows only). This involves messing around with the address space so it is potentially dangerous. However, it might be worth it in certain situations.

Has anybody tried this or a similar approach yet? Any results?

I'll eventually try it on Windows, and if it's really worth it I'll look for a Linux equivalent.
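For anyone curious, here is a rough sketch of the general idea as I understand it (my own illustration, not the article's code): reserve a big range of address space up front with VirtualAlloc, then commit physical pages only as the array grows, so it can expand in place without ever copying its data:

[source=c++]
#include <windows.h>

// Hypothetical growable array of ints. Error handling omitted.
struct GrowableArray
{
    int*   data;
    size_t committedBytes; // bytes backed by physical pages
    size_t reservedBytes;  // bytes of address space reserved

    void init(size_t maxBytes)
    {
        reservedBytes  = maxBytes;
        committedBytes = 0;
        // Reserve address space only; no physical memory is used yet.
        data = (int*)VirtualAlloc(NULL, reservedBytes,
                                  MEM_RESERVE, PAGE_NOACCESS);
    }

    void growTo(size_t bytes)
    {
        if (bytes <= committedBytes)
            return;
        // Commit pages inside the reserved range. Existing elements keep
        // their addresses, so nothing is ever copied.
        VirtualAlloc(data, bytes, MEM_COMMIT, PAGE_READWRITE);
        committedBytes = bytes;
    }

    void destroy()
    {
        VirtualFree(data, 0, MEM_RELEASE);
    }
};
[/source]

The obvious catch is that the maximum size has to be chosen when the address space is reserved, and a 32-bit process only has a couple of GB of it to go around.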

Quote:
Original post by Antheus
Don't we go through this about once a week, always with the same results, caused by exactly the same problems (the benchmark code is incorrect, debug iterators are enabled, and profiling is done in debug mode)?


Indeed. Could an entry be added to the FAQ? Or perhaps a wiki page would be better?

Quote:
Original post by hotdogsayhi
I push 100,000 integers into a vector:

std::vector<int> a;
for (int i = 0; i < 100000; i++)
    a.push_back(i);

for (int i = 0; i < (int)a.size(); i++)
{
    // process each a[i]
}

const int b_size = 100000;
int *b = new int[b_size];
for (int i = 0; i < b_size; i++)
{
    // process each b[i]
}

I find b is faster than a.

Um... What?
Try this code instead:
[source=c++]
std::vector<int> a;

a.reserve(100000); // Here!! Very important: one allocation up front

for (int i = 0; i < 100000; i++)
    a.push_back(i); // capacity is already there, so no reallocations

// You can use iterator instead of const_iterator
std::vector<int>::const_iterator itor = a.begin();
std::vector<int>::const_iterator end = a.end();

while( itor != end )
{
    // process each *itor
    ++itor;
}

const int b_size = 100000;
int *b = new int[b_size];
for (int i = 0; i < b_size; i++)
{
    // process each b[i]
}
[/source]

Also, compile in release mode, not debug. I'm pretty sure it will then take the same time.
STL has to be used in an efficient manner to produce efficient results.

Cheers
Dark Sylinc

Yeah, exactly what Matias said.

When you create a new array of 10,000 elements, memory for all 10,000 is allocated up front. With a std::vector, if you add elements one at a time without reserving, you're going to keep reallocating memory until you hit your 10,000 (not necessarily on every added element, but definitely more than once).

If you're having performance issues with STL-related usage, you're more than likely using it wrong.
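You can watch those reallocations happen; this illustrative snippet prints a line every time the vector's capacity changes (the growth pattern you see depends on the implementation):

[source=c++]
#include <cstdio>
#include <vector>

int main()
{
    std::vector<int> v;
    size_t cap = v.capacity();

    for (int i = 0; i < 10000; i++)
    {
        v.push_back(i);
        if (v.capacity() != cap)
        {
            cap = v.capacity();
            // Each line printed corresponds to one reallocation + copy.
            std::printf("size=%u capacity=%u\n",
                        (unsigned)v.size(), (unsigned)cap);
        }
    }
    return 0;
}
[/source]

A single v.reserve(10000) up front makes all of those lines disappear.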

With optimizations on, the assembly code generated will be exactly the same whether you use std::vector or not.

About the copying, it is fairly easy to write a std::vector that doesn't require it; the new C++ standard does exactly that.
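If I'm reading that right, the reference is to C++0x move semantics: when an element type has a move constructor, a reallocating vector can move elements to the new buffer instead of deep-copying them. A toy sketch, assuming a C++0x compiler:

[source=c++]
#include <string>
#include <utility>
#include <vector>

int main()
{
    std::vector<std::string> v;
    std::string s("a long string that would be expensive to copy");

    // std::move makes s an rvalue, so push_back moves it in.
    v.push_back(std::move(s));

    // On a later reallocation, a C++0x vector can likewise move the
    // stored strings to the new buffer rather than copying them.
    v.push_back(std::string("another string"));
    return 0;
}
[/source]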

Here is a rule of thumb: if you are not capable of proving formally why X should be slower than Y, including accounting for common compiler optimizations, and fixing the problem, then you should also not consider yourself capable of:

- writing a benchmark to demonstrate the relative speed of X and Y
- making any valid assertions about the relative speed of X and Y
- safely concluding that X is, actually, slower than Y, even though it "seems" to be.

Optimization, beyond the big-O level, is for experts.

