STL Vectors VS. Standard Arrays

Are STL vectors slower than standard arrays? Ever since I found out about the vector container I have been using them in place of regular arrays. Is this wise in all cases? I ask because I have been using them to iterate through vertices in a 3D engine of mine, and I want to know if this has any speed penalties. Thanks.

The only speed penalties I know of are: inserting things at the beginning or in the middle of the vector; resizing the vector when that forces it to reallocate (which doesn't happen on every insertion); and passing vectors around by value as parameters or return values, which copies them an unnecessary number of times (using references helps immensely here). But once a vector is allocated and populated, random access of its elements is just as quick as with an array (unless you use .at(), which adds a small overhead to make sure you're not accessing invalid elements).

From what I understand, there are speed penalties if you randomly access something in the vector [as opposed to iterating through it].

But, as always, if you're not an expert programmer, you're likely going to write slow enough code that even those penalties won't matter much. Make it work, then worry about making it fast.

Umm, random access is going to be just as fast in release mode, since it ends up as just pointer arithmetic (same as for a regular array). The only exception to this is the at() method, which has slightly slower access time because it does bounds checking.

Quote:
Original post by Washu
Umm, random access is going to be just as fast in release mode, since it ends up as just pointer arithmetic (same as for a regular array). The only exception to this is the at() method, which has slightly slower access time because it does bounds checking.


Ah, thank you for the correction.

Quote:
Original post by Agony
The only speed penalties I know of are: inserting things at the beginning or in the middle of the vector; resizing the vector when that forces it to reallocate (which doesn't happen on every insertion); and passing vectors around by value as parameters or return values, which copies them an unnecessary number of times (using references helps immensely here). But once a vector is allocated and populated, random access of its elements is just as quick as with an array (unless you use .at(), which adds a small overhead to make sure you're not accessing invalid elements).

Since this thread compares vectors with raw arrays, this is a good place to point out that arrays offer no functionality at all for inserting items at the beginning or in the middle, let alone resizing; to do any of this, the programmer ends up writing something very similar to what the STL implementers have already written.

Really the only expense is allocating the new memory. If you want an STL-compatible static array with optional bounds checking, check out the array template class at www.boost.org.

std::vector<> is fine for many purposes, but unless you have a reason to change the number of elements, I would simply allocate an array dynamically. I only use std::vector<> when the number of elements to be stored is either unpredictable or can change at runtime. If you're storing a fixed number of vertices for a model, however, you really should just dynamically allocate an array.

Quote:
Original post by Max_Payne
std::vector<> is fine for many purposes, but unless you have a reason to change the number of elements, I would simply allocate an array dynamically. I only use std::vector<> when the number of elements to be stored is either unpredictable or can change at runtime. If you're storing a fixed number of vertices for a model, however, you really should just dynamically allocate an array.

Which in turn has no advantage over std::vector if you use its reserve() method to pre-allocate the required memory. Plus you don't have to worry about releasing the memory, and you keep the option of resizing.

Quote:
Original post by darookie
Quote:
Original post by Max_Payne
std::vector<> is fine for many purposes, but unless you have a reason to change the number of elements, I would simply allocate an array dynamically. I only use std::vector<> when the number of elements to be stored is either unpredictable or can change at runtime. If you're storing a fixed number of vertices for a model, however, you really should just dynamically allocate an array.

Which in turn has no advantage over std::vector if you use its reserve() method to pre-allocate the required memory. Plus you don't have to worry about releasing the memory, and you keep the option of resizing.


It has a few advantages: fewer dependencies, possibly faster compile times, and potentially less memory use in your actual objects (you don't know what std::vector<> might contain). It's also a matter of preference. There's no reason to use std::vector<> where it's not needed.

Quote:
Original post by Max_Payne
potentially less memory use in your actual objects (you don't know what std::vector<> might contain).


Actually, you do know what vector and the other containers contain on the memory side: the default standard allocator, which you can always swap (at no cost) for a custom allocator that uses a different, potentially more efficient memory model than the generic new/delete.

I might not choose a vector over a statically allocated array, but I would always use a vector over a dynamically allocated C-style array. Vectors are simply better for that job, and it's not a matter of preference.

Quote:
Original post by snk_kid
Actually, you do know what vector and the other containers contain on the memory side: the default standard allocator, which you can always swap (at no cost) for a custom allocator that uses a different, potentially more efficient memory model than the generic new/delete.

I think he was referring to the fact that the vector could allocate more memory than a simple dynamic array, since the standard doesn't specify how the data must be stored (it only specifies the interface and the complexity of the operations, IIRC). Having said that, the memory overhead would only be marginal in the majority of cases.

Quote:
Original post by joanusdmentia
And as an addendum, if you're doing enough inserts, etc. for it to be a problem, then you should be using a container more appropriate to your needs (e.g. std::list).


std::vector is almost always faster in practice, even for operations that theory says lists are faster at. Lists allocate and traverse much more memory than vectors do. This is the "locality of reference" effect you might have heard about.


Have a look at boost::array - static array semantics, std::vector interface.

Quote:
Original post by Max_Payne
It has a few advantages: fewer dependencies, possibly faster compile times, and potentially less memory use in your actual objects (you don't know what std::vector<> might contain). It's also a matter of preference. There's no reason to use std::vector<> where it's not needed.


That results in code with an exploitable buffer: think of all those security fixes for IE.
If the array size isn't constant, use std::vector or write buggy code.

std::vector has a four-pointer size overhead, typically 16 bytes.

Quote:
Original post by psyjax
Are STL vectors slower than standard arrays?

Ever since I found out about the vector container I have been using them in place of regular arrays. Is this wise in all cases?

I ask because I have been using them to iterate through vertices in a 3D engine of mine, and I want to know if this has any speed penalties.


The answer is "it depends". Creating a "normal" P.O.D. array on the stack is essentially "free". If you start creating vectors of objects in the same way, you might think it's basically the same situation, but using a vector this way is not "free": the vector has to allocate memory and check whether it needs to grow every time you add something.

I'll give you an example. I have written many chess programs and other board-game-playing programs. One thing I need to do is generate a list of legal moves, and this happens one or two million times per second as the computer "thinks" about what move to play (searching a game tree). If I use a normal array to store the legal moves at each node of the game tree, that is a few million arrays for "free". If I replace the array with a vector, I'm now creating and destroying a vector a few million times per second, and allocating/reallocating a few million times per second. When you go from "free" to "not free", there will be overhead. If you have a good STL implementation, a good compiler, and are willing to potentially create your own custom allocators or vector pools, you may be able to make it work with no noticeable overhead.

The main question is: Can you avoid frequent allocation/reallocation in time critical parts of your code?

You can accomplish that if you either do your allocation when performance doesn't matter (like during a level change, or while my chess opponent is thinking about his own move), or by simply not reallocating (if you know you never have more than 2000 vertices, call reserve(2048) and it is likely you will never need to reallocate, and only once or twice if it happens).

Also beware of sub-par STL implementations and compilers that can't optimize away the abstraction. All of the memory pools or other cleverness in the world isn't going to produce fast code if the push_back() implementation does something stupidly slow.

And in the end, if you want to know for sure, the unfortunate answer is (like always)... "try it and see"

Umm, why are you creating/destroying your vector (with the corresponding allocation/deallocation) every time? Isn't there a .clear() method that resets the size to 0 while keeping the capacity (i.e. holding on to the same allocation and simply treating the old values as no longer part of the vector)?

Anyway, chances are that any given STL implementation is going to be better than any given implementation that you or I would write. If it weren't, then you or I would have a job implementing the STL.

Quote:
Original post by Russell
The answer is "it depends". Creating a "normal" P.O.D. array on the stack is essentially "free". If you start creating vectors of objects in the same way, you might think it's basically the same situation, but using a vector this way is not "free": the vector has to allocate memory and check whether it needs to grow every time you add something.


Apples and Oranges.

You're not going to be using a stack array that way, and if you don't use the vector that way, the performance will be the same.

Use containers in appropriate ways and they are as efficient as you'll need.

Quote:
Original post by Zahlman
Isn't there a .clear() method that resets the size to 0 while keeping the capacity (i.e. holding on to the same allocation and simply treating the old values as no longer part of the vector)?

Yes, the dreaded clear(); it took me my first month of using the STL to figure out that clear() does not release the memory ;)

Quote:
Original post by Zahlman
Anyway, chances are that any given STL implementation is going to be better than any given implementation that you or I would write. If it weren't, then you or I would have a job implementing the STL.

Yes and no. The STL is implemented primarily for reliability (i.e. extensive exception safety), and I'm afraid speed sometimes suffers. If you write a simple list or vector implementation with just the basic functions, you can reliably find 10-15% speed gains. However, it doesn't give you those convenient std::out_of_range exceptions :(

Quote:
Original post by petewood
Quote:
Original post by Russell
The answer is "it depends". Creating a "normal" P.O.D. array on the stack is essentially "free". If you start creating vectors of objects in the same way, you might think it's basically the same situation, but using a vector this way is not "free": the vector has to allocate memory and check whether it needs to grow every time you add something.


Apples and Oranges.

You're not going to be using a stack array that way, and if you don't use the vector that way, the performance will be the same.

Use containers in appropriate ways and they are as efficient as you'll need.


I strongly disagree.
Sometimes you do want stack-based arrays for performance (and to avoid putting too much stress on the allocator).
That's why boost::array exists to extend the STL.

Old test code, adapted to test vectors and arrays.

// test
#include <iostream>
#include <fstream>
#include <cmath>
#include <vector>
#include <assert.h>


typedef __int64 INT64;
typedef unsigned __int64 UINT64;
typedef __int32 INT32;
typedef unsigned __int32 UINT32;

static const double pi=(3.1415926535897932384626433832795);

volatile INT64 frdtsc(){
UINT32 resultL,resultH;
__asm{
push eax
push edx
rdtsc
mov resultL,eax
mov resultH,edx
pop edx
pop eax
};
return resultL+ (((INT64)(resultH))<<32) ;
};
UINT32 rseed=94564567;
inline float random()
{
rseed *= 0x08088405;
rseed += 1 ;
return (float)((UINT32)(rseed))/((float)((UINT32)0xFFFFFFFF));
}
static const int iterations=40*1000*1000;
#define test(a) { INT64 _startclocks_; INT64 _endclocks_; int _n_; for(_n_=0; _n_<iterations ;_n_++){a;} _startclocks_=frdtsc(); for(_n_=0; _n_<iterations ;_n_++){a;} _endclocks_=frdtsc(); cout<<( ((double)(_endclocks_-_startclocks_))/((double)(iterations)) )<<endl;}
float SIN_TABLE[256];
inline float SinUsingTable(float a){
return SIN_TABLE[255 & int(a*128.0/pi)];
};
/* array/vector testing */
using namespace std;
const int size2=14;
const int size=1<<size2;
vector<float> vec;
float array[size];
void resetarrays(){
for(int n=0;n<size;n++){array[n]=0.1f;vec.at(n)=0.1f;};
}

/* -- array/vector testing */


int main () {
vec.resize(size,0.1f);

{for(int n=0;n<=255;n++){
SIN_TABLE[n]=sin(n*pi/128.0);
}}

/*
rseed^=frdtsc();//randomize
rseed^=frdtsc()<<16;
*/


/*
float a;

a=0;
cout<<"testing empty:"<<endl;
test( {a+=20*random();} );
cout<<"I have to write to fool the optimizer:"<<a<<endl;

a=0;
cout<<"testing sin:"<<endl;
test( {a+=sin(20*random());} );
cout<<"I have to write to fool the optimizer:"<<a<<endl;

a=0;
cout<<"testing cos:"<<endl;
test( {a+=cos(20*random());} );
cout<<"I have to write to fool the optimizer:"<<a<<endl;

a=0;
cout<<"testing inverse"<<endl;
test( {a+=1/(20*random());} );
cout<<"I have to write to fool the optimizer:"<<a<<endl;

a=0;
cout<<"testing square root"<<endl;
test( {a+=sqrt(20*random());} );
cout<<"I have to write to fool the optimizer:"<<a<<endl;

a=0;
cout<<"testing sin made using lookup table"<<endl;
test( {a+=SinUsingTable(20*random());} );
cout<<"I have to write to fool the optimizer:"<<a<<endl;
*/


int n=0;

n=0;
resetarrays();
cout<<"testing array[]"<<endl;
test( {array[n]+=0.5-random();n=(n+1)&(size-1);} );

n=0;
//resetarrays();
cout<<"testing vector[]"<<endl;
test( {vec[n]+=0.5-random();n=(n+1)&(size-1);} );

n=0;
//resetarrays();
cout<<"testing vector.at"<<endl;
test( {vec.at(n)+=0.5-random();n=(n+1)&(size-1);} );

/* */

// fool advanced optimizers
float b = 0;
for(n=0;n<size;n++){
b+=b*(vec.at(n))*0.0012345;
b+=b*(array[n])*0.001;
b+=array[n];
b+=vec.at(n);
b+=n;
};
cout<<"I have to write that to fool the optimizer:"<<b<<endl;

cout<<"Finished."<<endl;
char c;
cin>>c;
return 0;
}






output:

testing array[]
139.489
testing vector[]
130.137
testing vector.at
126.827
I have to write that to fool the optimizer:-9.5376e+028
Finished.
result 2 (changed order of tests)
testing array[]
139.1
testing vector[]
130.132
testing vector.at
126.984
I have to write that to fool the optimizer:-1.07918e+013
Finished.

I didn't believe my code, so I tried moving the array test to the end of the testing sequence, then to the middle, etc. (I suspected the random generator). Same result.
Strange, really strange. It must be some odd pipeline issue or caching-related thing.

edit: As this shows, when you add one more instruction, speed changes in such an odd way that it doesn't really matter; that instruction may decrease speed, but it may just as well increase it, and the latter happens often enough.

And yes, vectors have slower features such as resizing, but then you can't resize a normal array at all.
edit2: Within minutes of posting this I was rated down by 15. But these are the real test results I got! No kidding!

[Edited by - Dmytry on September 16, 2004 5:22:21 AM]

Quote:
Original post by swiftcoder
Yes and No, STL is implemented primarily for reliability


No, it isn't just that.

Quote:
Original post by swiftcoder
(i.e. extensive exception checking)


The only operation I can think of that actually throws is vector's at() method; beyond that, STL containers are written to be exception-safe.

Quote:
Original post by swiftcoder
, and I'm afraid that speed sometimes suffers.



That would be your user-defined type's fault, not the STL container's: blame your written (or implicitly defined) copy constructor for the slow initialization.

Also blame irresponsible usage and lack of thought.

Quote:
Original post by swiftcoder
If you write a simple list or vector implementation, with just the basic functions, you can reliably find 10-15% speed gains.


Yes, gain 10-15% speed for semantically incorrect/silly behaviour.

Let's make a scenario, shall we. Say we write a hand-made list with an operation that adds elements, and to be "more efficient" we store pointers to the caller's instances.

Fine; we start using this list in some function somewhere and add an element that was declared in the scope of that function. Oops: dangling pointer syndrome once the function's scope ends. Another thing: you cannot use this operation on literals.

Okay, we make some changes: the list now takes the element by constant reference, copy-constructs the instance on the heap, and adds that to the list. This deals with both issues. Oh, wait a minute, that's what the standard library containers do already. Oh well...

And before anyone says they would use something other than the generic memory operators new/delete, or a different memory model: that is an invalid argument, because you can use custom allocators.

Quote:
Original post by swiftcoder
However, it doesn't give you those convenient std::out_of_range exceptions :(


The only method I can think of that does that is vector's at().

Quote:
Original post by Tramboi
I strongly disagree.
Sometimes you do want stack-based arrays for performance (and to avoid putting too much stress on the allocator).
That's why boost::array exists to extend the STL.


What are you disagreeing with?

Russell said the vector has to check whether it needs to grow, which, if you were using it like your C-style stack array, isn't true.

If you're disagreeing with my statement that "they are as efficient as you'll need" then that's a different point.

Quote:
Original post by Magmai Kai Holmlor
That results in code with an exploitable buffer: think of all those security fixes for IE.
If the array size isn't constant, use std::vector or write buggy code.


That's totally irrelevant. IE had flaws because of poor, lazy coding; the errors there would have been easily avoided by any competent programmer. Stupid mistakes like not doing bounds checking at all. On that note, you're going to need to perform bounds checking with std::vector<> as well... I don't use std::vector<> everywhere, I use C strings, and my code has very few bugs (and they are rarely related to those things).

Anyhow, did you learn to program with Java or something? You seem to have the "the language should think for you" mentality.

What makes a competent programmer is mostly seriousness. If you simply don't care about writing safe code, your code will have bugs, no matter what language or programming techniques you use.

