STL Vectors VS. Standard Arrays

Started by
25 comments, last by MaulingMonkey 19 years, 7 months ago
Quote:Original post by Max_Payne
The std::vector<> is fine for many purposes. But unless you have a reason to change the number of elements, I would simply allocate an array dynamically. I only use std::vector<> when the number of elements to be stored is either unpredictable or can change at runtime. If you were to store a fixed number of vertices for a model, however, you really should just dynamically allocate an array.

Which in turn has no advantages over std::vector if you use its reserve() method to pre-allocate the required memory. Plus you don't have to worry about releasing the memory and have the option of resizing it.
Quote:Original post by darookie
Quote:Original post by Max_Payne
The std::vector<> is fine for many purposes. But unless you have a reason to change the number of elements, I would simply allocate an array dynamically. I only use std::vector<> when the number of elements to be stored is either unpredictable or can change at runtime. If you were to store a fixed number of vertices for a model, however, you really should just dynamically allocate an array.

Which in turn has no advantages over std::vector if you use its reserve() method to pre-allocate the required memory. Plus you don't have to worry about releasing the memory and have the option of resizing it.


It has a few advantages. Namely, fewer dependencies, possibly faster compile times, and potentially less memory use in your actual objects (you don't know what std::vector<> might contain). It's also a matter of preference. There's no reason to use a std::vector<> where it's not needed.

Looking for a serious game project?
www.xgameproject.com
Quote:Original post by Max_Payne
potentially less memory use in your actual objects (you don't know what std::vector<> might contain).


Actually, you do know what vector and the other containers contain on the memory side: the default standard allocator, which you can always replace (at no cost) with a custom allocator that uses a potentially more efficient and faster memory model than plain, overly generic new/delete.

I might not use a vector instead of a statically allocated array (though maybe even then), but I would always use a vector instead of a dynamically allocated C-style array. Vectors are simply better for that job, and it's not a preference thing.
Quote:Original post by snk_kid
Actually, you do know what vector and the other containers contain on the memory side: the default standard allocator, which you can always replace (at no cost) with a custom allocator that uses a potentially more efficient and faster memory model than plain, overly generic new/delete.

I think he was referring to the fact that the vector could allocate more memory than a simple dynamic array, since the standard doesn't specify how the data must be stored (it only specifies the interface and the complexity of the algorithms used, IIRC). Having said that, the memory overhead would only be marginal in the majority of cases.
Quote:Original post by joanusdmentia
And as an addendum, if you're doing enough inserts, etc. for it to be a problem, then you should be using a container more appropriate to your needs (e.g. std::list).


std::vector is almost always faster in practice, even for operations that theory tells us lists are faster for. Lists allocate and traverse much more memory than vectors do. This is the "locality of reference" rule you might have heard about.


Have a look at boost::array - static array semantics, std::vector interface.
- The trade-off between price and quality does not exist in Japan. Rather, the idea that high quality brings on cost reduction is widely accepted.-- Tajima & Matsubara
Quote:Original post by Max_Payne
It has a few advantages. Namely, less dependencies, possibly faster compile time, potentiually less memory use in your actual objects (you don't know what std::vector<> might contain). Its also a matter of preference. No reason to use an std::vector<> where its not needed.


This results in code with an exploitable buffer, as in all those security fixes for IE.
If the array size isn't constant, use std::vector or write buggy code.

std::vector has a small fixed per-object overhead, typically three or four pointers (e.g. 12-16 bytes on a 32-bit system), depending on the implementation.
Quote:Original post by psyjax
Are STL Vectors slower than using standard arrays?

Ever since I found out about the vector container, I have begun using them in place of regular arrays. Is this wise in all cases?

I am just wondering because I have been using them to iterate through vertices in a 3D engine of mine. I want to know if this has any speed penalties.


The answer is "it depends". Creating a "normal" P.O.D. array on the stack is essentially "free". If you start creating vectors of objects in the same way, you may think it is basically the same situation, but using a vector this way is not "free". The vector has to allocate memory and check to see if it needs to grow every time you add something.

I'll give you an example. I have written many chess programs and other board game playing programs. One thing I need to do is generate a list of legal moves. This happens about one or two million times per second as the computer "thinks" about what move to play (searching a game tree). If I use a normal array to store the legal moves at each node of the game tree, that is a few million allocations for "free". If I replace the array with a vector, now I'm creating and destroying a vector a few million times per second, and allocating/reallocating a few million times per second. When you go from "free" to "not free", there will be overhead. If you have a good STL implementation, a good compiler, and are willing to potentially create your own custom allocators or vector pools, you may be able to make it work with no noticeable overhead.

The main question is: Can you avoid frequent allocation/reallocation in time critical parts of your code?

You can accomplish that if you either do your allocation when performance doesn't matter (like during a level change, or while my chess opponent is thinking about his own move), or by simply not reallocating (if you know you never have more than 2000 vertices, call reserve(2048) and it is likely you will never need to reallocate, and only once or twice if it happens).

Also beware of sub-par STL implementations and compilers that can't optimize away the abstraction. All of the memory pools or other cleverness in the world isn't going to produce fast code if the push_back() implementation does something stupidly slow.

And in the end, if you want to know for sure, the unfortunate answer is (like always)... "try it and see"
Umm, why are you creating/destroying your vector (with the corresponding allocation/deallocation) every time? Isn't there a .clear() method that will reset the size to 0 (while keeping the capacity what it was, i.e. holding on to the same allocation, and just considering all the old values to no longer be part of the vector)?

Anyway, chances are that any given STL implementation is going to be better than any given implementation that you or I would write. If it weren't, then you or I would have a job implementing the STL.
Quote:Original post by Russell
The answer is "it depends". Creating a "normal" P.O.D. array on the stack is essentially "free". If you start creating vectors of objects in the same way, you may think it is basically the same situation, but using a vector this way is not "free". The vector has to allocate memory and check to see if it needs to grow every time you add something.


Apples and Oranges.

You're not going to be using a stack array in that way, and if you don't use the vector that way either, the performance will be the same.

Use containers in appropriate ways and they are as efficient as you'll need.
Quote:Original post by Zahlman
Isn't there a .clear() method that will reset the size to 0 (while keeping the capacity what it was, i.e. holding on to the same allocation, and just considering all the old values to no longer be part of the vector)?

Yes, the dreaded clear(): it took me my first month of using the STL to figure out that clear() does not release the memory ;)

Quote:Original post by Zahlman
Anyway, chances are that any given STL implementation is going to be better than any given implementation that you or I would write. If it weren't, then you or I would have a job implementing the STL.

Yes and no. The STL is implemented primarily for reliability (i.e. extensive exception checking), and I'm afraid that speed sometimes suffers. If you write a simple list or vector implementation with just the basic functions, you can reliably find 10-15% speed gains. However, it doesn't supply those convenient std::out_of_range exceptions :(

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

