(A standard implementation of) a dynamic array is usually pretty slow at inserting and/or removing elements in the middle, since everything after the insertion point has to be shifted, though it is really fast for iteration and lookup. Depending on your definition of "a lot" you might want to reserve capacity up front, so you don't end up doubling a dynamic array containing 20M elements in the middle of an update.
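For example, with C++'s std::vector (a typical dynamic array) you can call reserve() up front so that subsequent appends never trigger a reallocation-and-copy. A minimal sketch (the function name is just for illustration):

```cpp
#include <cstddef>
#include <vector>

// Append n elements after reserving capacity for all of them up front.
// Returns how many reallocations happened, detected via capacity changes
// (should be zero, since reserve() did the one allocation we need).
std::size_t append_with_reserve(std::size_t n) {
    std::vector<int> v;
    v.reserve(n);                       // one allocation up front
    std::size_t reallocs = 0;
    std::size_t cap = v.capacity();
    for (std::size_t i = 0; i < n; ++i) {
        v.push_back(static_cast<int>(i));
        if (v.capacity() != cap) {      // capacity change == reallocation + copy
            ++reallocs;
            cap = v.capacity();
        }
    }
    return reallocs;
}
```

Without the reserve() call, the same loop would reallocate and copy the whole array every time the capacity doubled.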
(A standard implementation of) a linked list is pretty fast at inserting and/or removing elements anywhere in the list, given an iterator to the position. However, it is not as fast for iteration, and slow (in comparison to arrays) when it comes to lookup.
Depending on how you're using the container you'll get different results, and it's impossible to tell which is better from the description you gave. If you want to analyze the performance of your program based on the type of container used, you can find the complexity of each operation on Wikipedia, or you could simply change the type of container and time your functions.
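As a rough sketch of the "swap the container and time it" approach, a template lets you run the same middle-insertion workload against both container types (the numbers you get are entirely platform-dependent, so treat this as a harness, not a benchmark):

```cpp
#include <chrono>
#include <iterator>
#include <list>
#include <vector>

// Insert n elements into the middle of an initially empty container and
// return the elapsed time in microseconds. Swap the Container parameter
// (e.g. std::vector<int> vs std::list<int>) to compare.
// Note: for std::list, std::advance to the middle is itself O(n), which
// is exactly the lookup cost discussed above.
template <typename Container>
long long time_middle_inserts(int n) {
    Container c;
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < n; ++i) {
        auto mid = c.begin();
        std::advance(mid, c.size() / 2);  // walk/jump to the middle
        c.insert(mid, i);                 // insert before the middle
    }
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();
}
```

Calling `time_middle_inserts<std::vector<int>>(n)` and `time_middle_inserts<std::list<int>>(n)` for your real n is usually more informative than any amount of theorizing.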
Edit: been working too much with Python lately; a standard implementation of an array is of course fixed size.
Edited by Deprecated, 03 January 2012 - 09:03 PM.
Don't use linked lists, regardless of how "groovy" they might appear, unless you have proof that they actually are what you want. For common usage patterns, a dynamic array will often have superior performance, even where one might not expect it based on algorithmic complexity theory.
Algorithmic complexity theory is great, but it only tells the whole story for large N (typically, millions or more). For small N (like the thousands the OP is handling), the constant factors that the theory assumes become irrelevant actually grow more and more important; in some cases they can dominate. This is why you'll often find the cache-friendly vector outperforming linked lists for many operations on the kinds of data sets your program can handle.
Don't forget that a linked list implementation really involves two algorithms: the basic linked list algorithm and the memory management algorithm behind it. The latter is often ignored in highly theoretical discussions, but again it can make a massive difference.
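To make the "second algorithm" concrete, here is a hypothetical free-list node pool (the `NodePool` name and interface are made up for illustration). A general-purpose allocator does far more work per `new`/`delete` than the O(1) pointer juggling below, and that per-node cost is part of what the linked list's big-O hides:

```cpp
#include <cstddef>
#include <vector>

struct Node { int value; Node* next; };

// Trivial fixed-capacity pool: one up-front allocation, then O(1)
// allocate/release by pushing and popping a free list.
class NodePool {
public:
    explicit NodePool(std::size_t capacity) : storage_(capacity), free_(nullptr) {
        for (Node& n : storage_) { n.next = free_; free_ = &n; }
    }
    Node* allocate() {            // pop the free list; nullptr when exhausted
        Node* n = free_;
        if (n) free_ = n->next;
        return n;
    }
    void release(Node* n) {       // push the node back onto the free list
        n->next = free_;
        free_ = n;
    }
private:
    std::vector<Node> storage_;   // backing storage, allocated once
    Node* free_;                  // head of the free list
};

// Helper: exhaust a pool and count how many nodes it hands out.
std::size_t count_allocatable(std::size_t capacity) {
    NodePool pool(capacity);
    std::size_t n = 0;
    while (pool.allocate() != nullptr) ++n;
    return n;
}
```

A pool like this also keeps the nodes contiguous in memory, which claws back some of the cache friendliness a naively allocated list throws away.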
Likewise, the dynamic array complexity analysis ignores techniques like the erase-remove idiom, which is often suitable for "bag"-like data structures (append only, random remove). If the order is irrelevant, you can also use the "swap and pop" trick.
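Both tricks are one-liners on a std::vector. The erase-remove idiom compacts all surviving elements in a single pass instead of shifting the tail once per erased element; swap-and-pop removes one element in O(1) at the cost of reordering. A sketch (the vectors are taken and returned by value here purely to keep the examples self-contained):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Erase-remove idiom: std::remove shifts survivors to the front and
// returns the new logical end; erase() then trims the leftover tail.
std::vector<int> erase_all(std::vector<int> v, int value) {
    v.erase(std::remove(v.begin(), v.end(), value), v.end());
    return v;
}

// Swap-and-pop: overwrite the doomed element with the last one, then
// pop the back. O(1), but the element order changes.
std::vector<int> swap_and_pop(std::vector<int> v, std::size_t index) {
    v[index] = v.back();
    v.pop_back();
    return v;
}
```

For a bag of active sprites where order doesn't matter, swap-and-pop makes random removal as cheap on a vector as it is on a list.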
Most games do not feature millions of sprites, because that would make the game unplayable, so you don't have to design the game to work with millions of sprites. If your game is working fine at the expected level of activity on the target hardware, you can stop. However, if you hadn't yet implemented, for example, collision detection, and you went on to find that you weren't getting the required performance with either linked lists or vectors, you might need to reach for a more complex data structure, such as a spatial one (a uniform grid, quadtree, etc.).
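As a taste of what a spatial structure buys you, here is a deliberately naive uniform-grid sketch (the `Grid` type, the key-mixing constant, and the helper are all made up for illustration): sprites are bucketed by cell, so a collision query only has to look at nearby buckets instead of every sprite.

```cpp
#include <cstddef>
#include <unordered_map>
#include <vector>

// Hypothetical uniform grid: sprite ids are bucketed by which cell their
// position falls in. Caveats of this sketch: the key mixing is naive, and
// the cast truncates toward zero, so negative coordinates would need
// a floor() to bucket correctly.
struct Grid {
    float cellSize;
    std::unordered_map<long long, std::vector<int>> cells;  // cell key -> ids

    long long key(float x, float y) const {
        long long cx = static_cast<long long>(x / cellSize);
        long long cy = static_cast<long long>(y / cellSize);
        return cx * 1000003LL + cy;
    }
    void insert(int id, float x, float y) { cells[key(x, y)].push_back(id); }
    const std::vector<int>* bucket_at(float x, float y) const {
        auto it = cells.find(key(x, y));
        return it == cells.end() ? nullptr : &it->second;
    }
};

// Helper: do two positive positions land in the same cell of a 10-unit grid?
bool shares_cell(float ax, float ay, float bx, float by) {
    Grid g{10.0f, {}};
    g.insert(0, ax, ay);
    const std::vector<int>* bucket = g.bucket_at(bx, by);
    return bucket != nullptr && bucket->size() == 1;
}
```

A full collision pass would also check the eight neighbouring cells, but even this skeleton shows the idea: the cost per query depends on local density, not on the total sprite count.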