Hawkblood

Linked List vs std::vector



What is it about std::vector that makes it so much faster than a linked list? This question was asked by someone else in an earlier post, but not stated quite like this, and I don't think a reason was ever actually pinned down. Here is what I used to test this out:

 

The declaration:

#include <vector>
using std::vector;

vector<int> testingVector;
struct LLTEST{
	LLTEST(){last=start=NULL;}
	struct LINKLIST{
		LINKLIST(int v,LINKLIST*p){val=v;prev=p;next=NULL;if (p!=NULL) p->next=this;}//remember the previous node and link in after p
		LINKLIST *prev,*next;
		int val;
		void Destroy(void){
			//delete every node iteratively; one stack frame per node would risk a stack overflow on a 100,000-element list
			LINKLIST *node=this;
			while (node!=NULL){
				LINKLIST *nextNode=node->next;
				delete node;
				node=nextNode;
			}
		}
	};
	LINKLIST *start,*last;
	void Append(int val);
	void LLDestroy(void);
}LLtest;
void LLTEST::Append(int val){
	if (last==NULL){//empty list: this is the first node
		start=new LINKLIST(val,NULL);
		last=start;
	}
	else{//one heap allocation per element
		LINKLIST *tmp=new LINKLIST(val,last);
		last=tmp;
	}
}
void LLTEST::LLDestroy(void){
	if (start!=NULL) start->Destroy();
	start=last=NULL;//leave the list in a reusable state
}

First, the test with the linked list:

	GE->Timer.Update();
	__int64 ct=GE->Timer.TotalTime;

	for (int q=0;q<100000;q++){
		LLtest.Append(q);
	}
	LLtest.LLDestroy();
	GE->Timer.Update();
	tp1=float(GE->Timer.TotalTime-ct);

tp1 gives me roughly 26500 in total time.

 

Now for the std::vector:

	GE->Timer.Update();
	__int64 ct=GE->Timer.TotalTime;
	for (int q=0;q<100000;q++){
		testingVector.push_back(q);
	}
	testingVector.clear();//note: clear() destroys the elements but typically keeps the capacity allocated

	GE->Timer.Update();
	tp1=float(GE->Timer.TotalTime-ct);

tp1 gives me around 2700. That's an order of magnitude (10x) difference! What in my LL is wrong enough to cause this much of a difference?

 


So what I got out of the replies is "NEVER USE LINKED LISTS". I can't see any reason to use them given the information in your replies. I was initially worried about the vector reallocating each time I add new data, but I can just use vector::reserve(...) to preallocate capacity if I know I'm going to have a lot of elements.
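For example, a minimal sketch of that preallocation (plain standard C++, nothing engine-specific assumed; reserve() sets aside capacity without changing size(), whereas resize() would also construct elements):

#include <vector>

int main(){
	std::vector<int> v;
	v.reserve(100000);//one allocation up front; size() is still 0
	for (int q=0;q<100000;q++)
		v.push_back(q);//no reallocation until the reserved capacity is exceeded
	return 0;
}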



So what I got out of the replies is "NEVER USE LINKED LISTS". I can't see any reason to use them given the information in your replies.

There are advantages in specific scenarios. E.g. an iterator to a list element is not invalidated when other elements are inserted or removed. And, as rip-off said, list elements are not copied or moved when other elements are inserted or deleted.
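To make the invalidation point concrete, here is a minimal sketch (standard C++; the variable names are just for illustration):

#include <list>
#include <vector>

int main(){
	std::list<int> l;
	l.push_back(1);l.push_back(2);l.push_back(3);
	std::list<int>::iterator lit=l.begin();
	l.push_back(4);//list iterators survive insertions elsewhere
	int stillFine=*lit;//safe: lit still refers to the first element

	std::vector<int> v;
	v.push_back(1);v.push_back(2);v.push_back(3);
	std::vector<int>::iterator vit=v.begin();
	v.push_back(4);//may reallocate and move every element
	//int broken=*vit;//undefined behavior if a reallocation happened
	(void)stillFine;
	return 0;
}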

The advantages should be considered in light of implementation complexity, not performance. A naive linked list will basically never outperform a cache-coherent data structure. However, if iterator invalidation or excessive copying on insert/remove is causing you problems, a list can be useful.

As for the copying: profile first, then profile again after switching to a list. Never assume that one solution is faster than another without testing it. Something like the harness below is enough for a rough comparison.
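A minimal timing sketch (using std::chrono rather than the engine timer from the original post; absolute numbers will vary by compiler, machine, and optimization settings):

#include <chrono>
#include <cstdio>
#include <list>
#include <vector>

int main(){
	const int N=100000;

	std::chrono::steady_clock::time_point t0=std::chrono::steady_clock::now();
	std::vector<int> v;
	for (int q=0;q<N;q++) v.push_back(q);//amortized O(1), contiguous storage

	std::chrono::steady_clock::time_point t1=std::chrono::steady_clock::now();
	std::list<int> l;
	for (int q=0;q<N;q++) l.push_back(q);//one heap allocation per node

	std::chrono::steady_clock::time_point t2=std::chrono::steady_clock::now();
	std::printf("vector: %lld us, list: %lld us\n",
		(long long)std::chrono::duration_cast<std::chrono::microseconds>(t1-t0).count(),
		(long long)std::chrono::duration_cast<std::chrono::microseconds>(t2-t1).count());
	return 0;
}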


How about something more specific: a particle system where particles are created and destroyed on a regular basis. I think performance would trump any other concerns, right?

So? There are many systems where performance is more important than other factors. The existence of such systems in no way changes the answers provided in this thread.

I work as an engine consultant, which involves a lot of optimization work. The first thing on my mind *isn't* "how can I make the CPU do this work faster", it's "how can I get this data to/from the CPU faster"!!
CPUs are incredibly fast these days, but memory is incredibly slow. In relative terms, e.g. bytes moved per CPU clock, memory is actually getting slower and slower every year, so our focus is continually shifting away from counting CPU cycles and towards micro-optimizing the data layouts of our structures ;(

How about something more specific-- a particle system where particles are created and destroyed on a regular basis. I think performance would trump any other concerns, right?

For that kind of usage, a pool is common. You can allocate a large contiguous array of objects, but use a trick where, when an object isn't in use, it is reinterpreted as a node in a linked list, called a free list.
To spawn a new particle/etc, you just remove the first node from the free list and cast it to your object type. To release an object, you cast it to a node and push it onto the front of the list.
When properly implemented, this is one of the fastest possible memory allocators around, plus it lets you run your system-update loop over a large contiguous array, which is great for cache performance.
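A minimal sketch of that pool/free-list idea (the Particle and ParticlePool names are hypothetical, not from the thread):

#include <cstddef>
#include <vector>

struct Particle{
	float x,y,vx,vy;
};

class ParticlePool{
	//each slot is either a live Particle or, while unused, a link in the free list
	union Slot{
		Particle particle;
		Slot *next;
	};
	std::vector<Slot> slots;
	Slot *freeHead;
public:
	explicit ParticlePool(std::size_t capacity):slots(capacity),freeHead(NULL){
		//thread every slot onto the free list
		for (std::size_t i=capacity;i-->0;){
			slots[i].next=freeHead;
			freeHead=&slots[i];
		}
	}
	Particle* Spawn(){
		if (freeHead==NULL) return NULL;//pool exhausted
		Slot *s=freeHead;
		freeHead=s->next;//pop the front of the free list
		return &s->particle;//reuse the slot's storage as a particle
	}
	void Release(Particle *p){
		Slot *s=reinterpret_cast<Slot*>(p);//the particle occupies the whole slot
		s->next=freeHead;//push onto the front of the free list
		freeHead=s;
	}
};

Spawn and Release are both O(1) with no calls into the heap, and all the storage is one contiguous allocation; a real system would additionally track which slots are live so the update loop can walk them.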
