Hawkblood

Linked List vs std::vector

14 posts in this topic

What is it about std::vector that makes it so much faster than a linked list? This question was asked by someone else in an earlier post, but not stated quite like this, and I don't think a reason was actually concluded. Here is what I used to test this out:

 

The declaration:

#include <vector>   // required for std::vector

std::vector<int> testingVector;
struct LLTEST{
	LLTEST(){last=start=NULL;}
	struct LINKLIST{
		// prev was declared but never initialized in the original; set it here
		LINKLIST(int v,LINKLIST*p){val=v;prev=p;next=NULL;if (p!=NULL) p->next=this;}
		LINKLIST *prev,*next;
		int val;
		void Destroy(void){
			// note: recursive, so this uses one stack frame per node --
			// risky for a 100,000-element list
			if (next!=NULL) next->Destroy();
			delete this;
		}
	};
	LINKLIST *start,*last;
	void Append(int val);
	void LLDestroy(void);
}LLtest;
void LLTEST::Append(int val){
	if (last==NULL){//it's the first one
		start=new LINKLIST(val,NULL);
		last=start;
	}
	else{
		last=new LINKLIST(val,last);
	}
}
void LLTEST::LLDestroy(void){
	if (start!=NULL) start->Destroy();  // guard: the original crashed on an empty list
	start=last=NULL;                    // leave the list in a reusable state
}

first test with Linked List:

	GE->Timer.Update();
	__int64 ct=GE->Timer.TotalTime;

	for (int q=0;q<100000;q++){
		LLtest.Append(q);
	}
	LLtest.LLDestroy();
	GE->Timer.Update();
	tp1=float(GE->Timer.TotalTime-ct);

tp1 gives me roughly 26500 in total time.

 

Now for the std::vector:

	GE->Timer.Update();
	__int64 ct=GE->Timer.TotalTime;
	for (int q=0;q<100000;q++){
		testingVector.push_back(q);
	}
	testingVector.clear();

	GE->Timer.Update();
	tp1=float(GE->Timer.TotalTime-ct);

tp1 gives me around 2700... That's an order of magnitude difference! What in my LL is wrong enough to cause this much of a difference?
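One thing worth noting about the test itself: the list loop pays for 100,000 individual `new` calls plus 100,000 `delete` calls inside the timed region, while `clear()` on a vector of ints just resets the size, so the comparison overstates the gap somewhat. Also, the `GE->Timer` wrapper is engine-specific; a portable harness (a sketch, assuming a C++11 compiler) would look like this:

```cpp
#include <chrono>
#include <vector>

// Minimal portable timing helper; returns elapsed microseconds.
template <typename F>
long long time_us(F&& f) {
    auto t0 = std::chrono::steady_clock::now();
    f();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
}

// Example: time 100,000 push_backs (numbers vary by machine and build flags).
long long time_vector_pushes() {
    return time_us([] {
        std::vector<int> v;
        for (int q = 0; q < 100000; ++q) v.push_back(q);
    });
}
```

Running both loops through the same helper measures the list and the vector identically; also be aware that debug builds distort STL timings badly, so compare optimized builds only.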

 


So what I got out of the replies is "NEVER USE LINKED LISTS". I can't see any reason to use them given the information in your replies. I was initially worried about the vector reallocating each time I add new data, but I can just use vector::reserve(...) to preallocate if I know I'm going to have a lot of elements.
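One detail worth spelling out, since the two calls are easy to confuse: reserve() preallocates capacity without changing the element count, which is what you want before a push_back loop, whereas resize() also default-constructs elements. A quick sketch:

```cpp
#include <vector>
#include <cstddef>

// reserve() allocates capacity up front, so the push_back loop below
// performs a single allocation instead of repeated reallocations.
std::vector<int> filled(std::size_t n) {
    std::vector<int> v;
    v.reserve(n);                  // capacity >= n, size still 0
    for (std::size_t i = 0; i < n; ++i)
        v.push_back(static_cast<int>(i));
    return v;
}
```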



So what I got out of the replies is "NEVER USE LINKED LISTS". I can't see any reason to use them with the given information of your replies.

There are advantages in specific scenarios. E.g. an iterator to a list element does not become invalid when other elements are inserted or removed. And, as rip-off said, list elements are not copied or moved when other elements are inserted or deleted.
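A small sketch of that iterator-stability guarantee, using std::list:

```cpp
#include <list>
#include <iterator>

// A std::list iterator stays valid while other elements come and go;
// with std::vector, a reallocation would invalidate it.
int stable_iterator_demo() {
    std::list<int> l = {1, 2, 3};
    std::list<int>::iterator it = std::next(l.begin()); // points at 2
    l.push_front(0);   // inserting elsewhere...
    l.push_back(4);    // ...does not touch this iterator
    return *it;        // still dereferences the same node
}
```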

The advantages should be considered in light of implementation complexity, not performance. A naive linked list will basically never outperform a cache-coherent data structure. However, if iterator invalidation poses problems for you, or excessive copying on insert/remove is a problem, they can be useful.

As far as over-copying: profile first, and profile after using a list. Never assume that one solution is faster than another without testing it.

How about something more specific-- a particle system where particles are created and destroyed on a regular basis. I think performance would trump any other concerns, right?
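For that specific case there is a common idiom (not raised in the replies, so treat it as an aside): keep the particles in a vector and remove dead ones by swap-and-pop, which keeps the storage contiguous at the cost of ordering. A sketch, with a hypothetical Particle type:

```cpp
#include <vector>
#include <cstddef>

// Hypothetical particle type, for illustration only.
struct Particle { float x, y, life; };

// Remove particle i in O(1): overwrite it with the last element, then pop.
// Storage stays contiguous; iteration order is not preserved.
void kill_particle(std::vector<Particle>& pool, std::size_t i) {
    pool[i] = pool.back();
    pool.pop_back();
}
```

Since particle order rarely matters for rendering or simulation, this keeps the cache-friendly layout without ever shifting elements.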

So? There are many systems where performance is more important than other factors. The existence of such systems in no way changes the answers provided in this thread.

Linked lists have applications, for instance in kernels, where it may be physically impossible to get all the list items into a nice preallocated array in memory (for instance, in interrupt handlers, where the relevant pieces of data may be strewn across arbitrarily many address spaces throughout various processes and threads). In that case a linked list or other indirect data structure may be the only option available.

But if you have the ability to lay your data out contiguously and coherently in memory, you will see really big gains from an array data structure because of cache coherency: the memory access pattern is predictable, and that is what CPUs are good at, rather than fetching random bits of memory scattered all over the place. In most cases you can do that, which is why linked lists are not very popular: they have a rather large constant factor that typically makes them worse than simple arrays despite any theoretical advantages they may have, and that constant factor grows over time as memory becomes slower and slower relative to CPU speed.
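To make the access-pattern point concrete, here is the same linear walk over contiguous and node-based storage (a sketch; only a profiler on your machine gives real numbers):

```cpp
#include <vector>
#include <list>
#include <numeric>

// Both functions do the same O(n) sum. The vector walk is a unit-stride
// scan the prefetcher can stream; the list walk dereferences a next
// pointer per element, which can land anywhere in the heap.
long long sum_contiguous(const std::vector<int>& v) {
    return std::accumulate(v.begin(), v.end(), 0LL);
}
long long sum_scattered(const std::list<int>& l) {
    return std::accumulate(l.begin(), l.end(), 0LL);
}
```

The two produce identical results; the difference only shows up in wall-clock time on large inputs, where the contiguous version is typically several times faster.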


Thank you for your responses. I'm not so advanced that I understand all the terms given, but I have very specific reasons for using a list/vector. I want to store the data in as fast a structure as possible for things like particle systems.

The simplest rule is to prefer an array-based collection like vector unless you gain some practical benefit from a list. That goes back to the old standby of "guessing isn't profiling": you shouldn't worry much about speed in code that seems sensible unless profiling shows it to be a bottleneck.

That is really the entire point of performance tuning: there are a billion optimizations you can make to any codebase, but a few well-chosen changes may yield more gains than a thousand small ones, which is why profiling is important.

But in regards to the basic question, it mainly comes down to the cache. Due to physical limitations of CPU design, cache can be expected to become even more important in the future, much as adding more cores became the main improvement a few years ago. Cache can make things that sound much slower in theory turn out much faster in practice.

So what I got out of the replies is "NEVER USE LINKED LISTS". I can't see any reason to use them given the information in your replies. I was initially worried about the vector reallocating each time I add new data, but I can just use vector::reserve(...) to preallocate if I know I'm going to have a lot of elements.

The advantage of a linked list on a modern PC is not performance, but the fact that iterators and addresses remain valid when the container changes. So yes, you should prefer a dynamic array when you can, but there are cases where using a list is simpler. In some cases you can use a vector with a reserved size, but that's not such a good solution when the typical case has only a few elements. Note: if you just need addresses to remain valid on growth, a deque may be more suitable.
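A sketch of that deque property: push_back on a std::deque never invalidates pointers or references to existing elements (iterators are another matter), because the deque grows in new blocks rather than relocating old ones:

```cpp
#include <deque>

// Growing a deque at the ends leaves existing elements where they are,
// so this pointer survives 1000 push_backs; with a vector it could
// dangle after the first reallocation.
int deque_pointer_demo() {
    std::deque<int> d = {10, 20, 30};
    int* p = &d[1];            // address of the element holding 20
    for (int i = 0; i < 1000; ++i)
        d.push_back(i);
    return *p;                 // still valid
}
```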



http://channel9.msdn.com/Events/Build/2014/2-661

 

I watched this earlier this week. The stuff he talked about regarding prefetching is pretty interesting, especially in regard to insertion/deletion and accessing an array/vector (contiguous memory) vs a linked list (random memory).

In the case of an insert into a linked list, you have to traverse the list until you find the node you wish to insert after, create the new node, then update the "bookkeeping" pointers on the existing nodes. Since the node locations are random, and each node carries bookkeeping overhead, they don't cache well.

In the case of random inserts into an array/vector, you still have to traverse the elements until you find where you want to insert the new one. The difference this time is that the processor's prefetcher can pretty accurately guess that you are going to access the next memory locations and have the elements waiting in the caches, which speeds up locating the insertion point. Insertion still requires copying all of the elements after the new location, but since this again is a linear operation, the prefetcher makes it run very fast in the cache.

This is not a guarantee for all situations, but in cases where very linear operations are taking place, the array/vector is going to win almost every time.
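The mid-insert case described above looks like this with std::vector (a sketch; the shift performed by insert() is exactly the contiguous copy the prefetcher handles well):

```cpp
#include <vector>
#include <algorithm>

// Insert into a sorted vector: find the spot, then let insert() shift
// everything after it up one slot in a single contiguous move.
std::vector<int> insert_sorted(std::vector<int> v, int value) {
    std::vector<int>::iterator pos =
        std::lower_bound(v.begin(), v.end(), value);
    v.insert(pos, value);
    return v;
}
```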

