// first method: creation
int** my2darray = new int*[xsize];
for( unsigned int x=0; x<xsize; x++ )
    my2darray[x] = new int[ysize];
// first method: access
int val = my2darray[x][y];
// second method: creation
int* my2darray = new int[xsize*ysize];
// second method: access
int val = my2darray[y*xsize+x];
What is faster? [x][y] or [y*w+x]?
Hello members,
What technique would you use to create arrays with more than one dimension? The two methods above are what I have in mind. Which one is faster?
Have you also looked at the assembly output of your compiler in both cases? I would take a look if I were you.
Quote: Original post by darookie
Have you profiled both? This will answer your question.
He wouldn't be asking if he had done so.
Quote: Original post by antareus
Quote: Original post by darookie
Have you profiled both? This will answer your question.
He wouldn't be asking if he had done so.
It's the polite way of telling him what he already should have done.
Both versions are dangerous because they don't use RAII.
std::vector< std::vector<int> > my2darray(xsize, std::vector<int>(ysize));
does the same job safely.
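If you want the flat (single-allocation) layout with the same RAII safety, a std::vector over one contiguous block works too. A minimal sketch (the Grid2D name and at() helper are illustrative, not from the thread):

```cpp
#include <cstddef>
#include <vector>

// RAII wrapper over one contiguous block, indexed as [y*width + x].
// The vector frees the memory automatically when Grid2D goes out of scope.
struct Grid2D {
    std::size_t width, height;
    std::vector<int> data;

    Grid2D(std::size_t w, std::size_t h) : width(w), height(h), data(w * h, 0) {}

    int&       at(std::size_t x, std::size_t y)       { return data[y * width + x]; }
    const int& at(std::size_t x, std::size_t y) const { return data[y * width + x]; }
};
```

This keeps the single-dereference access pattern of method two while avoiding manual new/delete entirely.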
In method one, you dereference my2darray, then you dereference my2darray[x].
In method two, you dereference just once, in my2darray[y*xsize+x].
So access using method two should be about twice as fast.
Neither. It depends on your architecture, and micro-optimizing at this level is really a waste of time unless you know that your code *has* to run in 25 microseconds on an i386EX (just for example).
Generally:
bob[x][y] ~= bob[x+y*stride]
(depends on the relative speeds of integer multiplication and memory access, i.e., the multiply is faster on high-end CPUs and the extra memory access is probably faster on slower (Pentium and below) chips)
Faster yet is:
bob[x+(y<<shift)]
if you're really choking for speed, but as I said before, it's not really worth it. Your code probably has opportunities for much greater gains at the algorithmic level.
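To make the shift trick above concrete, here is a sketch assuming the row stride is a power of two (the shift value 5 and the flatIndex name are illustrative, not from the thread):

```cpp
#include <cstddef>

// With a power-of-two stride, the multiply in y*stride can be
// written as a shift. Here shift = 5, so stride = 32 ints per row.
const std::size_t kShift  = 5;
const std::size_t kStride = std::size_t(1) << kShift;  // 32

inline std::size_t flatIndex(std::size_t x, std::size_t y) {
    return x + (y << kShift);  // same value as x + y * kStride
}
```

Note that for a compile-time constant stride, a modern compiler will typically perform this strength reduction itself, which is one more reason the hand optimization is rarely worth it.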
For static arrays both access methods are equally good.
For dynamic arrays the second is better. It takes less memory (x pointers + x*y*sizeof(type) vs. 1 pointer + x*y*sizeof(type)), the memory is allocated in one contiguous block, and it's easier to allocate and deallocate. Access is also faster because the row stride is constant. I personally always use the second if the array is width*height. The first is good for varying row sizes.
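A sketch of the flat allocation described above, with one new[]/delete[] pair for the whole grid (function names are illustrative; in real code prefer std::vector for the RAII reasons given earlier):

```cpp
#include <cstddef>

// One allocation, one contiguous block, one delete[] to clean up.
int* createGrid(std::size_t xsize, std::size_t ysize) {
    return new int[xsize * ysize]();  // () value-initializes to 0
}

void destroyGrid(int* grid) {
    delete[] grid;
}

// Row-major access: constant stride xsize, single dereference.
inline int& cell(int* grid, std::size_t xsize, std::size_t x, std::size_t y) {
    return grid[y * xsize + x];
}
```

Compare this with method one, which needs a loop of xsize allocations to create and another loop to destroy.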
I'd prefer the single-dimension version and using pointer arithmetic to access the data (walking it linearly one way or the other uses addition instead of multiplication), if you truly have to use arrays like that. (I don't know your platform or your data access patterns, so it's hard to give proper advice.)
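What walking the flat array with pointer arithmetic looks like in practice, as a rough sketch (sumGrid is an illustrative example, not from the thread):

```cpp
#include <cstddef>

// Sum a row-major grid by incrementing a pointer: the index
// y*xsize + x is never recomputed per element, only advanced.
long long sumGrid(const int* grid, std::size_t xsize, std::size_t ysize) {
    long long total = 0;
    const int* end = grid + xsize * ysize;
    for (const int* p = grid; p != end; ++p)
        total += *p;
    return total;
}
```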
That being said, optimize your real bottlenecks instead of this to begin with. Premature optimization is the root of much evil :p
This topic is closed to new replies.