
2D array traversal



I'm just curious, but have little knowledge of the underlying mechanics of the machine: does traversal of a 2D array in either a column or row direction make any difference in speed? If not, why not? And if so, which is fastest and why? For example (I am learning C++ primarily): given a 2D character array filled with random characters, if I parsed it and made a comparison of each character against a reference, would there be any difference in doing it by column or by row (assuming it's a square matrix)? I understand they are no different conceptually, but mechanically they may be.

I think arrays in C++ are laid out in RAM like this:

{ {0, 1, 2}, {3, 4, 5}, {6, 7, 8} }

a[0] = {0, 1, 2}
a[1] = {3, 4, 5}

but I don't know if it's faster or slower to go through them one way or the other. If there is a faster way, it's possibly:

for( int x = 0; x < 3; x++ )
    for( int y = 0; y < 3; y++ )
        a[x][y] = i;

Just test both ways and see which is faster.

If you are familiar with pointers you can check the order using this code:

#include <iostream>

int main()
{
    // create and fill a 3x3 array with the numbers 0-8
    int a[3][3];
    int i = 0;
    for( int x = 0; x < 3; x++ )
    {
        for( int y = 0; y < 3; y++ )
        {
            a[x][y] = i;
            i++;
        }
    }

    // create an integer pointer and point it at that array
    int * m = (int *)&a;

    // go through the array in memory order // pointers are fun^^
    for( int j = 0; j < 9; j++ )
        std::cout << " " << m[j] << " ";
}

hope that helps

The language matters for the details here, so I will assume C or C++ (both of which operate the same as far as this example is concerned).

An array of some type T will be contiguous in memory; that is, there will be no gaps between T[n] and T[n+1]. As such, an array-of-array-of-T ("2D array") will also be contiguous. Conceptually, the array indexing mathematics work out to be "the same" regardless of whether you traverse the 2D array in "column" or "row" direction. For random access, the performance will be the same.

However, for ordered traversal of the array, there is a semantic and a performance difference, although the performance difference is generally quite ignorable unless you start talking about fairly large arrays. Consider a 2D array:
A B C D
E F G H
I J K L
M N O P

In memory, that array is really just one big contiguous lump of data:
A B C D E F G H I J K L M N O P

If you iterate across the array in a fashion such that the row index changes more frequently than the column index, you get a result that is:
A E I M B F J N C G K O D H L P

But if you iterate the other way, so the column index changes more frequently than the row index, you get:
A B C D E F G H I J K L M N O P

This illustrates two points. First, the two iterations do not produce the same sequence; so if there is semantic meaning in how the data is organized or processed, then you can probably only select one of the two iteration options (or reorganize the data).

Second, the first iteration does not access elements one after another in memory. It hops around the chunk of memory in a pattern. This sort of access pattern is usually undesirable, because it's not as coherent; depending on the size of the data and the size of the cache on the chip, this kind of iteration can cause more cache misses (and thus impact performance negatively).

Promit's got a journal entry that talks about cache misses and locality of reference. He's using linked lists as an example, but it's still a good read.

Quote:
 Given a 2D character array filled with random characters, if I parsed it and made a comparison of each character against a reference, would there be any difference in doing it by column or by row (assuming it's a square matrix)?

The comparison itself will not differ, but the iteration may as discussed above.

Quote:
 I understand they are no different conceptually, but mechanically they may be.

Again, as above, there may be a conceptual difference if order matters, but it sounds like it doesn't in this case.

Personally, I don't like to use n-d arrays (for n > 1) in C or C++, preferring instead simple linear arrays of the appropriate size. Especially when the array's not statically allocated, because in that case what I said about them being contiguous isn't true, since you have to dynamically allocate the inner bounds, and things get even nastier.

But when the array is statically allocated, you can see from the example above that an N-by-M array a is laid out the same as a flat array of N * M elements; a[i][j] corresponds to flat[i * M + j], so you can index it as a grid with a bit of simple math.

Quote:
 Original post by jpetrie
Especially when the array's not statically allocated, because in that case what I said about them being contiguous isn't true

Sure you can dynamically allocate contiguous multidimensional arrays:
int (*array)[20][30] = malloc(10 * 20 * 30 * sizeof(int));

Quote:
 int (*array)[20][30] = malloc(10 * 20 * 30 * sizeof(int));

Doesn't work like that (shouldn't even compile as written), and also still requires you to know bounds at compile time. Replace 20 and 30 with X and Y and you're skunked and need to resort to the looping method.

Also only sane for those cases where malloc is sane, e.g., mostly C. Even assuming it worked.


I had considered measuring the difference experimentally, but I wanted to consider the theory first and then test.

Quote:
 Original post by jpetrie
In memory, that array is really just one big contiguous lump of data:
A B C D E F G H I J K L M N O P
If you iterate across the array in a fashion such that the row index changes more frequently than the column index, you get a result that is:
A E I M B F J N C G K O D H L P
But if you iterate the other way, so the column index changes more frequently than the row index, you get:
A B C D E F G H I J K L M N O P
This illustrates two points. First, the two iterations do not produce the same sequence; so if there is semantic meaning in how the data is organized or processed, then you can probably only select one of the two iteration options (or reorganize the data).

Yes, I understand the actual sequencing is different and this will have an effect on meaning. Sorry I didn't clarify that when I indicated they were conceptually the same (I assumed this was implicit).

Quote:
 Original post by jpetrie
Second, the first iteration does not access elements one after another in memory. It hops around the chunk of memory in a pattern. This sort of access pattern is usually undesirable, because it's not as coherent; depending on the size of the data and the size of the cache on the chip, this kind of iteration can cause more cache misses (and thus impact performance negatively).

This was my concern, since in my exercise I am traversing a 2D array column first: A, E, I, M... etc.

The fundamental question was whether it would be better to arrange the data so that it mirrored the basis, so that I was traversing row first. It's not really a big deal for me, other than I hate not knowing these things.

Quote:
 Personally, I don't like to use n-d arrays (for n > 1) in C or C++, preferring instead simple linear arrays of the appropriate size. Especially when the array's not statically allocated, because in that case what I said about them being contiguous isn't true, since you have to dynamically allocate the inner bounds, and things get even nastier.

Hmm, this I'm not sure I fully understand. You mean you don't define 2D arrays explicitly as Array[x][y] but rather Array[x], Array[y]?
The intricacies of dynamic allocation and bounds are lost on me, I'm afraid.

Quote:
 Original post by jpetrie
Doesn't work like that (shouldn't even compile as written)

It compiles and works perfectly as written. Tested with gcc 3.4.5, no compiler errors or warnings.
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int (*array)[20][30] = malloc(10 * 20 * 30 * sizeof(int));
    int i, j, k;
    for (i = 0; i < 10; ++i)
        for (j = 0; j < 20; ++j)
            for (k = 0; k < 30; ++k)
                array[i][j][k] = 42;
    free(array);
    return 0;
}

Quote:
 Original post by jpetrie
Also only sane for those cases where malloc is sane, e.g., mostly C.

Right. As Bjarne once said "Users of arrays are braindead" :)

Quote:
 Original post by jpetrie
Even assuming it worked.

Which it does.

Quote:
 Original post by Barking_Mad
The fundamental question was if it would be better suited to arranging the data such [...] that I was traversing a row first.

Such traversal is more cache friendly, so depending on the size of the array, it might make a difference. How large is your array?

I am using vectors in my project in general. I was just curious about the theoretical performance of arrays. AFAIK it wouldn't be much different for a vector with respect to my question.

Other than the dynamic allocation behaviour of vectors, I'm not sure what guarantees there are about vectors being contiguous.

Feel free to enlighten me :)

Quote:
 Original post by Barking_Mad
I was just curious about the theoretical performance of arrays.

Arrays are as fast as you can get. However, that does NOT mean that other solutions like vector or multi_array are slower.

From a usability perspective, arrays are just awful. As already pointed out, array dimensions have to be known at compile time. Furthermore, arrays are second class citizens, because
- You cannot assign arrays to other arrays
- You cannot pass arrays to functions by value
- You cannot return arrays from functions

Arrays are basically a compile time concept. An array exists only within the context where it was defined. You can't pass it around (if you try it, it decays to a pointer) - arrays are neither lvalues nor rvalues.
