# [.net] C# arrays: multidimensional versus lexicographic


## Recommended Posts

Hello,

Lately I've been doing some processor-intensive physical modeling in C#. My simulation code needs to store a two-dimensional array, which can be implemented in two ways:

1. True multidimensional arrays, created as double[,] grid = new double[width, height] and indexed as grid[x, y].
2. Unidimensional arrays with lexicographic indexing, created as double[] grid = new double[width * height] and indexed as grid[y * width + x].

Now, according to my profiling (System.Diagnostics.Stopwatch), unidimensional arrays make the code 2-4 times faster than multidimensional arrays! Is it common knowledge that multidimensional arrays are slower than unidimensional ones? How are multidimensional arrays implemented internally? I was quite shocked by the speed difference.

Thanks a lot,
- Mikko

P.S. Yes, I've checked that both types of arrays produce the same results (i.e. my simulator isn't buggy). The profiling is also repeated a number of times (to avoid program startup bias etc.), and the random variation is negligible.
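For reference, a minimal sketch of the two layouts being compared (the dimensions and class name here are illustrative, not from the original code):

```csharp
using System;

class GridLayouts
{
    static void Main()
    {
        int width = 4, height = 3;

        // 1) True multidimensional (rectangular) array, indexed grid2d[x, y].
        double[,] grid2d = new double[width, height];
        grid2d[2, 1] = 42.0;

        // 2) One-dimensional array with lexicographic indexing:
        //    element (x, y) lives at flat index y * width + x.
        double[] grid1d = new double[width * height];
        grid1d[1 * width + 2] = 42.0;

        Console.WriteLine(grid2d[2, 1] == grid1d[1 * width + 2]);  // True
    }
}
```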

##### Share on other sites
Don't forget staggered arrays:

double[][] grid = new double[height][];
for (int i = 0; i < height; i++)
    grid[i] = new double[width];

I have read somewhere in the MSDN docs that staggered arrays perform better than multidimensional arrays, so I think it is known that multidimensional arrays are slower than the alternatives. I don't recall ever reading a comparison of staggered arrays with single-dimensional arrays, though. You should try that and see how it compares.
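One rough way to run that comparison yourself with the Stopwatch the original poster mentioned (a sketch, not a rigorous benchmark; the sizes, names, and loop bodies are illustrative):

```csharp
using System;
using System.Diagnostics;

class LayoutBenchmark
{
    public static double SumMd(double[,] a)
    {
        double s = 0;
        for (int y = 0; y < a.GetLength(0); y++)
            for (int x = 0; x < a.GetLength(1); x++)
                s += a[y, x];
        return s;
    }

    public static double SumJagged(double[][] a)
    {
        double s = 0;
        for (int y = 0; y < a.Length; y++)
        {
            double[] row = a[y];               // hoist the row reference
            for (int x = 0; x < row.Length; x++)
                s += row[x];
        }
        return s;
    }

    public static double SumFlat(double[] a)
    {
        double s = 0;
        for (int i = 0; i < a.Length; i++)     // canonical pattern the JIT optimises
            s += a[i];
        return s;
    }

    static void Main()
    {
        const int W = 512, H = 512, Reps = 100;
        var md = new double[H, W];
        var jag = new double[H][];
        var flat = new double[W * H];
        for (int y = 0; y < H; y++)
        {
            jag[y] = new double[W];
            for (int x = 0; x < W; x++)
                md[y, x] = jag[y][x] = flat[y * W + x] = x + y;
        }

        Measure("multidimensional", () => SumMd(md), Reps);
        Measure("jagged",           () => SumJagged(jag), Reps);
        Measure("flat",             () => SumFlat(flat), Reps);
    }

    static void Measure(string name, Func<double> body, int reps)
    {
        body();                                // warm up so the JIT compiles first
        var sw = Stopwatch.StartNew();
        double sink = 0;
        for (int r = 0; r < reps; r++) sink += body();
        sw.Stop();
        Console.WriteLine($"{name}: {sw.ElapsedMilliseconds} ms (checksum {sink})");
    }
}
```

The checksum is printed so the compiler cannot dead-code-eliminate the loops; absolute timings will vary by machine and runtime version.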

##### Share on other sites
The performance of memory access patterns, processor cache behavior, etc. are rather deep subjects -- especially with virtual machines. Since you didn't provide the code you are measuring, I'll assume you are indexing your multidimensional array intelligently (row-major access) and aren't doing unnecessary boxing operations in the loop, putting needless pressure on the memory system. So, just a few general points concerning C# efficiency.

If you are writing something along these lines:
for(int index = 0; index < a.Length; index++)

the JIT compiler checks the validity of the array bounds just once, before the loop, with a statement approximately like this:

if ((a.Length - 1) <= a.GetUpperBound(0))

provided the array is one-dimensional and zero-based.

No such luck with multidimensional, jagged ("staggered" in kanato's post), or non-zero-based arrays: the upper bound gets checked on every access, along with additional checks related to the lower bound of the array. Single-dimensional zero-based arrays also get special IL instructions -- ldelem, ldelema, ldlen, newarr and stelem -- that allow the compiler to emit more optimised code.
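The contrast can be sketched like this (a minimal illustration; the method names are made up for the example):

```csharp
using System;

class BoundsCheckDemo
{
    // Canonical pattern: the JIT can prove 'i' stays within [0, a.Length)
    // and elide the per-access bounds check inside the loop.
    public static double SumFast(double[] a)
    {
        double s = 0;
        for (int i = 0; i < a.Length; i++)
            s += a[i];
        return s;
    }

    // Rectangular array: each a[y, x] access must validate both
    // dimensions, and the checks are harder for the JIT to hoist.
    public static double SumRect(double[,] a)
    {
        double s = 0;
        for (int y = 0; y < a.GetLength(0); y++)
            for (int x = 0; x < a.GetLength(1); x++)
                s += a[y, x];
        return s;
    }

    static void Main()
    {
        Console.WriteLine(SumFast(new double[] { 1, 2, 3 }));             // 6
        Console.WriteLine(SumRect(new double[,] { { 1, 2 }, { 3, 4 } })); // 10
    }
}
```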

If performance is a concern, a single-dimensional array is the way to go, with multidimensional arrays being the slowest. You could also consider using unsafe code, which can bring the observed wall-clock time down further.
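A sketch of what the unsafe variant might look like (requires compiling with /unsafe; the names here are illustrative). The fixed statement pins the array so the GC cannot move it while the raw pointer is in use:

```csharp
using System;

class UnsafeSum
{
    public static unsafe double Sum(double[] a)
    {
        double s = 0;
        fixed (double* p = a)               // pin the array for the GC
        {
            double* end = p + a.Length;
            for (double* cur = p; cur < end; cur++)
                s += *cur;                  // no bounds check on pointer access
        }
        return s;
    }

    static void Main()
    {
        Console.WriteLine(Sum(new double[] { 1, 2, 3, 4 }));  // 10
    }
}
```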

##### Share on other sites
Thank you for your kind replies, gentlemen!

Changing the index order for multidimensional arrays from grid[x,y] to grid[y,x] sped the code up a lot, probably due to better cache coherence.

I also tried out jagged arrays, and in fact they were even faster than the unidimensional lexicographic arrays! @_@ I was quite surprised by this; I thought that the "double indirection" required by jagged arrays would be highly detrimental to efficiency.
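The index-order speedup comes from C# rectangular arrays being stored in row-major order: the last index varies fastest in memory, so walking it in the inner loop touches memory sequentially. A small sketch of the two traversal orders (sizes are illustrative):

```csharp
using System;

class AccessOrderDemo
{
    static void Main()
    {
        int width = 256, height = 256;
        var grid = new double[height, width];

        // Cache-friendly: the inner loop walks the last index,
        // so consecutive iterations touch adjacent memory.
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
                grid[y, x] += 1.0;

        // Cache-hostile: each inner-loop step jumps a whole row
        // (width * sizeof(double) bytes) ahead in memory.
        for (int x = 0; x < width; x++)
            for (int y = 0; y < height; y++)
                grid[y, x] += 1.0;

        Console.WriteLine(grid[0, 0]);  // 2 (each traversal added 1 to every cell)
    }
}
```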

Anyways, thanks guys.

- Mikko