Cache friendlier way to access 2D array?

No specific need for such a thing, just asking out of interest because I could not find anything using Google.

A normal 2D array is not very cache friendly when you iterate across the different "layers" of the array (i.e. against the direction the elements are laid out in memory).

The access is done using x*XSize + y
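For concreteness, here is the asymmetry with the usual row-major layout (a minimal C++ sketch; the names and sizes are made up, and the exact indexing formula gets sorted out further down the thread):

```cpp
#include <vector>

// Row-major layout: element (x, y) lives at index y * width + x.
// The inner loop over x walks consecutive addresses (cache friendly);
// the inner loop over y jumps `width` elements per step (cache hostile).
float sumBothWays(const std::vector<float>& grid, int width, int height) {
    float total = 0.0f;
    for (int y = 0; y < height; ++y)        // fast: sequential access
        for (int x = 0; x < width; ++x)
            total += grid[y * width + x];

    for (int x = 0; x < width; ++x)         // slow: strided access
        for (int y = 0; y < height; ++y)
            total += grid[y * width + x];
    return total;
}
```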

Does there exist an indexing "equation" for which elements that are spatially close to each other in 2D space are also likely to be close to each other in memory, regardless of axis?

I'd imagine it would work like a quadtree without the upper levels (built in depth-first order).

Assuming we use quadtree-like indexing, would it work for non-power-of-two (non-POT) grid sizes?

o3o


This is commonly done in graphics hardware for textures, where it's called tiling or swizzling. You could apply the same process to arrays.

http://fgiesen.wordpress.com/2011/01/17/texture-tiling-and-swizzling/ explains the process well.
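The swizzling that article describes boils down to Morton (Z-order) encoding: interleave the bits of x and y so that nearby cells tend to share index prefixes and thus land near each other in memory. A minimal sketch, assuming power-of-two dimensions and coordinates that fit in 16 bits:

```cpp
#include <cstdint>

// Spread the low 16 bits of v apart so there is a zero bit
// between each pair of original bits.
static uint32_t part1By1(uint32_t v) {
    v &= 0x0000FFFF;
    v = (v | (v << 8)) & 0x00FF00FF;
    v = (v | (v << 4)) & 0x0F0F0F0F;
    v = (v | (v << 2)) & 0x33333333;
    v = (v | (v << 1)) & 0x55555555;
    return v;
}

// Morton (Z-order) index: the bits of x and y interleaved.
uint32_t mortonIndex(uint32_t x, uint32_t y) {
    return part1By1(x) | (part1By1(y) << 1);
}
```

Access then becomes grid[mortonIndex(x, y)] instead of grid[y * width + x].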

Are you thinking of something like a Hilbert curve?

In addition to the texture tiling link above, you MUST understand your own usage patterns before attempting to optimize for the cache.

The tiling pattern assumes you are operating as a kernel scanning across an image. That is very often true for certain post-processing effects.

Your goal is to have roughly linear data access along the array. If your algorithms are linear across the array then you don't need to do anything special. If your algorithms require scanning several lines at once, such as the 4 lines mentioned in the article, then it would make more sense to have them tiled as it describes.

If your access pattern is something else entirely, you will want a layout that mimics whatever access pattern you are using.
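As a sketch of the tiled layout described above (the tile size here is an arbitrary choice, and this assumes the grid dimensions are multiples of the tile size):

```cpp
// Tiled layout: the grid is split into TILE x TILE blocks, each block
// stored contiguously, so neighbours within a tile share a small
// block of memory regardless of axis.
constexpr int TILE = 8; // 8x8 floats = 256 bytes, a handful of cache lines

// widthInTiles = width / TILE (assumes width and height are multiples of TILE)
int tiledIndex(int x, int y, int widthInTiles) {
    int tileX = x / TILE, inX = x % TILE;
    int tileY = y / TILE, inY = y % TILE;
    return (tileY * widthInTiles + tileX) * (TILE * TILE) + inY * TILE + inX;
}
```

One common answer to the non-POT question from the opening post is simply to pad each dimension up to the next multiple of the tile size.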

Everything stated so far is of course correct, but it misses an obvious item. Frob makes the proper suggestion: access pattern is key, and all the swizzling in the world won't make a difference if the access is outside expectations. More importantly, a "bug" in the given equation needs to be brought up: "x*XSize + y". That's likely a typo; it should be "x + y*XSize". OK, why is that important? As Frob pointed out, your access pattern is critical, and the initial (incorrect?) equation would need a memory layout biased toward vertical strips, which could actually be mostly solved by storing the image rotated 90 degrees and exchanging x/y, without any fancy swizzles.

Sorry folks, I just figured it's proper to ask if the standard strided image format formula was mistyped before suggesting complicated solutions. :)

Assuming the equation was a typo, I disagree with the original hypothesis that "iteration" is inefficient. Again, as Frob mentions, this depends on access patterns. For simple iteration over the pixels, you will likely slow things down using other packing methods. Bilinear operations are also likely to slow down with alternative layouts on AMD/Intel CPUs, thanks to the associativity of their caches: the CPU keeps each of the rows being read in its own cache line without constantly purging and reloading them, and each cache line holds at least, say, 4 pixels (likely more unless you are on a REALLY dated CPU), so the prefetcher is busy loading the next section from each of the rows while the CPU is actively processing the data already loaded.

Getting back on track: as Frob points out, it's all about what you want to do with the data. If you wanted to perform a Gaussian blur on the image, you might think swizzling is the way to go. You'd be wrong in that case. Execute the blur only left to right so you get linear memory access on multiple lines, rotate the original image 90 degrees in memory, apply the blur again, then rotate back. Because the Gaussian kernel is separable, the two 1D passes compose into the full 2D blur: mathematically identical results, barring rounding errors, but MUCH faster than accessing memory beyond cache-line association limits. (And there are more tricks to reduce top-to-bottom memory access and keep it cache efficient no matter what filter size you want.)

So, accessing arrays/memory is all about how you need to access it and what you intend to do with it. I use the Gaussian blur as the example because it has a big "window" of access, yet the math can be broken into several stages, and that's what allows you to optimize the memory access. Truly random access? Forget it, linear is best. :)
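A sketch of that two-pass scheme (illustrative C++, heavily simplified; the 90-degree rotate is done here as a plain transpose, which has the same effect on the access pattern for a symmetric kernel). All buffers are pre-sized to width * height:

```cpp
#include <algorithm>
#include <vector>

// One horizontal 1D Gaussian pass. Every read runs left to right, so
// memory access stays sequential. `kernel` is a normalized 1D Gaussian
// of odd size.
void blurHorizontal(const std::vector<float>& src, std::vector<float>& dst,
                    int width, int height, const std::vector<float>& kernel) {
    const int r = static_cast<int>(kernel.size()) / 2;
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            float sum = 0.0f;
            for (int k = -r; k <= r; ++k) {
                int sx = std::clamp(x + k, 0, width - 1); // clamp at edges
                sum += src[y * width + sx] * kernel[k + r];
            }
            dst[y * width + x] = sum;
        }
}

// Swap rows and columns so the vertical pass becomes another horizontal one.
void transpose(const std::vector<float>& src, std::vector<float>& dst,
               int width, int height) {
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            dst[x * height + y] = src[y * width + x];
}

// Full 2D blur: blurHorizontal(src, a, w, h, k); transpose(a, b, w, h);
//               blurHorizontal(b, a, h, w, k); transpose(a, dst, h, w);
```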

Yeah, it was supposed to be x*YSize + y.

Basically I was wondering if there is another array indexing equation which doesn't favor either axis.

I was thinking of an access pattern where you might pick a cell and then need to access the cells near it, in any direction. Not something like image processing.

I guess simple tiling into chunks of some size is the simplest way, and also effective...

o3o

Z-order curves are a common solution (that's the same Morton/swizzle pattern as in the texture tiling link above).

Are you thinking of something like a Hilbert curve?

You actually could use a Hilbert curve as the layout for a 2D array, and it would have the property that (x, y) positions near each other in the Euclidean sense tend to be near each other in the array; there would be exceptions, but generally this would be the case.

It would only work with power-of-two-sized arrays, however. As for the time efficiency of going from index -> (x, y) and (x, y) -> index: the standard conversion is O(log n) in the grid side length n, one step per bit of the coordinates.
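For reference, here is a sketch of that standard iterative conversion for the (x, y) -> index direction, adapted from the widely circulated algorithm (assumes n is a power of two):

```cpp
#include <utility>

// Rotate/flip a quadrant so the sub-curve has the right orientation.
static void rot(int n, int& x, int& y, int rx, int ry) {
    if (ry == 0) {
        if (rx == 1) {
            x = n - 1 - x;
            y = n - 1 - y;
        }
        std::swap(x, y);
    }
}

// Map (x, y) on an n x n grid (n a power of two) to its Hilbert curve
// index. Runs in O(log n): one iteration per bit of the coordinates.
int hilbertIndex(int n, int x, int y) {
    int d = 0;
    for (int s = n / 2; s > 0; s /= 2) {
        int rx = (x & s) > 0;
        int ry = (y & s) > 0;
        d += s * s * ((3 * rx) ^ ry);
        rot(n, x, y, rx, ry);
    }
    return d;
}
```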

Wait until towards the end of your project, profile your code, and then determine whether the array access is even a bottleneck worth optimizing.

