It doesn't work like that. You can't sort a single array simultaneously by several dimensions of a continuous space. If you use my suggestion and just sort the array in the depth direction, you only have to brute-force search objects with similar depth. Say you want to find objects within radius 1.2 of coordinates [5.5, 2.1]. That gives depth limits of 2.1 - 1.2 = 0.9 and 2.1 + 1.2 = 3.3. Binary search the array for the indices at depths 0.9 and 3.3, which is practically free, then brute-force search only between those indices. If your map is 50x50 and objects are distributed evenly in the depth direction, only about 5% of them have to be searched in this case. Most of that 5% will be rejected very cheaply because their horizontal distance alone falls outside the radius.

> Method 2 makes it easy to find all objects in a given map. But if I wanted to do something like .. find all objects from 2,2 to 3,3, I would need to do a brute force search through my object structure to find them all (although if they were sorted, I could easily stop once I reached 3,4).
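The depth-sorted search described above can be sketched like this (the object tuples and the convention that the second coordinate is depth are just illustrative assumptions):

```python
import bisect

# Hypothetical objects as (x, depth) tuples, kept sorted by depth.
objects = sorted([(5.0, 1.0), (5.5, 2.0), (9.0, 2.5), (5.4, 3.0), (1.0, 4.0)],
                 key=lambda o: o[1])
depths = [o[1] for o in objects]  # parallel list for the binary search

def query_radius(cx, cy, r):
    """Return objects within radius r of (cx, cy)."""
    lo = bisect.bisect_left(depths, cy - r)   # first index with depth >= cy - r
    hi = bisect.bisect_right(depths, cy + r)  # one past the last index with depth <= cy + r
    # Brute-force only the narrow depth slice; most candidates are
    # rejected by the cheap distance check.
    return [(x, y) for (x, y) in objects[lo:hi]
            if (x - cx) ** 2 + (y - cy) ** 2 <= r * r]

print(query_radius(5.5, 2.1, 1.2))  # the worked example from above
```

Note the squared-distance comparison, which avoids a square root per candidate.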
You haven't even defined a problem, other than the overlap one, which was trivially solved by the depth sort. It's pointless to talk about acceleration data structures, much less implement bad ones, before you have defined which operations you need them to accelerate. For instance: how many objects will there be at most, and is anything known about how they will be distributed spatially? How many "adjacent objects" queries will be made per tick at most? How many other objects will fall within the average radius of such a query? There is no universal data structure that is good for everything.

> Using method 3 helps alleviate this. I agree that it is messy though, but I feel like the solution still *has* to be to index the objects by location.
If you just want to dig into the technology that exists for this purpose, read about BSP trees, quadtrees or other serious acceleration structures and implement one of them.
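For a flavor of what those look like, here is a minimal point-quadtree sketch; the capacity, bounds, and API are illustrative choices, not a production implementation:

```python
class Quadtree:
    """A node covering the square [x, x+size) x [y, y+size)."""

    def __init__(self, x, y, size, capacity=4):
        self.x, self.y, self.size = x, y, size
        self.capacity = capacity
        self.points = []
        self.children = None  # four sub-quadrants once this node splits

    def insert(self, px, py):
        if not (self.x <= px < self.x + self.size and self.y <= py < self.y + self.size):
            return False  # point lies outside this node's region
        if self.children is None:
            if len(self.points) < self.capacity:
                self.points.append((px, py))
                return True
            self._split()
        return any(c.insert(px, py) for c in self.children)

    def _split(self):
        h = self.size / 2
        self.children = [Quadtree(self.x, self.y, h, self.capacity),
                         Quadtree(self.x + h, self.y, h, self.capacity),
                         Quadtree(self.x, self.y + h, h, self.capacity),
                         Quadtree(self.x + h, self.y + h, h, self.capacity)]
        for p in self.points:  # push existing points down into the children
            any(c.insert(*p) for c in self.children)
        self.points = []

    def query(self, x0, y0, x1, y1):
        """Collect points inside the rectangle [x0, x1] x [y0, y1]."""
        if x1 < self.x or x0 >= self.x + self.size or y1 < self.y or y0 >= self.y + self.size:
            return []  # rectangle misses this node entirely; prune the subtree
        found = [(px, py) for (px, py) in self.points
                 if x0 <= px <= x1 and y0 <= py <= y1]
        for c in self.children or []:
            found.extend(c.query(x0, y0, x1, y1))
        return found

tree = Quadtree(0, 0, 50)
for p in [(2.5, 2.5), (3.2, 3.1), (40, 40)]:
    tree.insert(*p)
print(tree.query(2, 2, 3.5, 3.5))
```

The pruning in `query` is what makes the structure pay off: whole quadrants that cannot intersect the rectangle are skipped without inspecting their points.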