I'll take one more shot, since it is now very clear to me what made our views differ.
quote:Original post by Jan Wassenberg
In theory, you are correct about (n -> inf) => cache is a constant factor, but that's entirely worthless in practice (lacking infinite mem)
Not at all worthless! The cache is still far smaller than main memory, and in applications where the matrices don't fit in the cache, the cost of a cache lookup is essentially O(1), and indeed *in practice*. No infinite memory is needed to notice this: processing an image or a heightmap a few megabytes in size would already be enough to hit the cache's worst-case lookup time, making it a constant factor. This is consistent with what you said earlier, too. We can't reach infinity with any real memory, but we reach the asymptotic behaviour soon enough, because caches are so small compared to conventional memory. That, IMO, is what matters (and this is not nitpicking!).
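To make the point concrete, here is a hypothetical micro-benchmark (my own sketch, not from anyone's post): it measures the average time per random read as the working set grows. Once the array no longer fits in cache, the per-access cost should level off at the roughly constant cost of a miss, which is exactly the "constant factor" behaviour described above.

```python
# Sketch: show that random-access cost plateaus once the working set
# exceeds the cache. Sizes and access counts are arbitrary choices.
import random
import time

def avg_access_time(n_elements, n_accesses=100_000):
    """Average seconds per random read from a list of n_elements ints."""
    data = list(range(n_elements))
    indices = [random.randrange(n_elements) for _ in range(n_accesses)]
    start = time.perf_counter()
    total = 0
    for i in indices:
        total += data[i]  # one (possibly cache-missing) read per index
    elapsed = time.perf_counter() - start
    return elapsed / n_accesses

if __name__ == "__main__":
    # Expect the ns/access figure to rise and then flatten out as the
    # array outgrows each cache level; exact numbers are machine-specific.
    for n in (1_000, 100_000, 10_000_000):
        print(f"n={n:>10}: {avg_access_time(n) * 1e9:.1f} ns/access")
```

The absolute numbers depend entirely on the machine; the point is only the shape of the curve, which flattens once misses dominate.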
But yeah, I see how it could be useful to say the algorithm may behave like O(n^3) when n is small enough. The notation is still helpful: for n < 10 the 'real' complexity function may exceed C*n^3, but as n rises into 10 < n < 1000 we get the asymptotic behaviour of O(n^3). Once we break the n = 1000 barrier and beyond, the cache becomes small relative to the working set and a better approximation is given by O(n^2). Maybe I was too eager to assume that n should be thought of as very large (though within the realistic bounds of conventional memory), where O(n^2) would hold in the original case, cache or no cache. But if we want better approximations for *small* n, then we may indeed get O(n^3) when n < 1000.
This is precisely where our differences arose. Maybe you only deal with small matrices and arrays, so in your field (or, as you say, "in practice") the estimate O(n^3) was correct: the best approximation as long as n < 1000. But for the n I had assumed, e.g. a bitmap image, O(n^2) was the correct limit (the best approximation for 1000 < n < 100000).