Getting the SHA of the array is usually the preferred way of detecting changes: you just compare the SHA of the array before and after. If there's even a tiny bit of alteration, the SHA changes completely into something else. It doesn't tell you, however, where the changes are.
In theory that works, but in reality video compression algorithms introduce (slight) noise, so 1 second of video (even if nothing in the actual scene changes) can result in slightly different numbers, so the SHA will be different, suggesting motion in the scene when it's really just compression artifacts. On top of that, imaging sensors aren't perfect and won't give the exact same reading from frame to frame, and the slight variations will again result in a different SHA. The SHA is more complicated, more computationally expensive, and almost useless because it will detect insignificant changes (due to image sensors and compression artifacts). SHA is great for detecting changes in data sets (including single-bit changes), but for motion detection it's too good.
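Just to make the "too good" point concrete, here's a tiny sketch (the 16-byte `frame_a` is a stand-in for real pixel data): bumping a single pixel value by one step, which is exactly the kind of thing sensor noise or compression does, produces a completely unrelated hash.

```python
import hashlib

# Hypothetical "frame": a small flat array of pixel values.
frame_a = bytes([10, 20, 30, 40] * 4)

# Simulate sensor noise / a compression artifact: one pixel shifts by 1.
frame_b = bytearray(frame_a)
frame_b[0] += 1

sha_a = hashlib.sha256(frame_a).hexdigest()
sha_b = hashlib.sha256(bytes(frame_b)).hexdigest()

# The hashes are completely different, so a hash comparison flags "motion"
# even though nothing in the scene meaningfully changed.
print(sha_a == sha_b)  # False
```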
@OP: Assuming you want to ignore "insignificant" changes to the scene (for example, the noise I mentioned that video compression algorithms introduce), you need a method that looks for "big" changes and ignores little ones. I don't see the point of the quadtree unless you're trying to find out which quad the change occurred in. You have to check the full image every frame, so a quadtree won't save you any cycles. Yes, you could do a simple matrix norm to check for any change and then perform more complex analysis on the "interesting" (i.e. changed) parts of the image using your quadtree, but a quadtree can cut "features" (interesting regions, like changed areas) in half; for example, a feature in the middle of the image gets split between all four quads! I'm skeptical a quadtree would save any computation, because you'd have to come up with a clever way of making sure you don't cut image features in half, and that requires extra computation of its own.
Seriously, for something like this, most people use OpenCV. It's a bit of a beast to work with. If you just want motion detection (i.e. "something in this image is different from what I was seeing in the past"), then OpenCV is overkill. If you want motion tracking, though, that's a genuinely tricky problem and you'll hate life if you don't use something like OpenCV.
What I would do for simple motion detection is this: Start with the first frame and save it as your "reference" frame. Then, for each subsequent frame, take the matrix norm (probably just a 1-norm, also called the Manhattan or taxicab norm) of the difference between the current frame and the reference frame. If this result is larger than some threshold, you've found a change in the scene (something has moved or otherwise changed). Once you've found a frame with motion, reset your reference frame to that frame. Something like this (pseudocode):
R = getFrame(); // R is your reference frame
C = getFrame(); // your current frame
D = R - C;      // the "difference" frame... just subtract the two images

norm = 0;       // now calculate the 1-norm of D
for (int i = 0; i < D.getRows(); ++i)
    for (int j = 0; j < D.getColumns(); ++j)
        norm += absolute_value(D[i][j]);
        // One option here is to only add to norm if absolute_value(D[i][j]) > someMinimumDelta.
        // That way, tiny changes don't add to the norm; only more significant changes to pixels do.

if (norm > threshold)
{
    // Something has changed! What you do when the scene changes is up to you...
    R = C; // reset the reference frame to the current frame
}
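If you're in a language with array support, that whole double loop collapses to a couple of vectorized operations. Here's a rough Python/NumPy sketch of the same idea; `THRESHOLD` and `MIN_DELTA` are made-up values you'd have to tune for your own camera and resolution:

```python
import numpy as np

THRESHOLD = 500   # scene-change threshold (assumption: tune for your setup)
MIN_DELTA = 10    # per-pixel deadband to ignore sensor/compression noise

def motion_detected(reference, current, threshold=THRESHOLD, min_delta=MIN_DELTA):
    """1-norm of the frame difference, ignoring insignificant per-pixel changes."""
    # Widen the type first so the subtraction can't wrap around on uint8 pixels.
    diff = np.abs(reference.astype(np.int16) - current.astype(np.int16))
    diff[diff <= min_delta] = 0  # drop tiny changes before summing
    return diff.sum() > threshold

# Usage sketch: a blank reference frame vs. one with a big change in one region.
ref = np.zeros((4, 4), dtype=np.uint8)
cur = ref.copy()
cur[1:3, 1:3] = 200  # a 2x2 patch changes a lot

if motion_detected(ref, cur):
    ref = cur  # reset the reference frame, just like the pseudocode above
```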
Note that this doesn't really work with changes in lighting (that is, it will detect "motion" changes if the lighting changes, even if nothing technically moved). Handling lighting changes can be a pain in the butt (because different shadows are being cast, plus it can be hard to "normalize" your images to all be the same brightness regardless of actual lighting conditions).
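One crude (and admittedly imperfect) normalization trick is to subtract each frame's mean intensity before differencing. That cancels a uniform brightness change across the whole scene, though it does nothing for moving shadows. A sketch of the idea, with made-up frame values:

```python
import numpy as np

def normalize_brightness(frame):
    """Shift the frame so its mean intensity is zero (crude global normalization).
    Cancels uniform lighting changes; does NOT help with shadows or local lighting."""
    f = frame.astype(np.float64)
    return f - f.mean()

# A uniform lighting change now produces a (near-)zero difference:
ref = np.array([[50, 60], [70, 80]], dtype=np.uint8)
brighter = ref + 30  # whole scene gets brighter, but nothing actually moved

diff = np.abs(normalize_brightness(ref) - normalize_brightness(brighter)).sum()
print(diff)  # 0.0 -- the lighting change is cancelled out
```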
Other options include calculating the SSIM between your current frame and your reference frame; if it's not close enough to 1, then you can say there's a difference. This method may not work super well for small changes in only one spot of the image, though.
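For reference, here's what SSIM looks like, in a deliberately simplified form: real implementations (e.g. scikit-image's structural_similarity) compute it over small sliding windows and average, which is exactly what gives it some sensitivity to *where* a change happened. This sketch computes a single global score instead, so it inherits the weakness mentioned above for small localized changes:

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Simplified SSIM computed over the whole image instead of local windows.
    Identical images score exactly 1.0; differences pull the score below 1."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2  # standard SSIM stabilizing constants
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Identical frames score 1.0; a frame with a changed patch scores lower.
a = np.tile(np.arange(16, dtype=np.uint8).reshape(4, 4), (2, 2))
b = a.copy()
b[0:2, 0:2] = 255

print(global_ssim(a, a))        # 1.0
print(global_ssim(a, b) < 1.0)  # True
```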
Edited by Cornstalks, 26 February 2013 - 09:00 PM.