
# C++ Array


3 replies to this topic

### #1McGrane  Members   -  Reputation: 1496


Posted 26 February 2013 - 05:18 PM

Hi,

I'm currently looking at motion detection, and I'm looking for the most efficient way to check a large array for changes. What I was thinking is splitting the 1D array up using a quadtree, then doing a cheap check on the larger areas and more in-depth calculations the further into the tree it goes. My question is: is this an efficient way to do it, or is there a better method? If I'm doing this check every few frames of a video, would it be better to just compare one whole array against the next?

Thanks !

Sorry... I meant to post this in General Programming :S - Can someone move it?

Edited by McGrane, 26 February 2013 - 06:18 PM.

### #2alnite  Crossbones+   -  Reputation: 3049


Posted 26 February 2013 - 07:59 PM

Getting the SHA of the array is usually the preferred way to detect changes. You just compare the SHA of the array before and after. If there's even a tiny bit of alteration, the SHA changes completely to something else. It does not tell you, however, where the changes are.

I don't have first-hand experience of using SHA in C++. Some libraries seem better suited than others depending on the type of your application (commercial/free). This library, Crypto++, seems to be a good candidate.
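As a rough sketch of the compare-by-hash idea, using `std::hash` on the raw bytes as a stand-in for SHA (with a real library like Crypto++ you'd feed the same bytes to its SHA-256 class instead; the function names here are just placeholders):

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

// Hash an entire buffer. std::hash<std::string> stands in for SHA here;
// a cryptographic hash makes accidental collisions even less likely.
std::size_t hashFrame(const std::vector<std::uint8_t>& frame) {
    return std::hash<std::string>{}(std::string(frame.begin(), frame.end()));
}

// True if the two buffers hash differently (i.e. some byte changed).
bool frameChanged(const std::vector<std::uint8_t>& before,
                  const std::vector<std::uint8_t>& after) {
    return hashFrame(before) != hashFrame(after);
}
```

Note this inherits the drawback mentioned above: a single-byte difference flips the result, so it flags noise as readily as real motion.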

Edited by alnite, 26 February 2013 - 08:01 PM.

### #3Cornstalks  Crossbones+   -  Reputation: 7022


Posted 26 February 2013 - 08:38 PM

> Getting the SHA of the array is usually the preferred way to detect changes. You just compare the SHA of the array before and after. If there's even a tiny bit of alteration, the SHA changes completely to something else. It does not tell you, however, where the changes are.

In theory that works, but in reality video compression algorithms introduce (slight) noise, so 1 second of video (even if nothing in the actual scene changes) can result in slightly different numbers, so the SHA will be different, suggesting motion in the scene when it's really just compression artifacts. On top of that, imaging sensors aren't perfect and won't give the exact same reading from frame to frame, and the slight variations will again result in a different SHA. The SHA is more complicated, more computationally expensive, and almost useless because it will detect insignificant changes (due to image sensors and compression artifacts). SHA is great for detecting changes in data sets (including single-bit changes), but for motion detection it's too good.

@OP: Assuming you want to ignore "insignificant" changes to the scene (for example, the noise I mentioned that video compression algorithms introduce), you've got to have a method that looks for "big" changes and ignores little ones. I don't see the point of the quadtree unless you're trying to find out which quad the change occurred in. You have to check the full image every frame, so a quadtree won't save you any cycles. You could do some simple matrix norm to check for any change, and then perform more complex analysis on the "interesting" (i.e. changed) parts of the image using your quadtree. But you probably don't want to use a quadtree, because it can cut "features" (i.e. interesting regions, like changed areas) in half; for example, a feature in the middle of the image is split between all four quads! I'm skeptical that a quadtree would save any computation, because you'd have to come up with a clever way of making sure you don't cut image features in half, which itself requires extra computation.
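If you do want to localize changes, a flat grid of fixed-size tiles gets you "which region changed" without a quadtree's recursive feature-splitting: a moving feature can straddle at most a few neighbouring tiles. A rough sketch (the `Frame` struct and names are made up for illustration, not from any library):

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Hypothetical grayscale frame: width*height bytes, row-major.
struct Frame {
    int width, height;
    std::vector<std::uint8_t> pixels;
};

// Per-tile 1-norm of the difference between two same-sized frames, over a
// flat grid of blockSize x blockSize tiles. Tiles whose sum exceeds some
// threshold are the "interesting" regions worth a closer look.
std::vector<long> blockDiffs(const Frame& a, const Frame& b, int blockSize) {
    int bw = (a.width  + blockSize - 1) / blockSize; // tiles per row
    int bh = (a.height + blockSize - 1) / blockSize; // tiles per column
    std::vector<long> diffs(bw * bh, 0);
    for (int y = 0; y < a.height; ++y)
        for (int x = 0; x < a.width; ++x) {
            int idx   = y * a.width + x;
            int block = (y / blockSize) * bw + (x / blockSize);
            diffs[block] += std::abs(int(a.pixels[idx]) - int(b.pixels[idx]));
        }
    return diffs;
}
```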

Seriously, for something like this, most people use OpenCV. It's a bit of a beast to work with. If you just want motion detection (i.e. something in this image is different than what I was seeing in the past), then OpenCV is overkill. If you want motion tracking, then it's quite a (big) tricky problem and you'll hate life if you don't use something like OpenCV.

What I would do for simple motion detection is this: Start with the first frame and save it as your "reference" frame. Then, for all subsequent frames, take the matrix norm (probably just a 1-norm, also called the Manhattan or taxicab norm) of the difference between the current frame and the reference frame. If this result is larger than some threshold value, then you've found a change in the scene (and something has moved, or otherwise changed). Once you've found a frame with motion, you reset your reference frame to be that frame. Something like this (pseudocode):

```
R = getFrame(); // R is your reference frame

while (true)
{
    C = getFrame(); // Your current frame

    D = R - C; // The "difference" frame... just subtract the two images

    norm = 0; // Now calculate the 1-norm of D
    for (int i = 0; i < D.getRows(); ++i)
        for (int j = 0; j < D.getColumns(); ++j)
            norm += absolute_value(D[i][j]);
    // One option for the above line is to only add to norm if absolute_value(D[i][j]) > someMinimumDelta
    // That way, tiny changes don't add to the norm; only more significant changes to pixels do

    if (norm > threshold)
    {
        // Something has changed!
        // What you do when the scene changes is up to you
        R = C; // Reset the reference frame to the current frame
    }
}
```
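A compilable sketch of the same idea, assuming 8-bit grayscale frames stored as flat byte vectors (`MotionDetector` and the other names are just placeholders, not from any library):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdlib>
#include <utility>
#include <vector>

using Pixels = std::vector<std::uint8_t>; // one grayscale frame, row-major

// 1-norm of the frame difference, ignoring per-pixel deltas of minDelta or
// less so sensor noise and compression artifacts don't accumulate.
long norm1Diff(const Pixels& a, const Pixels& b, int minDelta) {
    long norm = 0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        int d = std::abs(int(a[i]) - int(b[i]));
        if (d > minDelta) norm += d;
    }
    return norm;
}

// Holds the reference frame and reports motion per incoming frame.
class MotionDetector {
    Pixels ref_;
    long threshold_;
    int minDelta_;
public:
    MotionDetector(Pixels first, long threshold, int minDelta)
        : ref_(std::move(first)), threshold_(threshold), minDelta_(minDelta) {}

    // True if the current frame differs "enough" from the reference;
    // on motion, the current frame becomes the new reference.
    bool update(const Pixels& current) {
        if (norm1Diff(ref_, current, minDelta_) <= threshold_) return false;
        ref_ = current;
        return true;
    }
};
```

The `minDelta` cutoff is the per-pixel noise gate suggested in the pseudocode's comment; `threshold` is the whole-frame motion threshold.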


Note that this doesn't really work with changes in lighting (that is, it will detect "motion" changes if the lighting changes, even if nothing technically moved). Handling lighting changes can be a pain in the butt (because different shadows are being cast, plus it can be hard to "normalize" your images to all be the same brightness regardless of actual lighting conditions).
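One crude way to compensate for uniform lighting changes is to scale the current frame so its mean brightness matches the reference's mean before differencing. A sketch (this only handles global brightness shifts, not moving shadows):

```cpp
#include <cstddef>
#include <cstdint>
#include <numeric>
#include <vector>

// Scale cur so its mean intensity matches ref's mean, clamping to [0, 255].
std::vector<std::uint8_t> matchBrightness(const std::vector<std::uint8_t>& ref,
                                          const std::vector<std::uint8_t>& cur) {
    double refMean = std::accumulate(ref.begin(), ref.end(), 0.0) / ref.size();
    double curMean = std::accumulate(cur.begin(), cur.end(), 0.0) / cur.size();
    double gain = curMean > 0 ? refMean / curMean : 1.0;
    std::vector<std::uint8_t> out(cur.size());
    for (std::size_t i = 0; i < cur.size(); ++i) {
        double v = cur[i] * gain + 0.5; // round to nearest
        out[i] = static_cast<std::uint8_t>(v < 0 ? 0 : (v > 255 ? 255 : v));
    }
    return out;
}
```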

Other options include calculating the SSIM between your current frame and reference frame; if it's not close enough to 1, then you can say there's a difference. But this method may not work very well for small changes in only one spot of an image.
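For reference, here's a simplified single-window SSIM computed over the whole image (real SSIM averages over sliding windows; this global variant is only a rough similarity score, using the standard constants k1 = 0.01, k2 = 0.03, L = 255):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Global SSIM over two same-sized 8-bit images: compares means, variances,
// and covariance in one window spanning the whole frame. Returns ~1.0 for
// identical images, smaller values for dissimilar ones.
double globalSSIM(const std::vector<std::uint8_t>& x,
                  const std::vector<std::uint8_t>& y) {
    const double C1 = 6.5025, C2 = 58.5225; // (0.01*255)^2, (0.03*255)^2
    double mx = 0, my = 0;
    for (std::size_t i = 0; i < x.size(); ++i) { mx += x[i]; my += y[i]; }
    mx /= x.size(); my /= y.size();
    double vx = 0, vy = 0, cov = 0;
    for (std::size_t i = 0; i < x.size(); ++i) {
        vx  += (x[i] - mx) * (x[i] - mx);
        vy  += (y[i] - my) * (y[i] - my);
        cov += (x[i] - mx) * (y[i] - my);
    }
    vx /= x.size(); vy /= y.size(); cov /= x.size();
    return ((2 * mx * my + C1) * (2 * cov + C2)) /
           ((mx * mx + my * my + C1) * (vx + vy + C2));
}
```

Because the whole frame shares one window, a small changed region barely moves the score, which is exactly the weakness mentioned above.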

Edited by Cornstalks, 26 February 2013 - 09:00 PM.

[ I was ninja'd 71 times before I stopped counting a long time ago ] [ f.k.a. MikeTacular ] [ My Blog ] [ SWFer: Gaplessly looped MP3s in your Flash games ]

### #4McGrane  Members   -  Reputation: 1496


Posted 27 February 2013 - 11:49 AM

Thanks a bunch guys! I only logged in for a quick sec and will have a more in-depth look later on when I get a chance, but from what I see there's some good reading here for me. As far as using OpenCV, I'm trying to avoid it as much as possible, as I'm trying to learn how these things work myself - although I am already using it for displaying from the webcam. As far as noise in the scene goes, I'm already converting the image to line-detected shapes, so there isn't too much noise since the scene is mostly black and white - but there still is a bit. I was originally thinking of setting up a quadtree for the reference image and then checking one section at a time for motion, but I thought it might be wasted time. From reading these comments, I'm glad I stopped to ask!

