Has this ever been done? Image processing...

13 comments, last by ibebrett 14 years, 10 months ago
Quote:Original post by cdoty
The first (reference) image needs to be an image without anything in it (i.e., a blank wall). This way you always see the motion of the person.

Also, using the blank wall, you need to compare a series of images to find the noise level of the camera, and apply this value to ignore noise 'spikes'. Everything above the noise level is probably a pixel of a moving object.

The change in light is a tough issue that you will have to deal with. For example, if the wall is near a window, the sun could mess up the capture as its position changes compared to the reference image. The same goes for lights turned on/off in the room.
Apple's iChat uses this technique to place custom backgrounds in video chats. It works fairly well, as long as your camera is solidly fixed in place (even gentle vibrations will mess up the image matching), and the lighting doesn't change. However, video chats tend to be short enough to side-step the light problem.
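The reference-frame subtraction described in the quote above can be sketched in a few lines of NumPy. This is a minimal illustration, not iChat's actual implementation; the function names `motion_mask` and `estimate_noise` are invented for this example:

```python
import numpy as np

def estimate_noise(blank_frames):
    """Estimate per-pixel noise from a stack of blank-wall frames:
    take the maximum deviation from the mean as the spike level."""
    stack = np.stack([f.astype(np.float32) for f in blank_frames])
    mean = stack.mean(axis=0)
    return np.abs(stack - mean).max(axis=0)

def motion_mask(frame, reference, noise_threshold):
    """Flag pixels that differ from the reference by more than the
    measured camera noise; everything above that is likely motion."""
    diff = np.abs(frame.astype(np.int16) - reference.astype(np.int16))
    return diff > noise_threshold
```

`noise_threshold` can be a single scalar or the per-pixel array from `estimate_noise`; broadcasting handles both cases.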

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

Great replies, very inspiring, thanks.
It is a tough issue, mainly due to the noise problems that have been mentioned. Another one is the fact that the camera may readjust when someone comes closer, which changes the background. I doubt that a simple comparison of pixels between the live feed and a static image would give good results, even when scaling or applying blur-type filters, because one would then lose detail and the silhouette would include parts of the background.
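One standard way to cope with gradual lighting drift (a general technique, not something any of the mentioned products are confirmed to use) is to let the reference image adapt slowly via an exponential moving average, so slow illumination changes get absorbed into the background while fast-moving foreground does not. A sketch, assuming NumPy:

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Exponential moving average of the reference image.
    Small alpha -> slow adaptation: lighting drift is absorbed,
    a person walking through the frame mostly is not."""
    return (1.0 - alpha) * background + alpha * frame.astype(np.float32)
```

The trade-off is that anything that stands still long enough (a person pausing in front of the wall) eventually fades into the background too.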
I will be checking out iChat and the Xbox game, which should shed some light on the current state of the technology, although a research paper may be more informative. I suppose there must be a reason why the film industry uses single-colored screens; the question is how much of a quality difference there is, since movies want to extract the last bit of detail.
Quote:Original post by CProgrammer
I suppose there must be a reason why the film industry uses single-colored screens; the question is how much of a quality difference there is, since movies want to extract the last bit of detail.
One reason may be legacy - the film/TV industry has been using bluescreen techniques just about forever.
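Beyond legacy, the single-colored screen turns segmentation into a much easier problem: instead of comparing against a reference frame, each pixel is just tested for closeness to the known key color. A rough NumPy sketch (the `tolerance` value and Euclidean RGB distance are illustrative simplifications; real keyers work in other color spaces and produce soft mattes):

```python
import numpy as np

def chroma_key_mask(rgb, key=(0, 255, 0), tolerance=60.0):
    """Return True where a pixel is foreground, i.e. far enough
    (Euclidean distance in RGB) from the key color."""
    key = np.array(key, np.float32)
    dist = np.linalg.norm(rgb.astype(np.float32) - key, axis=-1)
    return dist > tolerance
```

No reference frame, no camera-noise model, and lighting changes on the subject barely matter — which is a big part of why the technique survives.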

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

Hello, just a 'reverse view' on the matter.
In singling out an object/actor in a sequence of images, one can assume that the focus is on the object/actor at all times, which makes him/her/it less noisy and more constant than the background.
As mentioned, the background may change due to focusing, but this focusing is intended to keep the object constant, which can be exploited.
Depending on how smooth you want the edges to be, this is basically an edge-detection problem, with some added information in the form of multiple frames.
You'll most probably end up with some edge-detection algorithm assisted by proper filters (e.g., simple blurring), threshold values, etc.
Run this on every frame separately, plus an evaluator to determine the consistency of the fit between frames.
Maybe you can try this as a reference:
http://ocw.mit.edu/OcwWeb/Electrical-Engineering-and-Computer-Science/6-801Fall-2004/CourseHome/index.htm
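The per-frame edge-detection step described above can be sketched with 3x3 Sobel kernels and a hard threshold. This is a deliberately naive, unoptimized NumPy version (a real pipeline would blur first and vectorize the convolution):

```python
import numpy as np

def sobel_edges(gray, threshold=50.0):
    """Gradient magnitude via 3x3 Sobel kernels, then a hard
    threshold. Border pixels are left as non-edges."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], np.float32)
    ky = kx.T
    img = gray.astype(np.float32)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy) > threshold
```

The cross-frame "evaluator" would then compare these per-frame edge maps and keep only contours that stay consistent from frame to frame.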

This topic is closed to new replies.
