
Dovyman

autonomous robot operation using a camera


Let's say that I took a picture from a camera on the robot and stored the pixels in an array. For ease of processing, I could probably drop some pixels, possibly even every other pixel, and still get a rough picture. Then I could take the RGB values and map them to a small set of default colors; for example, if a pixel fell within a certain RGB range it might just be labelled "gray" or "yellow". So now we have an array of color values using no more than 16 colors.

Assuming I know that there are certain objects I want to look for as landmarks, in my case a large yellow ball and a gray vertical pipe, any ideas on how to treat those groupings of pixels as one entity and get its position within the picture? One problem is that the distance between the robot and the object enlarges or shrinks the object in the picture; the other is that there is limited processing power available, so I'm looking for a simple solution. Perhaps some kind of "template" for the objects? i.e. treat groups of adjacent same-colored pixels as an entity, and then determine what each one is based on preset characteristics? (Kind of like a pattern-recognition neural net, except less computationally intense?)
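For concreteness, here is roughly what I mean, sketched in Python: quantize each pixel to a colour label, then flood-fill adjacent same-coloured pixels into blobs and record each blob's size and bounding box. The RGB thresholds and the 2D-list-of-RGB-tuples image format are just placeholder assumptions, not measured values.

    # Sketch: quantize pixels to a few colour labels, then group adjacent
    # same-coloured pixels into blobs with a flood fill and report each
    # blob's pixel count and bounding box.  The RGB thresholds are made-up
    # placeholders; they would need tuning for a real camera.
    from collections import deque

    def quantize(pixel):
        r, g, b = pixel
        if r > 180 and g > 180 and b < 100:
            return "yellow"          # assumed range for the ball
        if abs(r - g) < 30 and abs(g - b) < 30 and 60 < r < 180:
            return "gray"            # assumed range for the pipe
        return "other"

    def find_blobs(image):
        """image: 2D list of (r, g, b) tuples.  Returns a list of blobs,
        each a dict with colour label, pixel count and bounding box."""
        h, w = len(image), len(image[0])
        labels = [[quantize(image[y][x]) for x in range(w)] for y in range(h)]
        seen = [[False] * w for _ in range(h)]
        blobs = []
        for y in range(h):
            for x in range(w):
                if seen[y][x] or labels[y][x] == "other":
                    continue
                colour = labels[y][x]
                queue = deque([(x, y)])
                seen[y][x] = True
                count, min_x, max_x, min_y, max_y = 0, x, x, y, y
                while queue:                      # flood fill one blob
                    cx, cy = queue.popleft()
                    count += 1
                    min_x, max_x = min(min_x, cx), max(max_x, cx)
                    min_y, max_y = min(min_y, cy), max(max_y, cy)
                    for nx, ny in ((cx+1, cy), (cx-1, cy), (cx, cy+1), (cx, cy-1)):
                        if 0 <= nx < w and 0 <= ny < h and not seen[ny][nx] \
                                and labels[ny][nx] == colour:
                            seen[ny][nx] = True
                            queue.append((nx, ny))
                blobs.append({"colour": colour, "pixels": count,
                              "box": (min_x, min_y, max_x, max_y)})
        return blobs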

If you knew how far away the object was from the camera, you could use that to compensate for its apparent size in the picture.

I would say use an ANN, as it wouldn't have to be huge to solve this problem, say 3 layers?

Either that, or try to figure out features which stay the same size, then use a database search to figure out what the object is.

Hopefully this was of help,
Nice coder

Humans are human-oriented; it is because of their nature: a design flaw of greed and jealousy. The solution: AI, which is never greedy and sticks to its ethics no matter what.

Okay, the use of a neural net would probably be the last route I would pursue for this problem... but if you can do it, go for it.

If all you're trying to do is get your "rover" to drive towards things that are yellow and, for the sake of argument, avoid things that are grey:

Implementation:

Assuming you have a 2D array (for ease of explaining) and simple control commands (e.g. MoveForward, MoveBackward, TurnLeft, TurnRight):

1) Create a function that takes the array/image and finds a cluster of yellow pixels. This function should return a vector or even fuzzy values. The 2D vector would simply hold values such as X = 1, Y = 0.5 (assuming they range from -1 to 1).

if x is positive then go right
if x is negative then go left
if y is negative then go forward a lot
if y is positive then go forward a little
(these are just for demonstration)

Fuzzy values would just be less precise, expressing the relative location (e.g. I am left of the yellow spot, or right of it, or there is no yellow spot).

2) Tie those values into the controller code, and you should hopefully have something attempting to drive towards yellow (a rough sketch of steps 1-3 appears after step 3 below).

3) Another thing you could do to "judge distance" would be either to go stereoscopic (two cameras positioned kinda like our eyes)

OR

a simpler, rudimentary method is to detect how large the yellow cluster is.

If the yellow cluster is more than 75% of the image, then you're there; if it is less than 25%, keep driving towards the yellow until it reaches 75%.
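Very roughly, steps 1-3 might look something like this Python sketch. is_yellow() and the turn/move functions passed in are stand-ins for your own pixel test and controller commands; the 0.1 dead zone is an arbitrary guess, and the 75% coverage threshold is the one mentioned above.

    # Rough sketch of steps 1-3: find yellow pixels, return their centroid
    # as offsets in [-1, 1] relative to the image centre, and use cluster
    # size (coverage) as a crude distance cue.

    def is_yellow(pixel):
        r, g, b = pixel
        return r > 180 and g > 180 and b < 100     # assumed threshold

    def find_yellow_cluster(image):
        """Return (x_offset, y_offset, coverage), offsets in [-1, 1],
        or None if no yellow is visible."""
        h, w = len(image), len(image[0])
        xs, ys = [], []
        for y in range(h):
            for x in range(w):
                if is_yellow(image[y][x]):
                    xs.append(x)
                    ys.append(y)
        if not xs:
            return None
        cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
        x_off = (cx - w / 2) / (w / 2)             # -1 = far left, +1 = far right
        y_off = (cy - h / 2) / (h / 2)
        coverage = len(xs) / float(w * h)          # fraction of image that is yellow
        return x_off, y_off, coverage

    def drive_towards_yellow(image, turn_left, turn_right, move_forward):
        result = find_yellow_cluster(image)
        if result is None:
            turn_left()                            # no yellow seen: spin and search
            return
        x_off, y_off, coverage = result
        if x_off > 0.1:
            turn_right()
        elif x_off < -0.1:
            turn_left()
        elif coverage < 0.75:                      # centred but not close enough yet
            move_forward()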

To address the neural network implementation: you would need lots of test data to train that network to detect spots and their relationship to the center of the image, to get even mild results (IMHO).

It can be done, but it would probably take a lot more time, and you would get bored with it before you are done.

In short, save the NN implementation for version 2; it is probably way too much to care about at this time.

-Lucas

The LAST thing you would want to do is use a neural network. I say this because you can solve this problem using a few simple tricks. Just a word of caution though: image processing is SLOW. Even segmenting the screen into regions of similar colour will be slow. That said...

Your first problem is figuring out what you can ignore, and what you can't. You will want to do some form of image segmentation. There are MANY different algorithms; some work amazingly well, others work amazingly fast. lol. Google for Image Segmentation and you'll find more info than you will ever need.

The next problem you have is one of finding the object in the segmented data. Since you seem to be dealing with colour, you've got one problem partially solved. Any segmented region whose mean (or median, whatever) colour is not within a certain range of colours can be ignored.

Now that you're left only with colour-matched regions, it is time to do another really SIMPLE check: orientation. If you're looking for a ball, a box containing the ball's region should be almost square. For the pipe, it should be VERY elongated. See what I'm getting at? Simple, fast calculations to narrow down your candidates.
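As a sketch of that orientation check (the region format and the "roughly square" / "elongated" thresholds here are guesses you would tune, not fixed values):

    # Classify a colour-matched region by the aspect ratio of its bounding box.

    def classify_region(box):
        """box = (min_x, min_y, max_x, max_y) of a colour-matched region."""
        width = box[2] - box[0] + 1
        height = box[3] - box[1] + 1
        ratio = width / float(height)
        if 0.75 <= ratio <= 1.33:       # roughly square: candidate ball
            return "ball?"
        if ratio < 0.4:                 # much taller than wide: candidate pipe
            return "pipe?"
        return "unknown"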

Personally, I would stop here, but there are pattern-matching algorithms out there that would further refine your search. Without getting into really complicated shapes, Google for the Hough Image Transform and you'll get some links to straight-line detection. You can easily extend this to work with circles, squares, rectangles, etc.

You're going to run into problems with 'noisy' images, so you might want to consider only trusting 'regions' that have been persistent over n frames (where n is some number you decide).
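One simple way to read that persistence idea, as a sketch: only accept a region once it has overlapped a previously seen region for n consecutive frames. The bounding-box overlap test and n = 3 below are assumptions, not requirements.

    # Frame-to-frame persistence filter: a region counts only after it has
    # been seen (overlapping a previous detection) for n consecutive frames.

    def boxes_overlap(a, b):
        # a, b = (min_x, min_y, max_x, max_y)
        return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

    class PersistenceFilter:
        def __init__(self, n=3):
            self.n = n
            self.tracked = []          # list of [box, consecutive_frame_count]

        def update(self, boxes):
            """Feed this frame's candidate boxes; return the persistent ones."""
            new_tracked = []
            for box in boxes:
                count = 1
                for old_box, old_count in self.tracked:
                    if boxes_overlap(box, old_box):
                        count = old_count + 1
                        break
                new_tracked.append([box, count])
            self.tracked = new_tracked
            return [box for box, count in self.tracked if count >= self.n]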

I think if you solve this problem, centering will not be an issue.

Best of luck,
Will


