It's a very large, expansive library with many different pieces for many different things. IME the best thing to do is have a specific project in mind and pursue the information needed to accomplish the specific tasks that project requires. You can't just "learn OpenCV" as such, because it's too big, and none of it really makes sense without the context of trying to accomplish something.

Besides its C++ API, OpenCV also has a Python API. The latter can be more suitable for quick prototyping.

You probably want to search for edge detection utilities first.

As a side note, the domain of computer vision can be quite "messy", since there are lots of different approaches to the same problem, ranging from ad hoc to rather sophisticated, and from real-time to offline performance. So do not commit to one fixed approach in advance.

Edited by matt77hias

As far as books go, O'Reilly's Learning OpenCV 3 is a really good overview of the library. I read through the entire thing; they do a great job explaining the library, getting you set up and into it, and giving an overview of the different parts of the library and how they work. They also do a really good job of getting you familiar with some of the techniques used in computer vision/image processing, such as feature detectors and descriptors.

If you're new to OpenCV, you won't want to read straight through the entire book; you'll just waste time. Read through the first couple of chapters to get how the library works, then pick and choose the chapters that are relevant to what you're trying to do. Definitely worth purchasing if you want to learn OpenCV.

Thanks, all. I am getting Learning OpenCV 3 as a gift. :)

OK, what I want is to do edge detection, then convert all gray pixels to white or black, depending on the darkness of the gray.

Then I want to draw a white bitmap to the entire screen, draw the black edge pixels, and then do a fill that doesn't permeate the black.

I should end up with a fill, and some white regions where the obstacles are.

Is it possible to do this in OpenCV? I know that it does edge detection.

So far I found this: https://www.learnopencv.com/filling-holes-in-an-image-using-opencv-python-c/

Edited by sjhalayka

You may need to combine a couple of different techniques to get what you're after, but I think what you're looking for is connected components. The first thing you'll do is load your image and convert it to grayscale using imread and cvtColor. Then use adaptive thresholding, via the function adaptiveThreshold, to turn the image into black and white (or black, white, and gray). Then you can use the function findContours to find each "connected component" [another link]. findContours works best when your image has been thresholded, which is why I mentioned adaptive thresholding above. You can also do Canny edge detection first, but I've personally had better results with adaptive thresholding before findContours, although the actual results will depend a lot on what is actually in your images. You'll then be able to either draw the outlines of the contours or fill them in, using drawContours or fillPoly.

You'll probably find at some point that you get better results by pre-processing your image even more, for example by eroding and dilating.

Anyway, that should get you started while you wait for your book.

The code is posted at:

https://github.com/sjhalayka/obstacle_detection/blob/master/opencvtest.cpp

... it uses the AVI reading code from the book Learning OpenCV 3.

Some test AVIs are included in the repository at:

https://github.com/sjhalayka/obstacle_detection

Thanks for the insight.

Edited by sjhalayka

I updated the code to detect edges in the H channel of an HSV version of the input image. This works better than the original code, which detected edges in the grayscale version of the input image. Edges can also be detected in the S and V channels; there's no point in throwing away good input data. (The V channel is essentially the grayscale version of the input image.)

I'm looking into more complicated obstacle detectors, like machine learning. I've already found code to do the XOR operation using a feed-forward, back-propagation artificial neural network -- the equivalent of Hello World, LOL.

Edited by sjhalayka

Does anyone have any experience with the bioinspired contrib module? I am trying to compile an example, and it gives me the error "'createRetina' is not a member of 'cv::bioinspired'".

Any ideas? I can't seem to find it defined in any of the .hpp files.

I found out how to create the retina object using cv::bioinspired::Retina::create(), but now I get the following error:

fatal error C1189: #error: this is a private header which should not be used from outside of the OpenCV library

Edited by sjhalayka

It turns out that, yes, I had to comment out the #include "private.hpp" line in the precomp.h that came with the module code. So, basically, the opencv_contrib bioinspired module has a bug in it?

Edited by sjhalayka

Rather than build OpenNI into OpenCV, I installed the Kinect SDK v2.0 onto a Windows 10 machine.

I'm still waiting for my $25 Xbox 360 Kinect, but here is some C++ code that I wrote in anticipation:

https://github.com/sjhalayka/kinect_opencv

And here are some C++ tutorials that I found:

https://homes.cs.washington.edu/~edzhang/tutorials/index.html

https://github.com/UnaNancyOwen/Kinect2Sample

Edited by sjhalayka

I've used the Kinect sensor for a few different things. If you're trying to use it as a webcam, though, I can tell you not to do that. You'll have to set variables in the system registry, install a few different things that seem to be basically hacked together, and then cross your fingers that it works (as a webcam, to get the input stream). Then again, it's been a couple of years since I've touched the Kinect, so maybe it's easier to use without the Kinect SDK now. Microsoft discontinued the Kinect anyway because it was more of a novelty, and honestly, in my experience, a regular webcam can get you the same results, often even better when you expect a lot of sunlight in the environment. The Kinect is easy to use, though, which is probably why it's appealing. I'd recommend not going with the Kinect; the Intel RealSense is pretty much just as good, smaller, and I believe it's still not discontinued. The IR (depth) sensors work a couple of feet further away on the Kinect; otherwise, the RealSense is better in my opinion.

The depth sensor actually uses the infrared sensor, which is why depth in these types of cameras doesn't work at all outdoors: the sun emits a huge amount of IR. If you want good depth, I'd suggest buying a stereo camera, or making your own. The Kinect is at least fun to play with, though; all the hard work of detecting people is done for you.

Hmm, I never thought about the sun drowning out the signal. See, I'm wondering if something like the Kinect is useful for obstacle detection, both in the daytime and at night. Perhaps a stereo camera is indeed best. Thanks!

Do you have code to calculate the depth map from the stereo disparity map? The book Learning OpenCV covers it in chapter 19, but I'm just not able to piece it all together.

Edited by sjhalayka

I can totally help you with depth from a disparity image. I did a thing while trying to get a job over at Verizon:

 

There's a link to the GitHub repository for the code somewhere in the description. I only had a couple of days to do this, so it's not perfect. I wish I had added something like marching cubes to turn the point cloud into a mesh, and maybe removed some of the noise from the point cloud as well.

Is it true that depth is inversely proportional to disparity? If so, that's cool. If not, then I'm reading the wrong materials... https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_calib3d/py_depthmap/py_depthmap.html
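Yes; in the pinhole stereo model, Z = f * B / d, where f is the focal length in pixels and B is the baseline, so depth falls off as 1/disparity. A trivial numeric check with made-up camera parameters:

```python
# Depth is inversely proportional to disparity: Z = f * B / d.
f = 700.0   # hypothetical focal length (pixels)
B = 0.06    # hypothetical baseline (meters)

def depth_from_disparity(d):
    return f * B / d

# Doubling the disparity halves the depth.
print(depth_from_disparity(4.0))  # 10.5
print(depth_from_disparity(8.0))  # 5.25
```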

But how do I get high-quality depth maps? The ones I've seen on the web all look like s**t. Ditto for the depth map produced by the code listed in the previous post.

Edited by sjhalayka

I did a search for 'depth' in your code (https://github.com/cloudis31/verizon-disparity) and couldn't find anything related to calculating a depth map... can you please point me in the right direction? Thank you.

Do you have a Marching Cubes implementation handy? If not, I have one somewhere.

Edited by sjhalayka
