Members - Reputation: 120
Posted 06 March 2013 - 02:28 AM
Right now I am rendering the 3D model to an offscreen buffer using an FBO, reading the pixels back into an RGB array, and sending that to an OpenCV window which is responsible for rendering the webcam video. The OpenCV window blends this pixel buffer with the video based on the depth of each pixel.
Is there a better way? Can an OpenGL window receive the webcam feed and render it as a texture, with part of the texture occluded by the 3D model?
A related question: if I have to render a 3D model in an OpenGL window, I can use a GLUT or Win32 window and render directly into it with OpenGL's draw APIs, or I can render to an offscreen buffer, read the pixel data back, and display it as an image, say with OpenCV. Both do the job, but what is the difference?
Crossbones+ - Reputation: 816
Posted 06 March 2013 - 04:02 PM
Yes! OpenGL can receive data from OpenCV that you can use to draw to a dynamic OpenGL texture! It just so happens that I'm doing something similar (using OpenGL and OpenCV in one project, but with dual windows on Mac OS X). If you are storing your OpenCV image data in an IplImage structure, you can upload IplImage::imageData to your texture (assuming you've already created it) using glTexSubImage2D. Keep in mind that OpenCV stores pixels in BGR order, so you want to pass GL_BGR_EXT as the format.
This should do the trick. If it's not what you need, sorry if I misinterpreted what you were asking for. Either way, let me know and I'll try to come up with a different solution if that's what you need.
Edited by blueshogun96, 06 March 2013 - 04:04 PM.
Follow Shogun3D on the official website: http://shogun3d.net