SeveQ

Need to calculate absolute object coordinates from some relative info


Hi there,

I urgently need help calculating the position of several objects on a two-dimensional playing field. For this purpose I'm using a Microsoft Kinect and the OpenCV library. The camera, of course, sees a perspective projection of the playing field (a red and blue checkerboard), with vanishing points depending on its angle.

Known (variable and fixed) data about the camera:
- camera position relative to the origin of the final coordinate plane (i.e. the absolute final coordinates of the cam)
- camera angles: pitch (a fixed value, though the calculation depends on it) and yaw relative to the x-axis of the final plane
- camera height above the playing field
- dimensions of the captured 2D video frame
- FOV angles and focal length of the cam (both fixed values)

Known variable information about the detected objects:
- x- and y-coordinates in the captured 2D frame of the camera (position of the centroid of a detected object in the picture)
- distance from the captured frame (not from the camera lens; this information is already converted to the correct distance; the distance vector is perpendicular to the video frame; the Kinect apparently does this automatically)

I have to combine this information so that eventually I get the absolute coordinates of each detected object.

This should somehow be possible, but I don't have the slightest clue how to do it. Apparently there's only one vector missing: the one from the camera position to the object position. This vector is itself a sum of other vectors, nearly all of which are known, except one: the straightened horizontal distance of the object from the center of the 2D camera frame, i.e. the point where the distance vector intersects the camera frame.
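For illustration, a minimal sketch of that decomposition, assuming a simple ground-plane model: once the one unknown (the straightened lateral offset of the red point from the frame centre) is known, it combines with the known camera position, yaw and Kinect depth as below. All names (camPos, yaw, lateralOffset, depth) are placeholders of mine.

```cpp
#include <cmath>

struct Vec2 { double x, y; };

// Hypothetical sketch: once the "straightened" lateral offset of the red
// point from the frame centre is known, the object's absolute position is the
// camera position plus a ground-plane vector rotated by the camera yaw.
// Convention: yaw is measured against the x-axis, positive lateralOffset is
// to the camera's left.
Vec2 objectWorldPosition(Vec2 camPos,          // absolute camera coordinates
                         double yaw,           // radians
                         double lateralOffset, // the unknown (red point) offset
                         double depth)         // Kinect distance, perpendicular to the frame
{
    double dx = depth * std::cos(yaw) - lateralOffset * std::sin(yaw);
    double dy = depth * std::sin(yaw) + lateralOffset * std::cos(yaw);
    return { camPos.x + dx, camPos.y + dy };
}
```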

Attached is an illustration of what I need to do. Green items are known; the red point is what I have to calculate from the given data above in order to eventually compute the position of the big yellow blob relative to the origin of the blue coordinate plane.

If you have any questions, or need any further information from me in order to help solve this, don't hesitate to ask.

Any kind of help would be greatly appreciated. This problem has already eaten up a lot of time without any progress, and I really need to finally solve it.

Thanks a lot!

Hendrik

I don't have much time to read carefully what you wrote, but can't you use the equation y = mx + b to determine the straight line that passes through the center of the object and intersects the plane (the captured frame)? Use the object distance to determine m, and the center of the object as the substitution point (x, y) to get b. Then you just need to solve straight_line = captured_frame_equation to get the red point.

I hope my tip helps.
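To make that suggestion concrete, a tiny sketch (all names hypothetical): derive b from the slope and one known point, then intersect with a second line given in the same y = mx + b form.

```cpp
#include <cmath>
#include <stdexcept>

struct Line { double m, b; };                    // y = m*x + b

// Build the line from a slope and one known point (e.g. the object centroid).
Line lineFromSlopeAndPoint(double m, double px, double py)
{
    return { m, py - m * px };                   // b = y - m*x
}

// x-coordinate where two such lines meet (assumes they are not parallel).
double intersectX(const Line& a, const Line& b)
{
    if (std::abs(a.m - b.m) < 1e-12)
        throw std::runtime_error("lines are parallel");
    return (b.b - a.b) / (a.m - b.m);
}
```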

Alright, I hope that's not due to my bad English (is it bad? I don't know...)

Well, yes, I could. But I haven't found a way yet to determine the angle of that line. It could go anywhere, as long as it crosses the object's centroid. Pinning it down would require knowing the position of the corresponding vanishing point.

Regarding the vanishing point, or rather its coordinates in the frame: are they fixed? I mean, if I calculate them once via some magic, mysterious calibration sequence, can I reuse them in any other situation too (given that pitch and height of the camera remain fixed)? If I can, I'll do the calibration by...

1. adjusting the Kinect's position to a known point on the playing field, with its vertical center line (320:0 to 320:480) positioned exactly along a separation line between fields of the checkerboard,
2. placing four detectable objects symmetrically to the left and right of the center line, on crossing points of the checkerboard,
3. calculating, by means of the intercept theorem, the intersection of the extended line through the two objects left of the center line with the extended line through the remaining two calibration objects (see the sketch below).

That point should be the single vanishing point I can use to calculate every intersection point with the frame, right? And is it constant for a fixed pitch and height of the cam (or even independent of them)?
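For reference, a minimal sketch of that intersection (step 3), assuming the four calibration objects are available as image coordinates; the struct and names are mine.

```cpp
#include <cmath>
#include <stdexcept>

struct Pt { double x, y; };

// Intersection of the infinite line through a1 and a2 (the two left
// calibration objects) with the line through b1 and b2 (the two right ones).
// The result is the single vanishing point.
Pt intersectLines(Pt a1, Pt a2, Pt b1, Pt b2)
{
    double d1x = a2.x - a1.x, d1y = a2.y - a1.y;
    double d2x = b2.x - b1.x, d2y = b2.y - b1.y;
    double denom = d1x * d2y - d1y * d2x;        // 2D cross product of the directions
    if (std::abs(denom) < 1e-12)
        throw std::runtime_error("lines are (nearly) parallel, no finite vanishing point");
    double t = ((b1.x - a1.x) * d2y - (b1.y - a1.y) * d2x) / denom;
    return { a1.x + t * d1x, a1.y + t * d1y };
}
```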


//update 05/05/2011: Alright, I think I've got it. Using four detected objects arranged symmetrically left and right of the vertical center of the image (or at least two objects to the right of the center), I construct a right-angled triangle that lets me calculate the intersection of the hypotenuse with the vertical center line of the frame. That's where I assume the vanishing point to be.

Next, I create a straight line between the vanishing point and a detected object's centroid. This line is extended until it intersects the bottom border of the frame. To get the correct point on the black 'frame from top' line in the attached picture, I need to extend the line even further; the additional length is determined from the Kinect's pitch, height and FOV with some magical trigonometric calculations.

The point where the straight line and the virtual frame intersect is the red point in the image.
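The first part of that (vanishing point through centroid, down to the bottom border) can be written as a small helper; a sketch assuming image rows grow downward and the bottom border is the row y = frameHeight - 1 (names are mine). The further 'frame from top' elongation from pitch, height and FOV is not included here.

```cpp
struct Pt { double x, y; };

// Extend the line from the vanishing point through an object's centroid
// until it hits the bottom border of the frame (row y = frameHeight - 1).
// Assumes the centroid lies below the vanishing point, so the denominator
// is nonzero.
Pt intersectWithBottomBorder(Pt vanishingPoint, Pt centroid, int frameHeight)
{
    double yBottom = frameHeight - 1.0;
    // Parametrize P(t) = vp + t * (centroid - vp) and solve P(t).y = yBottom.
    double t = (yBottom - vanishingPoint.y) / (centroid.y - vanishingPoint.y);
    return { vanishingPoint.x + t * (centroid.x - vanishingPoint.x), yBottom };
}
```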

What I could furthermore make great use of is a way to adjust the once-calculated and henceforth fixed vanishing point by means of the 'down' vector. This vector can be determined from the Kinect's acceleration sensor; I already derive the tilt/pitch of the cam from that information. If there is a way to adjust the vanishing point accordingly, I'd really be happy to hear about it.
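In case it helps, a sketch of one way the pitch could feed back into the vanishing point, assuming a plain pinhole model (rows growing downward, pitch measured downward from the horizontal, vanishing point lying on the ground-plane horizon); the function names and conventions are mine, not anything the Kinect SDK provides.

```cpp
#include <cmath>

// Downward pitch from the normalized "down" (gravity) vector expressed in
// camera coordinates; forwardComponent is the part of that vector along the
// optical axis (assumed to be in [-1, 1]).
double pitchFromDownVector(double forwardComponent)
{
    return std::asin(forwardComponent);          // radians, positive = tilted down
}

// Row of the horizon, and hence of the vanishing point, for a given pitch.
// cy = principal point row, focalPx = focal length in pixels.
double vanishingPointRow(double cy, double focalPx, double pitchDown)
{
    return cy - focalPx * std::tan(pitchDown);   // tilting further down moves it up
}
```

Under those assumptions only the row of the vanishing point changes with pitch; its column would still have to come from the calibration, since it depends on the yaw relative to the board lines.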

Hell, this turns out to be a biggie... Edited by SeveQ

Just to give an update on this...

I have managed to obtain the vanishing point by doing some calibration steps before anything else.

This involves positioning the Kinect as precisely as possible on a border between two field rows of the playing field. The vertical center of the captured image has to be aligned with the border the Kinect is placed on. Once that's done, I can start the four-point calibration by simply clicking four field corners located symmetrically left and right of the vertical center line so that they form a trapezium.

That procedure provides me with the information needed to calculate the vanishing point and, from then on, the perspectively corrected x-coordinate of any given point in the image. After some further trigonometric calculations I get the point where the straight line between the vanishing point and the detected object intersects the baseline (the black line in the attached image). And voilà, that's where the distance vector originates (the red blotch).

Phew! :cool:

I could use some further information, though!

  1. Does anybody know how to calculate the conversion factor between the depth information and the baseline coordinates? For example, a length of 30 on the baseline isn't the same as a distance of 30. There has to be a conversion factor. How do I calculate it?
  2. I still need to correct the vanishing point's vertical location whenever the tilt/pitch of the Kinect changes. Any ideas?
Thanks a lot!

Just to mention...

it's done. I have switched everything over to a real four-point calibration. It now calculates the components of a projective transformation vector 'x' by solving a linear equation system (Ax = y) built from a given vector 'y' and a given matrix 'A'. Now I can translate any straight line on the playing field to a line in the camera image using the components of 'x'. lapack and boost::numeric::ublas made solving the equation system a piece of cake.
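For reference, a sketch of how such a system could be set up, assuming the projective mapping is the usual 3x3 homography with h33 fixed to 1 and that the four calibration clicks give field-to-image correspondences. The struct and function names are mine; only the boost::numeric::ublas LU calls match what the post mentions.

```cpp
#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/vector.hpp>
#include <boost/numeric/ublas/lu.hpp>
#include <array>
#include <stdexcept>

namespace ublas = boost::numeric::ublas;

// Hypothetical names: one clicked calibration correspondence between
// playing-field coordinates (X, Y) and image coordinates (u, v).
struct Correspondence { double X, Y, u, v; };

// Returns x = (h11, h12, h13, h21, h22, h23, h31, h32), with h33 fixed to 1,
// by solving the 8x8 system A*x = y built from four correspondences.
ublas::vector<double> solveProjective(const std::array<Correspondence, 4>& c)
{
    ublas::matrix<double> A(8, 8);
    ublas::vector<double> y(8);

    for (std::size_t i = 0; i < 4; ++i) {
        const Correspondence& p = c[i];
        std::size_t r = 2 * i;
        // u = (h11*X + h12*Y + h13) / (h31*X + h32*Y + 1), linearized:
        A(r, 0) = p.X; A(r, 1) = p.Y; A(r, 2) = 1.0;
        A(r, 3) = 0.0; A(r, 4) = 0.0; A(r, 5) = 0.0;
        A(r, 6) = -p.u * p.X; A(r, 7) = -p.u * p.Y;
        y(r) = p.u;
        // v = (h21*X + h22*Y + h23) / (h31*X + h32*Y + 1), linearized:
        A(r + 1, 0) = 0.0; A(r + 1, 1) = 0.0; A(r + 1, 2) = 0.0;
        A(r + 1, 3) = p.X; A(r + 1, 4) = p.Y; A(r + 1, 5) = 1.0;
        A(r + 1, 6) = -p.v * p.X; A(r + 1, 7) = -p.v * p.Y;
        y(r + 1) = p.v;
    }

    ublas::permutation_matrix<std::size_t> pm(A.size1());
    if (ublas::lu_factorize(A, pm) != 0)
        throw std::runtime_error("degenerate calibration points");
    ublas::lu_substitute(A, pm, y);   // y now holds the solution x
    return y;
}
```

With those eight coefficients any field point (and hence any field line) can be mapped into the image; inverting the corresponding 3x3 matrix would give the image-to-field direction.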

I don't even need the Kinect's depth information anymore. I could still make use of it when I implement an algorithm that translates the coordinates into metric distances or something... Well, I'm definitely going to do something like that, maybe today, since we need to know precisely where the detected objects are on the playing field.
