Converting 3D Points into 2D

Started by AhmedSaleh · 1 comment, last by haegarr 6 years, 3 months ago

I have a simple problem: just converting 3D points into 2D image coordinates. The image center should be 0,0 to -1,1. I have written the following equations with the help of @iedoc, but I still don't get normalized points. Another question: how would I debug it? I only have the ability to draw spheres, so I can't draw 2D circles. First, I have the camera position and its orientation as a quaternion. I convert the quaternion to a rotation matrix, then compose the 4x4 camera pose matrix; that part works and I have tested it.

const Ogre::Vector3 cameraPosition = Stages::StageManager::getSingleton()->getActiveStage()->getActiveCamera()->getCameraWorldPosition();
const Ogre::Quaternion cameraOrientation = Stages::StageManager::getSingleton()->getActiveStage()->getActiveCamera()->getCameraWorldOrientation();

// Compose the 4x4 camera pose: rotation in the upper-left 3x3 block,
// camera position in the translation column.
Ogre::Matrix3 orientationMatrix;
cameraOrientation.ToRotationMatrix(orientationMatrix);

Ogre::Matrix4 cameraPose(orientationMatrix);
cameraPose.setTrans(cameraPosition);

// Sanity check: decompose the pose back into position/scale/orientation.
Ogre::Vector3 pos, scale;
Ogre::Quaternion orient;
cameraPose.decomposition(pos, scale, orient);

std::vector<Ogre::Vector2> projectedFeaturePoints;
Core::CameraIntrinsics cameraIntrinsics = Core::EnvironmentInformation::getSingleton()->getCameraIntrinsics();
Core::Resolution screenResolution = Core::EnvironmentInformation::getSingleton()->getScreenResolution();
Core::EnvironmentInformation::AspectRatio aspectRatio = Core::EnvironmentInformation::getSingleton()->getScreenAspectRatio();

Ogre::Matrix4 viewProjection = Stages::StageManager::getSingleton()->getActiveStage()->getActiveCamera()->getCameraViewProjectionMatrix();

// Invert the pose once (world -> camera space) instead of once per point.
const Ogre::Matrix4 worldToCamera = cameraPose.inverse();

for (int i = 0; i < out.numberofpoints; i++)
{
    Ogre::Vector4 pt;
    pt.x = out.pointlist[(3 * i)];
    pt.y = out.pointlist[(3 * i) + 1];
    pt.z = out.pointlist[(3 * i) + 2];
    pt.w = 1;

    // Transform the point into camera space.
    Ogre::Vector4 pnt = worldToCamera * pt;

    // Attempted pinhole projection into normalized image coordinates.
    float x = (((pnt.x - cameraPosition.x) * cameraIntrinsics.focalLength.x) / pnt.z) + cameraPosition.x;
    float y = (((pnt.y - cameraPosition.y) * cameraIntrinsics.focalLength.y) / pnt.z) + cameraPosition.y;

    projectedFeaturePoints.push_back(Ogre::Vector2(x, y));
}
Game Programming is the process of converting dead pictures to live ones.

There are some things that make understanding your post problematic:

8 hours ago, AhmedSaleh said:

the image center should be 0,0 to -1,1

A "center" is a point in space. How should "0,0 to -1,1" be understood in this context?

8 hours ago, AhmedSaleh said:

but I still don't get normalized points

You probably mean "a point in the range [-1,+1]x[-1,+1]". Alternatively, you can normalize a point (in the sense of making its homogeneous coordinate equal to 1), but that is - again, probably - not what you mean, is it?
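For reference, that kind of normalization is simply a division by the w component. A minimal sketch (the numbers are made up for illustration):

    // Normalizing a homogeneous point: divide through by w so that w becomes 1.
    Ogre::Vector4 p(2.0f, 4.0f, 6.0f, 2.0f);
    Ogre::Vector4 normalized = p / p.w;  // yields (1, 2, 3, 1)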

A position can be given in an infinite number of spaces. Because you're asking for a transformation of a position from a specific space into normalized space, it is important to know what the original space is. Your code snippet shows "out.pointlist" without giving any hint in which space the points in that point list are given. Are they given in model local space, or world space, or what?

 

In the end I would expect that the model's world matrix and the composition of the camera's view and projection matrices are all that is needed to do the job. You already fetch viewProjection by invoking getCameraViewProjectionMatrix() (BTW a function I did not find in Ogre's documentation). What is wrong with that matrix? What's the reason you are not using it?
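For what it's worth, a projection through that matrix could look roughly like the sketch below. This is a minimal sketch, not a drop-in fix: it assumes the points in out.pointlist are given in world space and that viewProjection really is the composed projection * view matrix; if the points are in model local space, the model's world matrix has to be applied first.

    // Sketch: project world-space points to normalized device coordinates
    // in [-1,+1]x[-1,+1] via the composed view-projection matrix.
    for (int i = 0; i < out.numberofpoints; i++)
    {
        Ogre::Vector4 pt(out.pointlist[3 * i],
                         out.pointlist[3 * i + 1],
                         out.pointlist[3 * i + 2],
                         1.0f);

        Ogre::Vector4 clip = viewProjection * pt;   // world -> clip space

        if (Ogre::Math::Abs(clip.w) > 1e-6f)        // skip points with w near 0
        {
            // Perspective divide yields normalized device coordinates.
            projectedFeaturePoints.push_back(
                Ogre::Vector2(clip.x / clip.w, clip.y / clip.w));
        }
    }

Points in front of the camera and inside the view frustum then end up with x and y in [-1,+1], which matches the "image center at 0,0" convention you described.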
