AhmedSaleh

Converting 3D Points into 2D

Recommended Posts

I have a simple problem: just converting 3D points into 2D image coordinates; the image center should be 0,0 to -1,1. I have done the following equations with the help of @iedoc, but I still don't get normalized points. Another question: how would I debug it? I only have the ability to draw spheres, so I can't draw 2D circles.

First, I have the camera position and orientation as a quaternion. I convert the quaternion to a rotation matrix, then compose the 4x4 camera pose matrix. That part works and I have tested it.

const Ogre::Vector3 cameraPosition = Stages::StageManager::getSingleton()->getActiveStage()->getActiveCamera()->getCameraWorldPosition();
const Ogre::Quaternion cameraOrientation = Stages::StageManager::getSingleton()->getActiveStage()->getActiveCamera()->getCameraWorldOrientation();

Ogre::Matrix4 cameraPose;
Ogre::Matrix3 orientationMatrix;

cameraOrientation.ToRotationMatrix(orientationMatrix);

// Upper-left 3x3 block: rotation
cameraPose[0][0] = orientationMatrix[0][0];
cameraPose[1][0] = orientationMatrix[1][0];
cameraPose[2][0] = orientationMatrix[2][0];
cameraPose[0][1] = orientationMatrix[0][1];
cameraPose[1][1] = orientationMatrix[1][1];
cameraPose[2][1] = orientationMatrix[2][1];
cameraPose[0][2] = orientationMatrix[0][2];
cameraPose[1][2] = orientationMatrix[1][2];
cameraPose[2][2] = orientationMatrix[2][2];
// Last column: translation
cameraPose[0][3] = cameraPosition.x;
cameraPose[1][3] = cameraPosition.y;
cameraPose[2][3] = cameraPosition.z;
// Bottom row
cameraPose[3][0] = 0;
cameraPose[3][1] = 0;
cameraPose[3][2] = 0;
cameraPose[3][3] = 1;

Ogre::Vector3 pos, scale;
Ogre::Quaternion orient;

cameraPose.decomposition(pos, scale, orient);

std::vector<Ogre::Vector2> projectedFeaturePoints;
Core::CameraIntrinsics cameraIntrinsics = Core::EnvironmentInformation::getSingleton()->getCameraIntrinsics();
Core::Resolution screenResolution = Core::EnvironmentInformation::getSingleton()->getScreenResolution();
Core::EnvironmentInformation::AspectRatio aspectRatio = Core::EnvironmentInformation::getSingleton()->getScreenAspectRatio();

Ogre::Matrix4 viewProjection = Stages::StageManager::getSingleton()->getActiveStage()->getActiveCamera()->getCameraViewProjectionMatrix();

// Invert once outside the loop instead of once per point
const Ogre::Matrix4 inverseCameraPose = cameraPose.inverse();

for (int i = 0; i < out.numberofpoints; i++)
{
    Ogre::Vector4 pt;
    pt.x = out.pointlist[(3*i)];
    pt.y = out.pointlist[(3*i) + 1];
    pt.z = out.pointlist[(3*i) + 2];
    pt.w = 1;

    // World space -> camera space
    Ogre::Vector4 pnt = inverseCameraPose * pt;

    float x = (((pnt.x - cameraPosition.x) * cameraIntrinsics.focalLength.x) / pnt.z) + cameraPosition.x;
    float y = (((pnt.y - cameraPosition.y) * cameraIntrinsics.focalLength.y) / pnt.z) + cameraPosition.y;

    projectedFeaturePoints.push_back(Ogre::Vector2(x, y));
}
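For reference, a minimal stand-alone sketch of the standard pinhole projection the intrinsics math above seems to be aiming at. The plain structs are stand-ins for the Ogre/Core types (fx, fy are focal lengths in pixels, cx, cy is the principal point in pixels; note it is an image-space offset, not the camera's world position):

```cpp
#include <cassert>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Pinhole projection of a CAMERA-SPACE point to pixel coordinates:
// u = fx * x / z + cx,  v = fy * y / z + cy
Vec2 projectPinhole(const Vec3& camPt, float fx, float fy, float cx, float cy) {
    return { fx * camPt.x / camPt.z + cx,
             fy * camPt.y / camPt.z + cy };
}

// Map pixel coordinates into [-1, +1] with the image center at (0, 0).
Vec2 toNormalized(const Vec2& px, float width, float height) {
    return { 2.0f * px.x / width - 1.0f,
             1.0f - 2.0f * px.y / height };  // flip y so +y points up
}
```

For example, a point straight ahead of the camera at camera-space (0, 0, 2), with a 640x480 image and principal point (320, 240), projects to pixel (320, 240) and normalizes to (0, 0).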


There are some things that make understanding your post problematic:

8 hours ago, AhmedSaleh said:

the image center should be 0,0 to -1,1

A "center" is a point in space. How should "0,0 to -1,1" be understood in this context?

8 hours ago, AhmedSaleh said:

but I still don't get normalized points

You probably mean "a point in the range [-1,+1]x[-1,+1]". Otherwise, you can normalize a point (meaning: make its homogeneous coordinate equal to 1), but that is, again, probably not what you mean, is it?

A position can be given in an infinite number of spaces. Because you're asking for a transformation of a position from a specific space into normalized space, it is important to know what the original space is. Your code snippet shows "out.pointlist" without giving any hint about which space the points in that point list are given in. Are they in model local space, or world space, or something else?

 

In the end I would expect that the model's world matrix and the composition of the camera's view and projection matrices are all that is needed to do the job. You already fetch viewProjection by invoking getCameraViewProjectionMatrix() (BTW, a function I could not find in Ogre's documentation). What is wrong with that matrix? What's the reason you are not using it?
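To illustrate the view-projection route with a concrete, Ogre-independent sketch: multiply a world-space point by the combined view-projection matrix, then divide by w. The result is in normalized device coordinates, i.e. [-1,+1]x[-1,+1] with the image center at (0,0). The plain structs and the OpenGL-style perspective matrix below are stand-ins, not Ogre's actual types or whatever getCameraViewProjectionMatrix() actually returns:

```cpp
#include <array>
#include <cassert>
#include <cmath>

struct Vec4 { float x, y, z, w; };
using Mat4 = std::array<std::array<float, 4>, 4>;  // row-major: m[row][col]

// A simple OpenGL-style perspective matrix (view assumed identity here).
Mat4 makePerspective(float fovYDeg, float aspect, float zNear, float zFar) {
    const float f = 1.0f / std::tan(fovYDeg * 3.14159265f / 360.0f);
    Mat4 m{};  // zero-initialized
    m[0][0] = f / aspect;
    m[1][1] = f;
    m[2][2] = (zFar + zNear) / (zNear - zFar);
    m[2][3] = 2.0f * zFar * zNear / (zNear - zFar);
    m[3][2] = -1.0f;  // copies -z into clip-space w
    return m;
}

Vec4 mul(const Mat4& m, const Vec4& v) {
    return { m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + m[0][3]*v.w,
             m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + m[1][3]*v.w,
             m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + m[2][3]*v.w,
             m[3][0]*v.x + m[3][1]*v.y + m[3][2]*v.z + m[3][3]*v.w };
}

// World point -> normalized device coordinates: clip space, then divide by w.
Vec4 toNdc(const Mat4& viewProj, const Vec4& worldPt) {
    const Vec4 clip = mul(viewProj, worldPt);
    return { clip.x / clip.w, clip.y / clip.w, clip.z / clip.w, 1.0f };
}
```

In Ogre terms this would be `viewProjection * pt` followed by dividing the result's x, y, z by its w; every visible point then lands in [-1,+1] on both axes, with the screen center at (0,0).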
