3D to 2D space


Hi! I'm trying to fix my font class so I can pass 3D positions to the print function and have them projected onto the screen (3D to 2D space). However, I can't make it work :( I guess my math is a bit rusty. This is the best I can do:
void crFont::Print(V3D *pPos, char *pText, ...)
{
	crText		pNewText;
	crMatrix	ViewMatrix;
	V3D			vTransformed;

	glGetFloatv(GL_MODELVIEW_MATRIX, ViewMatrix.m_pMatrix);
	ViewMatrix.TransformPoint(&vTransformed, pPos);
	//if(vTransformed.z > 0.0f)
	{
		va_list List;
		va_start(List, pText);
		vsprintf(pNewText.m_pText, pText, List);
		va_end(List);

		//OBS! Screen coords are 0-100.
		pNewText.m_vPos.x = vTransformed.x * (1.0f / vTransformed.z) * 100.0f;
		pNewText.m_vPos.y = vTransformed.y * (1.0f / vTransformed.z) * 100.0f;

		pNewText.m_iSet		= 0;
		pNewText.m_vScale	= V2D(0.08f, 0.15f);

		float pColor[4] = {0.9f, 0.9f, 0.9f, 1.0f};
		memcpy(pNewText.m_pColor, pColor, sizeof(pNewText.m_pColor));
		// (rest of the function, which queues pNewText for rendering, snipped)
	}
}


Please help me.

I'm not 100% sure of what you're doing, but it sounds like you might want to render the text to a texture and then use that on a quad. The quad is in 3D space, so you can then give it 3D coordinates.

Sorry if this isn't what you're looking for.

This is easy.

Supposing that the target 2D plane lies from (0,0) to (1,1) then you do this:

- Divide the x and y components by the w component.
- Multiply x and y by 0.5 and add 0.5 to each.

That should do it. I've only dealt with this in a shader, but the same principle should apply.
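In plain C++ the two steps above might look like this (the Vec4/Vec2 types and function name are just stand-ins, not from anyone's actual code):

```cpp
// Hypothetical vector types standing in for whatever the engine uses.
struct Vec4 { float x, y, z, w; };
struct Vec2 { float x, y; };

// Map a clip-space position (i.e. after the projection transform) onto
// the (0,0)-(1,1) plane: perspective-divide by w, then scale/offset the
// [-1,1] range into [0,1].
Vec2 ClipToUnitPlane(const Vec4& clip)
{
    Vec2 out;
    out.x = clip.x / clip.w * 0.5f + 0.5f;
    out.y = clip.y / clip.w * 0.5f + 0.5f;
    return out;
}
```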


The goal is to project the point on the "Z near" camera plane, get the x/y coordinates of that point, divide them respectively by the width and height of the visible volume on the "Z near" plane, and then adjust those normalized coordinates to the dimensions of your viewport.

One thing at a time:
1. You obviously know the position of the camera, and you can find its local axes by extracting the first three row vectors from your view matrix. These vectors are the 3D expressions of "right (X+)", "up (Y+)", and "farther (Z+)" in your viewport.

2. Transform the point of interest, P, by the view matrix with w==1, and divide all components by its new w. The new point, P', is P expressed in the 3D coordinate system implied by the camera.

3. Let P' = {a,b,c}. This means that the world coordinates of P are expressed as: P = CamPos + a*CamX + b*CamY + c*CamZ. The coordinates {a,b} are the equivalent of the {x,y} coordinates of any point on a regular 2D plane.
For these coordinates to be visible in your viewport, the following must hold:
-ProjectionMatrix(1,1)/2 < a < ProjectionMatrix(1,1)/2
-ProjectionMatrix(2,2)/2 < b < ProjectionMatrix(2,2)/2
because the width of the visible space on the near Z plane is the member (1,1) of the projection matrix, and analogously for its height...

4. Divide "a" by ProjectionMatrix(1,1) and "b" by ProjectionMatrix(2,2), and add 0.5. Now the coordinates have been normalized to the range (0,1)
Simply multiply them by the pixel width and height of your viewport, and you're ready.

I've made a mistake... The coordinates {a,b} must not be normalized to the width and height of the volume *on the "near Z" plane*, but to the width and height of the view frustum at camera Z == "c".
In short, divide "a" by c*ProjMat(1,1)/Znear, and "b" by c*ProjMat(2,2)/Znear

(At local Z == Znear, View width == ProjectionMatrix(1,1)
At local Z == c, View width == c*ProjectionMatrix(1,1)/Znear)
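A minimal C++ sketch of steps 1-4 with the depth correction folded in, using the poster's convention that Proj(1,1) and Proj(2,2) give the visible width and height on the near plane (all type and parameter names here are illustrative, not from the original code):

```cpp
// P' = {a, b, c}: the point already transformed into camera space.
struct CamSpacePoint { float a, b, c; };

// Normalize camera-space coordinates to (0,1) and scale to the viewport.
// The frustum widens linearly with depth, so at local Z == c the visible
// width is c*proj11/zNear (the correction from the last paragraph above).
void ToViewport(const CamSpacePoint& p,
                float proj11, float proj22, float zNear,
                float viewW, float viewH,
                float& outX, float& outY)
{
    float widthAtC  = p.c * proj11 / zNear;
    float heightAtC = p.c * proj22 / zNear;
    outX = (p.a / widthAtC  + 0.5f) * viewW;   // step 4: normalize, offset, scale
    outY = (p.b / heightAtC + 0.5f) * viewH;
}
```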

One question... Step 2.
Do you mean:

My point:
P = {x, y, z, w = 1}

My model view matrix:

My new P:
P2 = {P.x / P.w, P.y / P.w, P.z / P.w}


My new P:
P2 = {P.x / P.w, P.y / P.w, P.z / P.w}

Yes, that's exactly what I meant.
I am not familiar with OpenGL syntax, but if the vector you wish to transform represents a point, it must be transformed with w==1. This will cause its w to change, and the result must be projected back to w==1 in order for it to actually represent the transformed point in the original space.

The other case is when you transform a vector as a direction (which cannot be affected by translation); that should be performed with w==0, and w will remain zero after the transformation.
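The point-vs-direction distinction can be sketched like this, using the row-vector convention with the translation in the last row, matching the D3D-style code later in the thread (the types and names are stand-ins):

```cpp
// Minimal 4x4 matrix and 4D vector, row-major, translation in row 3.
struct Mat4 { float m[4][4]; };
struct Vec4 { float x, y, z, w; };

// Row-vector times matrix: result_j = sum_i v_i * M[i][j].
Vec4 Transform(const Vec4& v, const Mat4& M)
{
    return {
        v.x*M.m[0][0] + v.y*M.m[1][0] + v.z*M.m[2][0] + v.w*M.m[3][0],
        v.x*M.m[0][1] + v.y*M.m[1][1] + v.z*M.m[2][1] + v.w*M.m[3][1],
        v.x*M.m[0][2] + v.y*M.m[1][2] + v.z*M.m[2][2] + v.w*M.m[3][2],
        v.x*M.m[0][3] + v.y*M.m[1][3] + v.z*M.m[2][3] + v.w*M.m[3][3],
    };
}
// A point (w==1) picks up the translation row; a direction (w==0)
// ignores it and keeps w == 0 after the transform.
```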

I got it.
The following code is tested in DX and works.

// vG is the global vector to project. vP is its projection in camera space
D3DXVECTOR4 vP = D3DXVECTOR4( vG.x*matView._11 + vG.y*matView._21 + vG.z*matView._31 + vG.w*matView._41,
vG.x*matView._12 + vG.y*matView._22 + vG.z*matView._32 + vG.w*matView._42,
vG.x*matView._13 + vG.y*matView._23 + vG.z*matView._33 + vG.w*matView._43,
vG.x*matView._14 + vG.y*matView._24 + vG.z*matView._34 + vG.w*matView._44 );

vP /= vP.w;

// This is the point in pixel coordinates. ResX, ResY are the resolution of the render target
vS.x = ResX*( .5f*( 1.f + matProj._11*vP.x/vP.z));
vS.y = ResY*(-.5f*(-1.f + matProj._22*vP.y/vP.z));

To port it to OpenGL, just calculate vP using the symmetric (transposed) members of your view matrix in place of the ones I use.

That's good... :)

Notice that the standard way to do this is to transform vP by (matProj*matView), not just by matView, and then calculate vS as
vS = { ResX*.5f*(1.f+vP.x), ResY*.5f*(1.f-vP.y) }; however, you can save a matrix product (in exchange for a couple of scalar products and two divisions) if you write it as I showed... I suppose it's a little faster this way...
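That standard last step can be sketched on its own, assuming vP has already been transformed by (matProj*matView) and divided by w, so x and y lie in [-1,1] (names are illustrative):

```cpp
struct Vec2 { float x, y; };

// Map normalized device coordinates ([-1,1], y up) to pixel coordinates
// ([0,ResX] x [0,ResY], y down).
Vec2 NdcToPixels(float x, float y, float ResX, float ResY)
{
    return { ResX * 0.5f * (1.f + x),
             ResY * 0.5f * (1.f - y) };
}
```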

