



Converting worldspace to screenspace


Okay, I'm having a bit of trouble converting some objects from worldspace to screenspace in OpenGL. I know the way to do this is to multiply the view matrix by the camera matrix to get the transformation matrix, and then multiply that by your worldspace vector to get the screenspace coordinate (I think?). However, OpenGL has its 'Modelview' matrix, and no pure view matrix.

Basically, what I'm trying to do is a basic lens flare effect. I have the flare location in world space and need its screenspace coordinates so I can project some flares along a line on the screen. Could someone help me out with getting the screenspace coordinates in OpenGL? It'd be much appreciated, thanks.

~David M. Byttow

AFAIK (I'm more of a D3D guy) - GL's ModelView is a concatenated World matrix and View matrix (the World matrix transforms from the object's own local space into world space, and the View matrix represents the orientation and position of the camera).

Simply multiply WorldView by the projection matrix (what you refer to as "the camera matrix")...

Matrix mWorldViewProject = mWorldView * mProjection;
Vector vout = v * mWorldViewProject;

If the projection matrix produces a non-unit w (perspective projections do), you'll need to use a 4-element vector with w set to 1, then divide the resulting vector by its w:

Vector4 vin = v;
vin.w = 1.0f;
Vector4 vout = vin * mWorldViewProject;
vout.x = vout.x / vout.w;
vout.y = vout.y / vout.w;
vout.z = vout.z / vout.w;
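Putting the two snippets above together, here is a minimal, self-contained sketch in plain C++ (no GL calls; the `Mat4`/`Vec4` types, the function names, and the row-vector `v * M` convention are all illustrative assumptions, not part of any GL API) of taking a world-space point all the way to window coordinates:

```cpp
#include <array>
#include <cmath>

// Row-major 4x4 matrix; points are row vectors, transformed as v * M,
// matching the convention used in the reply above.
struct Mat4 {
    std::array<float, 16> m; // m[row * 4 + col]
};

struct Vec4 { float x, y, z, w; };

// Row vector times matrix: v * M.
Vec4 mul(const Vec4& v, const Mat4& M) {
    auto at = [&](int r, int c) { return M.m[r * 4 + c]; };
    return {
        v.x * at(0, 0) + v.y * at(1, 0) + v.z * at(2, 0) + v.w * at(3, 0),
        v.x * at(0, 1) + v.y * at(1, 1) + v.z * at(2, 1) + v.w * at(3, 1),
        v.x * at(0, 2) + v.y * at(1, 2) + v.z * at(2, 2) + v.w * at(3, 2),
        v.x * at(0, 3) + v.y * at(1, 3) + v.z * at(2, 3) + v.w * at(3, 3),
    };
}

// World-space point -> window (pixel) coordinates, given a combined
// modelview * projection matrix and the viewport size.
Vec4 worldToScreen(Vec4 world, const Mat4& mvp,
                   float viewportW, float viewportH) {
    world.w = 1.0f;                 // homogeneous point
    Vec4 clip = mul(world, mvp);    // world -> clip space
    float invW = 1.0f / clip.w;     // perspective divide...
    float ndcX = clip.x * invW;     // ...gives NDC in [-1, 1]
    float ndcY = clip.y * invW;
    // Viewport transform: NDC -> pixels (origin at bottom-left, like GL).
    return { (ndcX * 0.5f + 0.5f) * viewportW,
             (ndcY * 0.5f + 0.5f) * viewportH,
             clip.z * invW, 1.0f };
}
```

With the resulting x/y you can draw the flare quads in a 2D ortho pass along the line from that point through the screen centre.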

Simon O'Connor
Creative Asylum Ltd

Right, I thought it was something along those lines. However, with OpenGL (and in D3D, I think) you merely translate to the 'camera' position before you begin drawing the scene. Since the scene begins at an identity matrix, does that translation make up your camera matrix? Say in a given scene I don't rotate, but I translate the camera to vector3(50, 25, 0).

Would that make the camera matrix equal to
|1 0 0 50 |
|0 1 0 25 |
|0 0 1 0 |
|0 0 0 1 |

I'm assuming the camera matrix should be homogeneous (4x4) to multiply it by OpenGL's 4x4 modelview matrix.

So then it's a matter of filling the modelview matrix from OpenGL and my camera matrix to get

[t] = [v] * [c]
where t is the transformation matrix.

vout = vin * [t]
vout = vout / vout.w

That should give me the vector (vout) in screenspace coordinates?
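One subtlety worth checking in the question above: in GL you normally translate by the *negative* of the camera position before drawing (which is what gluLookAt effectively builds), so the view matrix is the inverse of the camera's placement. For a camera at (50, 25, 0) with no rotation, the view matrix carries -50 and -25 in its translation column. A small sketch, assuming column-major storage as glGetFloatv(GL_MODELVIEW_MATRIX) returns it (the function names here are hypothetical):

```cpp
#include <array>

// Column-major 4x4 storage, as glGetFloatv(GL_MODELVIEW_MATRIX)
// returns it: element (row r, col c) sits at m[c * 4 + r], so the
// translation lives in m[12], m[13], m[14].
using Mat4 = std::array<float, 16>;

// View matrix for a camera that is only translated, never rotated:
// the inverse of "place camera at camPos" is "translate by -camPos",
// i.e. what glTranslatef(-cx, -cy, -cz) would build.
Mat4 viewFromCameraPosition(float cx, float cy, float cz) {
    Mat4 v = {1, 0, 0, 0,
              0, 1, 0, 0,
              0, 0, 1, 0,
              0, 0, 0, 1};
    v[12] = -cx;
    v[13] = -cy;
    v[14] = -cz;
    return v;
}

// M * p for a column vector, to check the matrix does what we expect.
std::array<float, 4> mulPoint(const Mat4& M, const std::array<float, 4>& p) {
    std::array<float, 4> out{0.0f, 0.0f, 0.0f, 0.0f};
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            out[r] += M[c * 4 + r] * p[c];
    return out;
}
```

A quick sanity check: a world point sitting exactly at the camera position (50, 25, 0) should land at the eye-space origin after the view transform.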

Thanks for your help.

~David M. Byttow

Edited by - guitardave24 on July 21, 2001 5:19:10 PM
