
Normalized Device Co-ordinates to World Space

I did write a wonderfully detailed post about my problem, then accidentally closed the tab. So this'll be a bit briefer, but hopefully still detailed enough to get some help!

Anyway, I'm trying to convert mouse co-ordinates to world space. At the moment, I'm passing in two normalized device co-ordinates, (x,y,-1.0f) and (x,y,1.0f), then transforming them by the inverse of (proj_matrix*view_matrix). I'm expecting to get two points - one on the near clipping plane and one on the far clipping plane, but I'm not. The near plane is 30, and the far plane is 5000, but I'm getting z values of 0.7 and 7 respectively.

I'm not doing any multiplication by any w value to get to clipping space - could that be the problem? If so, how should I get the w value to multiply all the elements by?

Here's the bits of my code that are relevant:

[code]Ray newray(0.2f,-0.2f,-1.0f,0.2f,-0.2f,1.0f);
newray.SetMatrices(cam_->GetProjectionMatrix(),cam_->GetViewMatrix());
newray.Calculate();[/code]

[code]class Ray
{
    cml::matrix44f_c inv_mat_;   // inverse of (projection * view)
    vector3f start_, end_;       // ray endpoints in normalized device coordinates

    vector3f transformed_start_, transformed_end_;   // endpoints after the inverse transform
public:
    Ray(float sx, float sy, float sz, float dx, float dy, float dz);
    void SetRayEnds(float sx, float sy, float sz, float dx, float dy, float dz);

    void SetMatrices(const cml::matrix44f_c & proj, const cml::matrix44f_c & view);
    void Calculate();

    vector3f GetYIntersection(float y);
};[/code]

[code]Ray::Ray(float sx, float sy, float sz, float dx, float dy, float dz) :
    inv_mat_(cml::identity_4x4()),
    start_(sx,sy,sz),
    end_(dx,dy,dz)
{
}

void Ray::SetRayEnds(float sx, float sy, float sz, float dx, float dy, float dz)
{
    start_.set(sx,sy,sz);
    end_.set(dx,dy,dz);
}

void Ray::SetMatrices(const cml::matrix44f_c & proj, const cml::matrix44f_c & view)
{
    // Cache the inverse of the combined projection * view matrix.
    inv_mat_ = cml::inverse(proj*view);
}

void Ray::Calculate()
{
    // Transform both NDC endpoints by the inverse matrix.
    transformed_start_ = cml::transform_point(inv_mat_, start_);
    transformed_end_ = cml::transform_point(inv_mat_, end_);
}[/code]

To all the matrix and graphics wizards, what am I doing wrong? Is this the way that you would approach the problem?

Thanks for your help!
Ben

EDIT: World space, not eye space. Edited by LordSputnik

[quote]normalized device co-ordinates, (x,y,-1.0f) and (x,y,1.0f)[/quote]
You are passing x and y in the range [-1.0,1.0], right?

Also, if you want to transform to "eye space", you only want to hit it with the inverse projection matrix. The inverse view matrix would transform from eye space to world space.

The perspective divide normalizes the x,y,z coordinates by w so that the homogeneous coordinate falls on the subspace w = 1. You shouldn't have to undo this.

Essentially, what you want to do is what gluUnProject does, but without the modelview matrix. Have a look at the pseudocode on the man page [url="http://www.opengl.org/sdk/docs/man/xhtml/gluUnProject.xml"]http://www.opengl.org/sdk/docs/man/xhtml/gluUnProject.xml[/url].
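To make that concrete, here's a minimal sketch of the same idea in plain C++ - the Mat4, Vec4 and mul helpers below are stand-ins rather than cml or GLU calls: build a homogeneous NDC point with w = 1, multiply it by the inverse of (projection * view), then divide the result by its w component.

[code]// Minimal sketch, not the poster's cml code: unproject an NDC point to world space.
// Mat4/Vec4/mul are stand-in helpers; inv is assumed to hold the inverse of
// (projection * view), stored row-major.
#include <array>

using Vec4 = std::array<float,4>;
using Mat4 = std::array<float,16>;   // row-major 4x4

static Vec4 mul(const Mat4 & m, const Vec4 & v)
{
    Vec4 r = {0.0f, 0.0f, 0.0f, 0.0f};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            r[row] += m[row*4 + col] * v[col];
    return r;
}

// ndc_z = -1 gives a point on the near plane, +1 a point on the far plane (GL conventions).
Vec4 Unproject(const Mat4 & inv, float ndc_x, float ndc_y, float ndc_z)
{
    Vec4 p = mul(inv, Vec4{ndc_x, ndc_y, ndc_z, 1.0f});

    // The result is homogeneous; dividing by w is what turns values like
    // 0.7 and 7 back into real world-space coordinates.
    float w = p[3];
    return Vec4{p[0]/w, p[1]/w, p[2]/w, 1.0f};
}[/code]

The two unprojected points should then sit on the near and far clipping planes rather than at depths like 0.7 and 7.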

Very sorry, I did mean world space - it was late and I was sleepy!

The only thing I can think it could be is the cml::transform_point function I'm using - I'd assumed it multiplies the input point by the matrix the same way gluUnProject does, but that might not be right. I'll try making a 4x1 matrix containing the values plus a 1 at the bottom, and using straightforward matrix multiplication.
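For what it's worth, a hypothetical version of Ray::Calculate() along those lines might look like the sketch below - whether cml's matrix/vector operator* behaves exactly like this depends on the cml version and basis settings, so treat the multiply as a placeholder:

[code]// Hypothetical sketch only - the exact cml calls may differ by version.
void Ray::Calculate()
{
    // Promote the NDC endpoints to homogeneous coordinates (w = 1).
    cml::vector4f s(start_[0], start_[1], start_[2], 1.0f);
    cml::vector4f e(end_[0], end_[1], end_[2], 1.0f);

    // Multiply by the inverse of (projection * view)...
    cml::vector4f ts = inv_mat_ * s;   // placeholder: 4x4 matrix * 4x1 vector
    cml::vector4f te = inv_mat_ * e;

    // ...then divide through by the resulting w.
    transformed_start_.set(ts[0]/ts[3], ts[1]/ts[3], ts[2]/ts[3]);
    transformed_end_.set(te[0]/te[3], te[1]/te[3], te[2]/te[3]);
}[/code]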
