Screen to World Ray

@Tispe: This code is dealing with model transformation. I'm not doing anything related to models; instead, I'm trying to raycast from the center of the camera forward into the scene.

I'm looking to do something exactly like the following:
http://forum.unity3d.com/threads/37637-RayCasting-from-screen-centre-point-(cross-hair)

Focusing on number 1 (raycasting for player shooting), I tried doing something close to what you said, but it's not working as expected. Can you show an example?


This is how I do it in my engine:

void Camera::getPickRay(int screenX, int screenY, Vec3& rayOrigin, Vec3& rayDirection, Math::CoordinateSpace space)
{
    // Unproject the screen position into view space using the projection matrix.
    float vx = (((2.0f * screenX) / (float)mGraphicsEngine->getScreenWidth()) - 1.0f) / mProjectionMatrix._11;
    float vy = (((-2.0f * screenY) / (float)mGraphicsEngine->getScreenHeight()) + 1.0f) / mProjectionMatrix._22;

    switch (space)
    {
    case Basis::Math::SpaceLocal:
    {
        rayOrigin.set(0.0f, 0.0f, 0.0f);
        rayDirection.set(vx, vy, 1.0f);
        rayDirection.normalize();
        break;
    }
    case Basis::Math::SpaceWorld:
    {
        const Mat4& worldMatrix = mParentNode->getLocalToWorldTransform();
        Vec4 o(0.0f, 0.0f, 0.0f, 1.0f); // point: W = 1
        Vec4 d(vx, vy, 1.0f, 0.0f);     // direction: W = 0
        o = o * worldMatrix;
        d = d * worldMatrix;
        rayOrigin.set(o);
        rayDirection.set(d);
        rayDirection.normalize();
        break;
    }
    }
}


Basically you give the function the screen coordinates, a ray origin vector, a ray direction vector and the space you want the ray in (world or local). Since this is the camera we are talking about, the local space is effectively the view space.

So, first you get the ray in view space. For that we need the projection matrix of the camera, the screen coordinates and the screen size. If you are fine with view space you just fill in the vectors and you are good to go.

If you want the ray in world space you need to transform it with the world matrix, aka "local-to-world-matrix", aka "view-to-world-matrix". If you happen to have the "world-to-view-matrix" instead, like most camera implementations, the one you want is just the inverse of that. You fill out two 4D vectors (the origin with W=1, the direction with W=0) and transform them by the matrix. After that you should probably normalize the direction.

Voila, you now have the "start-point" of the ray (which is still just the camera position btw) and a unit-length direction vector of the ray, both in world space. Multiply the direction with whatever huge number you want to get a long ray cast.
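
In D3DX terms the same steps look roughly like this. This is only a sketch: matView is assumed to be your camera's world-to-view matrix, and vx/vy are the values computed as above (both 0 for the screen centre).

// matView: world-to-view matrix of the camera (assumed available).
D3DXMATRIX matViewInverse; // view-to-world, i.e. the camera's world matrix
D3DXMatrixInverse(&matViewInverse, NULL, &matView);

D3DXVECTOR3 rayOrigin(0.0f, 0.0f, 0.0f);   // camera position in view space
D3DXVECTOR3 rayDirection(vx, vy, 1.0f);    // forward through the chosen pixel

// TransformCoord treats the vector as a point (W = 1),
// TransformNormal as a direction (W = 0).
D3DXVec3TransformCoord(&rayOrigin, &rayOrigin, &matViewInverse);
D3DXVec3TransformNormal(&rayDirection, &rayDirection, &matViewInverse);
D3DXVec3Normalize(&rayDirection, &rayDirection);

// A far end point for APIs that want a line segment rather than an infinite ray.
const float farDistance = 1000.0f;
D3DXVECTOR3 rayEnd = rayOrigin + rayDirection * farDistance;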

Does this help?

@Tispe: This code is dealing with model transformation, I'm not doing anything related to models, instead, I'm trying to raycast from the center of the camera to the forward area.


The matrices are not only for meshes/models. You use their inverses to go from screen coordinates (your crosshair) back to world space, which gives you the origin and direction vectors.
Yes, if you look closely at Tispe's code you'll notice that it does exactly the same as mine. With the added bonus that it probably compiles for you out of the box, if you use D3DX.
Sorry for the delay. I have tried to implement the idea from the above code but couldn't get it to work. If you can just show an example of getting the two values RayFrom and RayTo it would be greatly appreciated:

float distance = 1000.0f;
D3DXVECTOR3 RayFrom; // according to the screen "+" symbol (middle of the screen)
D3DXVECTOR3 RayTo;   // according to distance
Since your "+" symbol (cross-hair) is in the middle of the screen, xAngle and yAngle are both 0.

You only need this code

D3DXVECTOR3 origin, direction;
origin = D3DXVECTOR3(0.0f, 0.0f, 0.0f);
direction = D3DXVECTOR3(0.0f, 0.0f, 1.0f);
// find the inverse of the combined world*view matrix
// (use D3DXMatrixInverse(&matInverse, NULL, &matView) instead if you want the ray in world space)
D3DXMATRIX matInverse;
D3DXMatrixInverse(&matInverse, NULL, &(matWorld * matView));
// convert origin and direction into model space
D3DXVec3TransformCoord(&origin, &origin, &matInverse);
D3DXVec3TransformNormal(&direction, &direction, &matInverse);
D3DXVec3Normalize(&direction, &direction);


You can remove matWorld and you will get the origin and direction in world space. By keeping matWorld you reverse the ray into the model space of the mesh. This is useful if you want to check if the mesh is intersecting with the ray.
Okay, I tried the following:

D3DXVECTOR3 origin, direction;
origin = D3DXVECTOR3(0.0f, 0.0f, 0.0f);
direction = D3DXVECTOR3(0.0f, 0.0f, 1.0f);
// find the inverse of the view matrix
D3DXMATRIX matInverse;
D3DXMATRIX matView = camera->GetViewMatrix();
D3DXMatrixInverse(&matInverse, NULL, &matView);
// convert origin and direction into world space (no matWorld here, so not model space)
D3DXVec3TransformCoord(&origin, &origin, &matInverse);
D3DXVec3TransformNormal(&direction, &direction, &matInverse);
D3DXVec3Normalize(&direction, &direction);
DWORD hitResult = Raytest(origin, direction);
// Raytest() should return the ID of the mesh that was hit when the player is shooting


What's wrong in the above code? It's not hitting the correct mesh. Also, where is the distance? I'm trying to get hitResult so I can know which mesh was hit, then I will apply damage to that mesh.
The Ray just has an origin and a direction; it goes on to infinity. When you do an intersect test you provide the mesh and the ray, and the function tests if the Ray intersects that mesh and at what distance from the origin. You should use D3DXIntersect() unless you completely trust your Raytest() function.

I think you can't test if you hit a mesh in World Space. You need to include matWorld when you reverse the ray from screen to model space. That way the Ray will be in the Model space, and only there can you test if the Ray intersects any polygons of the mesh.

So you gotta do this Ray test for each mesh; that means reverse-transforming the Ray for each model using that model's matWorld.
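
To make that concrete, a rough sketch of the per-mesh loop could look like this. The 'meshes' array, 'meshCount' and the nearest-hit bookkeeping are made up for illustration; also note that D3DXIntersect returns the distance in the mesh's local space, so comparing distances across meshes like this assumes the world matrices contain no scaling.

// rayOrigin / rayDirection: the pick ray in world space.
int   hitMeshIndex = -1;
float closestDist  = FLT_MAX; // from <cfloat>

for (int i = 0; i < meshCount; ++i)
{
    // Pull the world-space ray back into this mesh's model space.
    D3DXMATRIX invWorld;
    D3DXMatrixInverse(&invWorld, NULL, &meshes[i].worldMatrix);

    D3DXVECTOR3 localOrigin, localDir;
    D3DXVec3TransformCoord(&localOrigin, &rayOrigin, &invWorld);
    D3DXVec3TransformNormal(&localDir, &rayDirection, &invWorld);
    D3DXVec3Normalize(&localDir, &localDir);

    // Triangle-level test against the mesh geometry.
    BOOL  hit = FALSE;
    DWORD faceIndex;
    float u, v, dist;
    D3DXIntersect(meshes[i].mesh, &localOrigin, &localDir,
                  &hit, &faceIndex, &u, &v, &dist, NULL, NULL);

    if (hit && dist < closestDist) // keep the nearest hit across all meshes
    {
        closestDist  = dist;
        hitMeshIndex = i;
    }
}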
I think you can't test if you hit a mesh in World Space. You need to include matWorld when you reverse the ray from screen to model space.

That's a little confusing. I just use the Bullet Physics ray test, which creates a line in world space; I can use it to get the closest hit and it returns information about which mesh was hit. I do nothing other than give the start and end point and it tells me which mesh was intersected.
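
For reference, a Bullet ray test with a start and end point typically looks something like the sketch below. This is not the poster's actual code: 'world' is assumed to be a btDiscreteDynamicsWorld*, and the user-pointer lookup only works if you stored your own entity/mesh pointer on the rigid body when it was created.

// rayOrigin / rayDirection: the world-space pick ray from earlier.
btVector3 rayFrom(rayOrigin.x, rayOrigin.y, rayOrigin.z);
btVector3 rayTo(rayOrigin.x + rayDirection.x * 1000.0f,
                rayOrigin.y + rayDirection.y * 1000.0f,
                rayOrigin.z + rayDirection.z * 1000.0f);

btCollisionWorld::ClosestRayResultCallback rayCallback(rayFrom, rayTo);
world->rayTest(rayFrom, rayTo, rayCallback);

if (rayCallback.hasHit())
{
    const btCollisionObject* hitObject = rayCallback.m_collisionObject;
    // Only meaningful if you called setUserPointer() on the body yourself.
    void* userData = hitObject->getUserPointer();
    // ... map userData back to your mesh/entity and apply damage ...
}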

I think you can't test if you hit a mesh in World Space.


As far as I know, Bullet expects the ray origin and direction in world space. Also, I don't see why you could not do that in any case. Spaces are just different frames of reference. The important thing is that all data is in the same space.

A few questions for Medo3337:

1. Do your physics meshes have the exact same geometry as your visual meshes? Most games use vastly simplified physics meshes compared to their visual counterparts. This of course means that doing ray casts with the physics engine will not give you the exact face you are seeing on the screen, but the face of the physics mesh. Also, I don't remember if Bullet gives you faces at all, or just the rigid body, but you probably know that better than me.

2. If you have problems getting the right ray, have you tried drawing a line where it is going, to see if it is correctly lined up? Of course you cannot see the ray as you are casting it, since you are effectively looking along the ray itself, so it would appear as just a dot. Instead you can set up your game to cast a ray when you left click, or something like that. The ray info would then be stored in two vectors and drawn on the screen until you cast the next ray. That way you will have time to move your camera a bit to the side so that you can see the ray.
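
For the debug line, something along these lines would work with a D3D9 device and the fixed-function pipeline; the vertex format, colour and render states are just an example, and the view/projection transforms are assumed to be set already.

struct LineVertex { float x, y, z; DWORD color; };
#define LINE_FVF (D3DFVF_XYZ | D3DFVF_DIFFUSE)

void DrawDebugRay(IDirect3DDevice9* device,
                  const D3DXVECTOR3& from, const D3DXVECTOR3& to)
{
    LineVertex verts[2] =
    {
        { from.x, from.y, from.z, D3DCOLOR_XRGB(255, 0, 0) },
        { to.x,   to.y,   to.z,   D3DCOLOR_XRGB(255, 0, 0) },
    };

    // The points are already in world space, so use an identity world transform.
    D3DXMATRIX identity;
    D3DXMatrixIdentity(&identity);
    device->SetTransform(D3DTS_WORLD, &identity);

    device->SetRenderState(D3DRS_LIGHTING, FALSE); // show the vertex colour as-is
    device->SetTexture(0, NULL);
    device->SetFVF(LINE_FVF);
    device->DrawPrimitiveUP(D3DPT_LINELIST, 1, verts, sizeof(LineVertex));
}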

