
Ray picking without D3DXIntersect


I am having some problems with ray picking in DirectX. I am using it for a 3D map editor; the objects to be selected are not meshes, but simple vertex-defined quads with textures. I cannot fully emulate the pick example in the SDK: while it was useful for demonstrating the principles underlying the process, it is designed for meshes and uses the D3DXIntersect function, which takes a mesh as an argument.

I have picked one of the quads as a test object and am trying to get the program to pick that object when the mouse is over it. I am using a plane intersection check, but the intersection vector is always wrong. I am coding it like this. Apologies for the slightly lengthy listing, but I might as well make sure all the bases are covered:
POINT ptCursor;

GetCursorPos(&ptCursor);
ScreenToClient(g_hWndGalaxy, &ptCursor);
	
D3DXVECTOR3 vecMouse, vecFar, vecIntersect, vecScreen, vecObject(g_x, g_y, g_z);

// g_x, g_y and g_z are the world-space
// coordinates of the object I am using
// as a test object

D3DXMATRIX matWorld, matProj, matView, matViewInv, matProjInv;

D3DVIEWPORT9 vpScreen;
	
g_pd3dDevice->GetTransform(D3DTS_WORLD, &matWorld);
g_pd3dDevice->GetTransform(D3DTS_VIEW, &matView);
g_pd3dDevice->GetTransform(D3DTS_PROJECTION, &matProj);
g_pd3dDevice->GetViewport(&vpScreen);

vecMouse.x = float (ptCursor.x);
vecMouse.y = float (ptCursor.y);
vecMouse.z = 0.001f;

D3DXVec3Unproject(&vecMouse, &vecMouse, &vpScreen, &matProj, 
                  &matView, &matWorld);
	
vecMouse.z = 0.99f;

D3DXVec3Unproject(&vecFar, &vecMouse, &vpScreen, &matProj,   
                  &matView, &matWorld);

D3DXPLANE plZplane;
D3DXVECTOR3 v1, v2, v3;
	
D3DXPlaneFromPoints( &plZplane, 
                       &D3DXVECTOR3( 0, 0, g_z ),
                       &D3DXVECTOR3( 0, -1, g_z ),
                       &D3DXVECTOR3( 1, 0, g_z ) ); 

// It seemed initially that creating the plane at the z
// coordinate of the object I'm testing would be best.
// However, I have a feeling that might be wrong.
               

D3DXPlaneIntersectLine(&vecIntersect, &plZplane, &vecMouse,     
                       &vecFar);



The problem is very simple: vecIntersect, even though the plane is created at the same Z coordinate as the test object, is always way, way off from the object's actual world coordinates (g_x, g_y, and g_z). I haven't been able to notice any pattern in the discrepancy either; it's just way off. I can't for the life of me figure out why.

A vector is derived from the mouse position, another vector further down the z axis with the same x,y coordinates, a plane is created at the z coordinate of the test object I'm trying to check, and the code checks for an intersection. But so far no dice, which is perplexing, because I think the way I've done it is pretty much identical to the plane/intersection method for picking that I've seen in other threads and tutorials.

It seems that, for some reason, the mouse coordinates are just not gelling with the world coordinates of the object; it's as if the mouse and the object I'm trying to check are in two different world transforms, but I have checked and made absolutely certain that the world transform is constant. I'm following the method described by Zaei (I'm assuming from the signature) in the last post of this thread: http://www.gamedev.net/community/forums/topic.asp?topic_id=48728

Any ideas or suggestions? I am probably missing something very obvious and silly.

Thanks,
sayfadeen

[Edited by - sayfadeen on September 21, 2004 4:10:08 PM]

One obvious thing I have noticed is that the x,y coordinates of the untransformed mouse position do not match up with the x,y coordinates of the object that is under it. Even with a perspective view, shouldn't the coordinates match up? (I'm just thinking about it visually here: if I'm standing here facing directly at an object in the distance and we are aligned horizontally, doesn't that by definition mean that my "x coordinate" and its "x coordinate" should be the same?)

I have also noticed that the untransformed x,y of vecMouse and vecFar are different from each other, even though the original x,y is the same (ptCursor.x, ptCursor.y). Is this supposed to happen because of the projection/perspective? If it is, then clearly I'm missing something about how projection works (I'm going to RTFM again anyway). But if it is not supposed to happen, what should I do to scale the untransformed mouse coordinates so that, when the mouse cursor is over an object, the mouse x,y will match the object's x,y?
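One way to compare the two directly is to go the other way and project the object's world position into screen space with D3DXVec3Project, then see where it lands relative to the cursor (just a rough sketch, reusing the globals from the code above; vecObjWorld and vecObjScreen are names introduced here):

D3DXVECTOR3 vecObjWorld(g_x, g_y, g_z), vecObjScreen;

// Transform the test object's world position into viewport (pixel) space.
D3DXVec3Project(&vecObjScreen, &vecObjWorld, &vpScreen,
                &matProj, &matView, &matWorld);

// If the cursor is visually over the object, vecObjScreen.x/y should be
// close to ptCursor.x/ptCursor.y; vecObjScreen.z is the normalized
// depth in [0, 1].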

[Edited by - sayfadeen on September 21, 2004 3:01:30 PM]

Yeah, you should keep the projected x, y and z coords as you do.
What bothers me is this:

Quote:

D3DXVec3Unproject(&vecMouse, &vecMouse, &vpScreen, &matProj,
&matView, &matWorld);

vecMouse.z = 0.99f;

D3DXVec3Unproject(&vecFar, &vecMouse, &vpScreen, &matProj,
&matView, &matWorld);


You are overwriting vecMouse in the first call, then you are using it again to compute vecFar. Reverse the order of those calls and it should be "much better" ;)
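Another way to avoid the overwrite entirely is to unproject the near and far points into separate vectors (a minimal sketch only, reusing the globals from your snippet; vecScreenPos and vecNear are names I'm introducing here):

// Leave the screen-space cursor position alone and unproject the near
// and far points into their own vectors.
D3DXVECTOR3 vecScreenPos(float(ptCursor.x), float(ptCursor.y), 0.0f);
D3DXVECTOR3 vecNear, vecFar;

// z = 0: near plane in viewport depth
D3DXVec3Unproject(&vecNear, &vecScreenPos, &vpScreen,
                  &matProj, &matView, &matWorld);

// z = 1: far plane in viewport depth
vecScreenPos.z = 1.0f;
D3DXVec3Unproject(&vecFar, &vecScreenPos, &vpScreen,
                  &matProj, &matView, &matWorld);

// vecNear and vecFar now define the picking ray in world space.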

Also, your plane is aligned with the world's XY plane, not with the screen. What you could do to construct the plane you want is:

- take a vector (0.0f, 0.0f, 0.5f), meaning screen coordinates;
- unproject it; it will then represent the "center" (or so) of the camera frustum;
- subtract your camera's position from it and you get your camera's direction (or you can just take the camera's direction from somewhere else, which would be much simpler);
- normalize the direction and create the plane from that normal and (g_x, g_y, g_z) (see the sketch below).
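For example, a rough sketch of those steps (assumptions: vecNear/vecFar are the unprojected points from the corrected snippet above, and vecCameraPos already holds the camera's world-space position; I've also used the middle of the viewport rather than (0, 0), since D3DXVec3Unproject expects pixel coordinates, and an identity world matrix so the result lands in world space):

D3DXVECTOR3 vecFrustumMid((float)vpScreen.Width  * 0.5f,
                          (float)vpScreen.Height * 0.5f,
                          0.5f);   // middle of the viewport, mid depth
D3DXVECTOR3 vecViewDir, vecHit;
D3DXVECTOR3 vecObjPos(g_x, g_y, g_z);
D3DXPLANE   plPick;
D3DXMATRIX  matIdentity;

D3DXMatrixIdentity(&matIdentity);
D3DXVec3Unproject(&vecFrustumMid, &vecFrustumMid, &vpScreen,
                  &matProj, &matView, &matIdentity);

// Direction from the camera towards the middle of the frustum.
vecViewDir = vecFrustumMid - vecCameraPos;
D3DXVec3Normalize(&vecViewDir, &vecViewDir);

// Plane through the test object, facing along the view direction.
D3DXPlaneFromPointNormal(&plPick, &vecObjPos, &vecViewDir);

// Intersect the picking ray with that plane.
if (D3DXPlaneIntersectLine(&vecHit, &plPick, &vecNear, &vecFar) != NULL)
{
    // vecHit is the world-space point under the cursor on that plane.
}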

I don't know if I understood your problem right, but I hope it is of some help.
/def

Yeah, that helps, and thank you very much for your response. I'm still a bit unclear on a couple of points, being quite a newbie at this.

To get the camera position, should I just take the ._41, ._42, ._43 values from the view matrix (or possibly from the inverted matrix)? And would I just use D3DXVec3Normalize on the camera position? I've tried it these different ways with no success, so I'm probably doing it quite wrong.

[Edited by - sayfadeen on September 22, 2004 4:47:21 PM]

Quote:

should I just take the ._41, ._42, ._43 values from the view matrix

Just out of curiosity, where did you get your view matrix from? I always construct it from the camera's position and its rotation (which gives me its look direction).
Anyway, yes, you invert the matrix first: as you move in one direction, it's as if all your objects were moving in the opposite direction. If you rotate left, it's as if the world rotated right and you stood still the entire time (it is not _that_ simple, but taking translations and rotations independently, it can be done by hand like that).

Then you could take those values (the fourth row, the _41/_42/_43 elements). This is a _position_, so you don't want to normalize it.
Direction is another story. You could also acquire it from the inverted view matrix (its third row, _31/_32/_33, since it determines how your Z axis is transformed into the new Z axis, which is, in the end, just where the camera looks). You don't need to normalize it, as it is already normalized (if it weren't, your view would be somewhat stretched in the z direction).
But it would be much better (and cleaner) if you tracked the way your view matrix is _born_; then you would have all the needed values directly.
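For example, a rough sketch of pulling both out with D3DXMatrixInverse (the variable names are mine; the direction is only guaranteed to be unit length if the view matrix contains no scaling):

// Invert the view matrix to get the camera's world-space transform.
D3DXMATRIX matViewInv;
D3DXMatrixInverse(&matViewInv, NULL, &matView);

// Fourth row (_41, _42, _43): camera position in world space.
// It is a position, so it is not normalized.
D3DXVECTOR3 vecCameraPos(matViewInv._41, matViewInv._42, matViewInv._43);

// Third row (_31, _32, _33): where the camera's local Z axis ends up,
// i.e. the look direction (unit length as long as there is no scaling).
D3DXVECTOR3 vecViewDir(matViewInv._31, matViewInv._32, matViewInv._33);

// Plane through the test object, facing along the view direction.
D3DXVECTOR3 vecObjPos(g_x, g_y, g_z);
D3DXPLANE   plPick;
D3DXPlaneFromPointNormal(&plPick, &vecObjPos, &vecViewDir);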

Try it out.
/def

PS: I never, ever had to use the inverse of a matrix in any of my D3D programs. Not that I _avoided_ it or anything; it is just not necessary, IMO.

D3DXIntersectTri

You just give it a triangle, not a mesh. The rest is up to you, but I'm sure you can figure it out from here.
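For a quad, one way is to test its two triangles (a rough sketch only; the array vQuad[4] holding the quad's world-space corners in fan order, and vecNear/vecFar from the earlier unprojection, are assumptions of mine):

// Ray origin and direction from the unprojected near/far points.
D3DXVECTOR3 vecRayDir = vecFar - vecNear;
D3DXVec3Normalize(&vecRayDir, &vecRayDir);

FLOAT u, v, dist;
BOOL  bHit =
    // First triangle of the quad: corners 0, 1, 2.
    D3DXIntersectTri(&vQuad[0], &vQuad[1], &vQuad[2],
                     &vecNear, &vecRayDir, &u, &v, &dist) ||
    // Second triangle: corners 0, 2, 3.
    D3DXIntersectTri(&vQuad[0], &vQuad[2], &vQuad[3],
                     &vecNear, &vecRayDir, &u, &v, &dist);

if (bHit)
{
    // The quad is under the cursor; dist is the distance along the ray
    // (in world units, since the direction was normalized).
}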

Guest Anonymous Poster
Well, they both worked! I'm surprised there are two such different ways of doing the same thing.

Excellent work, I really appreciate the help. I was starting to despair for a minute there.

Kind regards,
Sayfadeen
