Quaternions and Lat/Lon



#1 Jack Smith   Members   -  Reputation: 100


Posted 20 May 2011 - 09:40 PM

Hi all,

For the past few days I have been struggling with a problem concerning a graphics program I am writing. It displays a 3d sphere with lat/lon coordinates for the user to see. An overlay of the US is also shown, with true 3d mapping. Everything works up to this point.

Here is what I have:

* The globe can be zoomed in and out by adjusting the horizontal field of view angle
* The globe can also be rotated
* The latitude and longitude are shown for the point that the screen is centered on (this works correctly)

Now, here is what I want to have:

* Show the latitude and longitude for the point that the mouse cursor is on (having issues with this)

Now, this is not like in other posts where people ask, how do I unproject a 2d point (the mouse coordinates) to 3d. No. This is different. Instead, I am asking about retrieving the latitude and longitude at the mouse coordinates, and this is feasible since we know the size of the sphere in its 3d space as well as the x,y coordinates and field of view.

So, here is what I have done so far.

I have managed to correctly calculate the change in horizontal angle and the change in vertical angle from the center. This is done using hx = tan(fov/2) * d, where d is the distance to the center of the sphere. hx gives me the horizontal extent of the view at the current field of view (which is necessary since, again, the sphere can be zoomed in and out). I then take arcsin(((cursor.x-400)*(hx/400))/SPHERE_RADIUS) to get the change in horizontal angle from the mouse cursor's horizontal deviation from the center of the screen.
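
In code, the idea is roughly this (a sketch only; fov is in radians, 400 is the x coordinate of the screen center in my setup, and SPHERE_RADIUS is the same constant mentioned above):

#include <math.h>

// Change in horizontal angle (radians) for a given cursor x position.
// d is the distance from the camera to the center of the sphere.
float HorizontalAngleChange(float cursorX, float fov, float d)
{
	float hx = tanf(fov / 2.0f) * d;               // half-width of the view at the sphere's distance
	float dx = (cursorX - 400.0f) * (hx / 400.0f); // cursor offset converted to world units
	return asinf(dx / SPHERE_RADIUS);              // change in horizontal angle from screen center
}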

So you might think, okay, he's got the latitude and longitude of what the screen is centered on, as well as the change in vertical and horizontal angles. So what's the problem? Can't he just subtract the horizontal angle change from the longitude, and similar for latitude?

As it turns out, it's not that simple. For example, when the user is looking directly at latitude 90 degrees (the North Pole), any mouse deviation from the center of the screen changes the latitude, even if only horizontal mouse movement has taken place. But when looking at latitude 0 degrees (the equator), horizontal mouse movement only affects longitude.

Herein lies the problem: what to do when the user is looking somewhere between those two extremes, say latitude 45 degrees? I tried mixing the two by using cos(latitude)*(change in vertical) and adding to that sin(latitude)*(a combination of latitude and longitude), and eventually got it working at lat=0 and lat=90, but never for anything in between.

Going back to how I arrived at displaying the latitude and longitude of the centered point in the first place: I use a quaternion for the rotation:

// Get orig quaternion
    	LatLonToQuaternion(g_angRotateX, g_angRotateY, x, y, z, w);

That gives me a quaternion from my centered position's latitude and longitude. Yes, the lat/lon are flipped in the function's input, but I have verified that the output quaternion is correct.

So then, now that I have a quaternion... I should be able to multiply that by my change in latitude and longitude, yes?

// Rotate by lonX
    	RotationXYZToQuaternion(0.0f, lonX, 0.0f, tx, ty, tz, tw);
    	QuaternionMultiply(tx, ty, tz, tw, x, y, z, w, ux, uy, uz, uw);
    	x=ux;
    	y=uy;
    	z=uz;
    	w=uw;

    	// Rotate by latX
    	RotationXYZToQuaternion(latX, 0.0f, 0.0f, tx, ty, tz, tw);
    	QuaternionMultiply(tx, ty, tz, tw, x, y, z, w, ux, uy, uz, uw);
    	QuaternionToLatLon(ux, uy, uz, uw, lonF, latF);


The above code uses a function called RotationXYZToQuaternion to give me a quaternion from a given X,Y,Z rotation. So I first get a quaternion for the change in horizontal angle. Then I multiply that quaternion by the original quaternion that I derived from the lat/lon at the center of the screen. Next, I do the same thing for the change in vertical angle, and finally convert the result back into a latitude and longitude.

In this case the above code works at lat=0, but not at lat=90 or anywhere else: I do not get back the expected latitude and longitude for where the user is pointing the mouse cursor.

So, this is where I am after about four days of struggling with it. I have managed to get this far, but am really stuck.

Basically it boils down to this problem:

Given:

x=longitude at screen center
y=latitude at screen center
qx=change in the sphere's horizontal angle, as calculated from the deviation of the mouse cursor's x position from screen center (*not the same thing as change in longitude, unless lat=0)
qy=change in the sphere's vertical angle, as calculated from the deviation of the mouse cursor's y position from screen center (*also not the same as change in latitude, unless lat=0)

Determine the new latitude and longitude.

So I ask: can anyone help me solve this problem? What do I do when the user is not centered at latitude=0? How are the changes in vertical and horizontal angle applied when latitude=30 degrees, for example?

Any help would be immensely appreciated!

Thanks.

#2 scgames   Members   -  Reputation: 1969


Posted 20 May 2011 - 11:00 PM

I admit that I'd have to read that again (and maybe again after that ;) in order to understand what it is you're doing, but meanwhile, let me ask this. If the objective is to determine the latitude and longitude corresponding to the cursor position, why not simply raycast against the sphere and compute the latitude and longitude from the intersection point?

#3 haegarr   Crossbones+   -  Reputation: 3728


Posted 21 May 2011 - 02:21 AM

...
Now, this is not like in other posts where people ask, how do I unproject a 2d point (the mouse coordinates) to 3d. No. This is different. Instead, I am asking about retrieving the latitude and longitude at the mouse coordinates, and this is feasible since we know the size of the sphere in its 3d space as well as the x,y coordinates and field of view.
...

Well, it is AFAIS not really different; it just needs 2 more steps. So I second jyk's implicit suggestion:

1. Compute a ray in global space that describes all locations that are projected onto the pixel covered by the mouse pointer's hot spot.
2. Transform the ray using the inverse of the local-to-global transformation of the sphere.
3. Compute the nearest intersection point of the ray with the sphere. This gives you a Cartesian co-ordinate.
4. Convert the Cartesian co-ordinate into a spherical one.
5. Adapt the resulting angles to yield the latitude and longitude.

The above should be sufficient as long as a sphere is used. When using an ellipsoid, a non-uniform scaling incorporated into the object's transformation should do the trick, although the terms longitude / latitude then become ambiguous; you have to decide which kind of longitude / latitude to use. (But if you want to use a spherical harmonics series to approximate the real shape of the earth ...)
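
A minimal sketch of steps 3 to 5 could look like this (plain C++; it assumes a sphere of radius r centred at the origin, a unit-length ray direction, a viewer outside the sphere, and the y axis as the polar axis; adjust for your own conventions):

#include <math.h>

// Intersect a pick ray (origin o, unit direction d, already transformed
// into the sphere's local space) with a sphere of radius r centred at the
// origin, and convert the nearest hit point to latitude/longitude.
bool RaySphereLatLon(const float o[3], const float d[3], float r,
                     float &latDeg, float &lonDeg)
{
	// Step 3: solve |o + t*d|^2 = r^2 for t (a = d*d = 1 for a unit direction)
	float b = 2.0f * (o[0]*d[0] + o[1]*d[1] + o[2]*d[2]);
	float c = o[0]*o[0] + o[1]*o[1] + o[2]*o[2] - r*r;
	float disc = b*b - 4.0f*c;
	if (disc < 0.0f)
		return false;                     // ray misses the sphere
	float t = (-b - sqrtf(disc)) / 2.0f;  // nearest of the two intersections
	if (t < 0.0f)
		return false;                     // sphere is behind the viewer

	// Step 4: Cartesian hit point -> spherical angles
	float px = o[0] + t*d[0];
	float py = o[1] + t*d[1];
	float pz = o[2] + t*d[2];
	float polar   = acosf(py / r);        // angle from the +y pole, range [0, pi]
	float azimuth = atan2f(pz, px);       // angle around the pole, range (-pi, pi]

	// Step 5: adapt the angles to latitude/longitude in degrees
	const float RAD2DEG = 180.0f / 3.14159265f;
	latDeg = 90.0f - polar * RAD2DEG;
	lonDeg = azimuth * RAD2DEG;
	return true;
}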

#4 Jack Smith   Members   -  Reputation: 100


Posted 21 May 2011 - 10:14 AM

haegarr, jyk:

Thank you for your replies. From what I understand, ray intersection functions provided by Direct3D (the graphics library I am using) work with polygons. So for example, my sphere would need to be made out of (say, triangles) to compute the intersection from the ray to the sphere.

Is this correct?

See, my "sphere" is actually just the shape to which my 3d map of line segments conforms. In other words, I do not have any triangles that would intersect the ray. And even if I did, I think I would lose accuracy, because the sphere would need to be made of trillions of triangles before I could tell with any precision which lat/lon the mouse cursor is on.

So, if I do go the route of ray intersection... it would be nice, I admit, since maybe that will get me what I want. But I need some help implementing it.

What I gather at this point is that I can get a 3d ray, and would need to come up with the Cartesian coordinate at which it intersects with the sphere. So if I know the radius of the sphere in its 3d space, and am given a 3d ray, how would I get these coordinates of intersection?

Thanks again.

#5 scgames   Members   -  Reputation: 1969


Posted 21 May 2011 - 11:21 AM

From what I understand, ray intersection functions provided by Direct3D (the graphics library I am using) work with polygons. So for example, my sphere would need to be made out of (say, triangles) to compute the intersection from the ray to the sphere.

Is this correct?

Nope; you can compute the intersection with the sphere directly (you don't have to use a mesh representation).

So if I know the radius of the sphere in its 3d space, and am given a 3d ray, how would I get these coordinates of intersection?

Google/search for (e.g.) 'ray sphere intersection', 'line sphere intersection', 'sphere raytrace', or 'sphere raycast'. (The algorithm is straightforward and is well documented online.)

#6 Jack Smith   Members   -  Reputation: 100


Posted 23 May 2011 - 02:13 PM


From what I understand, ray intersection functions provided by Direct3D (the graphics library I am using) work with polygons. So for example, my sphere would need to be made out of (say, triangles) to compute the intersection from the ray to the sphere.

Is this correct?

Nope; you can compute the intersection with the sphere directly (you don't have to use a mesh representation).

So if I know the radius of the sphere in its 3d space, and am given a 3d ray, how would I get these coordinates of intersection?

Google/search for (e.g.) 'ray sphere intersection', 'line sphere intersection', 'sphere raytrace', or 'sphere raycast'. (The algorithm is straightforward and is well documented online.)


Thank you so much! I have now come up with a function that gives me the latitude and longitude of the point on the sphere where the user is pointing the cursor. In case anyone is interested:

BOOL FindIntersection(POINT &curPos, D3DVIEWPORT9 &vpMap, D3DXMATRIX &matProjection, D3DXMATRIX &matView,
	D3DXMATRIX &matWorld, FLOAT &lat, FLOAT &lon)
{
	D3DXMATRIX m;
	D3DXVECTOR3 rayOrigin;
	D3DXVECTOR3 rayDir;
	D3DXVECTOR3 p1;
	D3DXVECTOR3 p2;
	D3DXVECTOR3 v1;
	D3DXVECTOR3 v2;
	D3DXVECTOR3 v;
	FLOAT discriminant;
	FLOAT theta;
	FLOAT phi;
	FLOAT rho;
	FLOAT S;
	FLOAT a;
	FLOAT b;
	FLOAT c;
	FLOAT t1;
	FLOAT t2;
	bool bDoesIntersect;

	// Hard-coded viewport size used by this application
	vpMap.Width=792;
	vpMap.Height=546;

	// Unproject the cursor position at the near (z=0) and far (z=1) planes
	// to get two points on the pick ray in the sphere's space
	D3DXVECTOR3 inP1(curPos.x, curPos.y, 0.0f);
	D3DXVec3Unproject(&rayOrigin, &inP1, &vpMap, &matProjection, &matView, &matWorld);
	D3DXVECTOR3 inP2(curPos.x, curPos.y, 1.0f);
	D3DXVec3Unproject(&v2, &inP2, &vpMap, &matProjection, &matView, &matWorld);

	// Normalized direction of the pick ray
	D3DXVec3Normalize(&rayDir, D3DXVec3Subtract(&v, &rayOrigin, &v2));

	// p = td + p0
	// a = d*d
	// b = 2d*(p0-pc)
	// c = (p0-pc)*(p0-pc)-r^2

	a = D3DXVec3Dot(&rayDir, &rayDir);
	b = D3DXVec3Dot(D3DXVec3Scale(&v, &rayDir, 2), &rayOrigin);
	c = D3DXVec3Dot(&rayOrigin, &rayOrigin) - pow(SPHERE_SIZE, 2);

	// Calculate the discriminant of the quadratic
	discriminant = pow(b, 2) - 4*a*c;

	if (discriminant >= 0)
	{
		// There is at least one point of intersection
		t1 = (-b + sqrt(discriminant)) / (2*a);
		t2 = (-b - sqrt(discriminant)) / (2*a);

		// Plug into p = td + p0 and solve for p
		D3DXVec3Add(&p1, D3DXVec3Scale(&v, &rayDir, t1), &rayOrigin);
		D3DXVec3Add(&p2, D3DXVec3Scale(&v, &rayDir, t2), &rayOrigin);

		a = sqrt(pow(p1.x-rayOrigin.x,2)+pow(p1.y-rayOrigin.y,2)+pow(p1.z-rayOrigin.z,2));
		b = sqrt(pow(p2.x-rayOrigin.x,2)+pow(p2.y-rayOrigin.y,2)+pow(p2.z-rayOrigin.z,2));

		if (a < b)
		{
			// p1 is closest to viewer
			v.x = p1.x;
			v.y = p1.y;
			v.z = p1.z;
		}
		else
		{
			// p2 is closest to viewer
			v.x = p2.x;
			v.y = p2.y;
			v.z = p2.z;
		}

		// Now convert from Cartesian to spherical coordinates
		rho = sqrt(pow(v.x,2)+pow(v.z,2)+pow(v.y,2));
		S = sqrt(pow(v.x,2)+pow(v.z,2));
		phi = acos(v.y/rho);
		if (v.x >= 0)
		{
			theta = asin(v.z/S);
		}
		else
		{
			theta = PI - asin(v.z/S);
		}

		if (theta >= PI)
		{
			theta -= 2*PI;
		}

		bDoesIntersect = true;
		lat = phi;
		lon = theta;
	}
	else
	{
		// Cursor does not intersect with sphere
		bDoesIntersect = false;
	}

	return bDoesIntersect;
}

Note that the vector 'v' has the y and z axes flipped. That is just how it works in my 3d world.

Once I get the lat/lon, which are returned in radians, I convert them to degrees and apply the appropriate offsets (in my case I subtract 90 degrees from the latitude, and if the longitude exceeds 180 I subtract 360 from it).
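
Roughly, that post-processing is just a couple of lines (the offsets are specific to my coordinate setup; PI is the same constant used in the function above):

// Convert the returned radians to degrees and apply my offsets
lat = lat * 180.0f / PI - 90.0f;   // latitude offset for my coordinate setup
lon = lon * 180.0f / PI;
if (lon > 180.0f)                  // wrap longitude into (-180, 180]
	lon -= 360.0f;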

In any case the above code is specific to my software, but could be adapted to someone else's application if they needed it.

So now, regardless of zoom level and rotation, the user can see the latitude and longitude that they have the cursor pointed at. "No intersection" is displayed whenever the cursor hovers outside the sphere. So it works perfectly.

Thanks again!!! :lol:



