Posted 21 March 2006 - 06:38 AM
Posted 21 March 2006 - 11:26 PM
Quote:
User puts Left and Right fingers down about 300 pixels apart in the center of the screen. User drags right finger towards the top of the screen. That would cause a counter-clockwise rotation about the camera's z vector as well as translating the camera towards the globe (and I think a positive x axis post-translation, but it's hard to envision correctly).
as in:
[ cosθ -sinθ 0 0 ]
R_{z} =[ sinθ cosθ 0 0 ]
[ 0 0 1 0 ]
[ 0 0 0 1 ]
Quote:
User puts Left and Right fingers down about 300 pixels apart in the center of the screen. User drags both fingers away from each other (left finger -x, right finger +x). This would result in the camera translating towards the globe (zoom in). The general concept is that whatever lat/long coordinates the user clicks on remain under the respective fingers until "mouse up". This encompasses rotation and translation. It is very easy to approximate this behavior using an arbitrary value to rotate the proper direction, and translate the proper direction, but I am looking for an exact result.
Posted 22 March 2006 - 01:40 AM
Quote:
Original post by someusername
This can be achieved by multiplication of the view matrix V by the Z-rotation
Quote:
Original post by someusername
This is not "well defined".
Imagine the user touching two locations on the globe, which map onto the same horizontal line very close to the top of the monitor; now imagine that the user moves his fingers away from each other on the same parallel line. You want both points to remain under his fingers.
The problem is that, before long, these points would have to map outside the screen (pixel Y &lt; 0), since the effect you want is merely a zoom-in.
However, the user keeps both his fingers along the same parallel line very close to the top of the monitor, which effectively constrains the globe-point to remain on the very same line too.
Isn't there an ambiguity here?
Posted 22 March 2006 - 07:11 AM
Quote:
Original post by mfawcett
Quote:
Original post by someusername
This can be achieved by multiplication of the view matrix V by the Z-rotation
I think there also needs to be a post translation involved. For instance, if the user puts the Left Finger and Right Finger down near the top of the globe (300 pixels apart) and doesn't move his Left Finger, and sort of draws a circle around it with his Right Finger, the rotation should happen about the Left Finger, not the center of the screen.
[ 1 0 0 0 ]
[ 0 1 0 dy ]
[ 0 0 1 0 ]
[ 0 0 0 1 ]
where dy = LF.y*ProjectionMatrix(2,2). It may need to be divided by 2, I'm not 100% sure.
Quote:
Original post by mfawcett
If he moves his Right Finger up and his Left Finger down, I think that would rotate about the center point between the fingers at twice the rate.
Quote:
Original post by mfawcett
The result you describe would happen if I were to merely translate the camera down its z-vector, but that is basically zooming in to the center.
Quote:
User drags both fingers away from each other (left finger -x, right finger +x). This would result in the camera translating towards the globe (zoom in)
Posted 22 March 2006 - 07:34 AM
Quote:
Original post by someusername
Ah, indeed. If you want the rotation to be performed around LF
Quote:
Original post by someusername
Isn't it more intuitive to zoom in, in such a case? Like when moving the fingers apart horizontally?
Quote:
Original post by someusername
I thought you *wanted* to translate the camera along its Z in this case, since you posted
Quote:
Original post by someusername
Let me give this some more thought, and I'll get back...
Quote:
Original post by someusername
Are you planning on implementing this?
What *is* this "input system" about, anyway?
Posted 22 March 2006 - 09:26 AM
Posted 22 March 2006 - 10:30 AM
// UV texture space / vertex winding order
// _________
// v1|1 4|
// | |
// | |
// | |
// v0|2_______3|
// u0 u1
#ifndef POINT_3
#define POINT_3
#include <cmath>
class point_3
{
public:
float x, y, z;
point_3(void);
point_3(const float &x, const float &y, const float &z);
point_3 operator-(void);
void zero(void);
void normalize(void);
void rotate_x(const float &radians);
void rotate_y(const float &radians);
};
#endif
#include "point_3.h"
point_3::point_3(void)
{
zero();
}
point_3::point_3(const float &x, const float &y, const float &z)
{
this->x = x;
this->y = y;
this->z = z;
}
point_3 point_3::operator-(void)
{
return point_3(-this->x, -this->y, -this->z);
}
void point_3::zero(void)
{
x = y = z = 0.0f;
}
void point_3::normalize(void)
{
float len = sqrt(x*x + y*y + z*z);
if(len == 0.0f)
return; // avoid division by zero for the null vector
x /= len;
y /= len;
z /= len;
}
void point_3::rotate_x(const float &radians)
{
float t_y = y;
y = t_y*cos(radians) + z*sin(radians);
z = t_y*-sin(radians) + z*cos(radians);
}
void point_3::rotate_y(const float &radians)
{
float t_x = x;
x = t_x*cos(radians) + z*-sin(radians);
z = t_x*sin(radians) + z*cos(radians);
}
#ifndef UV_RIG_H
#define UV_RIG_H
// uv_rig.h::Fig. 1
//
// UV camera rig
//
// u sweeps latitude (-PI_HALF .. PI_HALF)
// v sweeps longitude (0 .. PI_2)
// w is the orbit radius from the look-at point
//
#include "point_3.h"
#include <cfloat>
#define PI_HALF (1.5707963267f)
#define PI (3.1415926535f)
#define PI_2 (6.2831853071f) // two PI (a full circle), not PI/2
#define RAD_TO_DEG_COEFFICIENT (57.2957795f)
class uv_rig
{
protected:
float u; // -PI_HALF ... PI_HALF
float v; // 0 ... PI_2
float w;
int ip_width;
int ip_height;
float ip_fov;
public:
// eye location, after rotation*translation
point_3 eye;
// look_at unit vector, after rotation
point_3 look_at;
// up and right unit vectors, after rotation
point_3 up;
point_3 right;
// image plane look-at unit vectors, after rotation
point_3 u0v1, u0v0, u1v0, u1v1;
public:
uv_rig(void);
void SetUVW(const float u, const float v, const float w);
void GetUVW(float &u, float &v, float &w) const;
void SetImagePlaneConfig(const int &width, const int &height, const float &fov);
void GetImagePlaneConfig(int &width, int &height, float &fov) const;
inline float GetIPFOV(void) const { return ip_fov; }
inline int GetIPWidth(void) const { return ip_width; }
inline int GetIPHeight(void) const { return ip_height; }
inline float GetU(void) const { return u; }
inline float GetV(void) const { return v; }
inline float GetW(void) const { return w; }
protected:
void Transform(void);
void Reset(void);
void Rotate(void);
void Translate(void);
void ConstructImagePlane(void);
};
#endif
#include "uv_rig.h"
uv_rig::uv_rig(void)
{
u = v = 0.0f;
w = 5.0f;
ip_width = 1;
ip_height = 1;
ip_fov = PI/4.0f; // 1/8th of a circle
Transform();
}
void uv_rig::SetUVW(const float u_radians, const float v_radians, const float w_units)
{
u = u_radians;
v = v_radians;
w = w_units;
static float gimbal_lock_buffer = FLT_EPSILON * 1E3;
if(u < -PI_HALF + gimbal_lock_buffer)
u = -PI_HALF + gimbal_lock_buffer;
else if(u > PI_HALF - gimbal_lock_buffer)
u = PI_HALF - gimbal_lock_buffer;
while(v < 0.0f)
v += PI_2;
while(v > PI_2)
v -= PI_2;
if(w < 0.0f)
w = 0.0f;
else if(w > 10000.0f)
w = 10000.0f;
Transform();
}
void uv_rig::GetUVW(float &u_radians, float &v_radians, float &w_units) const
{
u_radians = u;
v_radians = v;
w_units = w;
}
void uv_rig::SetImagePlaneConfig(const int &width, const int &height, const float &fov)
{
if(width < 1)
ip_width = 1;
else
ip_width = width;
if(height < 1)
ip_height = 1;
else
ip_height = height;
if(fov < 1.0f)
ip_fov = 1.0f;
else if(fov > PI_2 - 1.0f)
ip_fov = PI_2 - 1.0f;
else
ip_fov = fov;
Transform();
}
void uv_rig::GetImagePlaneConfig(int &width, int &height, float &fov) const
{
width = ip_width;
height = ip_height;
fov = ip_fov;
}
void uv_rig::Transform(void)
{
Reset();
Rotate();
Translate();
}
void uv_rig::Reset(void)
{
eye.zero();
look_at.zero();
up.zero();
right.zero();
// eye.x += translate_u;
// eye.y += translate_v;
look_at.z = -1.0f;
up.y = 1.0f;
right.x = 1.0f;
ConstructImagePlane();
}
void uv_rig::Rotate(void)
{
// rotate about the world x axis
look_at.rotate_x(u);
up.rotate_x(u);
u0v1.rotate_x(u);
u0v0.rotate_x(u);
u1v0.rotate_x(u);
u1v1.rotate_x(u);
// rotate about the world y axis
look_at.rotate_y(v);
up.rotate_y(v);
right.rotate_y(v);
u0v1.rotate_y(v);
u0v0.rotate_y(v);
u1v0.rotate_y(v);
u1v1.rotate_y(v);
}
void uv_rig::Translate(void)
{
// place the eye directly across the sphere from the look-at vector's "tip",
// then scale the sphere radius by w
eye.x = -look_at.x*w;
eye.y = -look_at.y*w;
eye.z = -look_at.z*w;
look_at.x = 0.0;
look_at.y = 0.0;
look_at.z = 0.0;
}
void uv_rig::ConstructImagePlane(void)
{
// uv_rig.cpp::ConstructImagePlane::Fig. 1
//
// split the frustum down the middle using a plane that is parallel to the shorter sides
//
// ___a___________
// |\ | /|
// | \ | / |
// | \ R| / |b
// |______\ /______|
//
// R = field of view / 2.0 (radians)
// a = tan(R) (units)
// b = a * s/l (units)
// s = shortest side (pixels)
// l = longest side (pixels)
float ip_half_w = 0.0f;
float ip_half_h = 0.0f;
if(ip_width >= ip_height)
{
ip_half_w = tan(0.5f*ip_fov*(static_cast<float>(ip_width - 1)/static_cast<float>(ip_width)));
ip_half_h = ip_half_w*(static_cast<float>(ip_height)/static_cast<float>(ip_width));
}
else
{
ip_half_h = tan(0.5f*ip_fov*(static_cast<float>(ip_height - 1)/static_cast<float>(ip_height)));
ip_half_w = ip_half_h*(static_cast<float>(ip_width)/static_cast<float>(ip_height));
}
u0v1.x = -ip_half_w;
u0v1.y = ip_half_h;
u0v1.z = look_at.z;
u0v0.x = -ip_half_w;
u0v0.y = -ip_half_h;
u0v0.z = look_at.z;
u1v0.x = ip_half_w;
u1v0.y = -ip_half_h;
u1v0.z = look_at.z;
u1v1.x = ip_half_w;
u1v1.y = ip_half_h;
u1v1.z = look_at.z;
}
Posted 22 March 2006 - 12:02 PM
Quote:
Original post by mfawcett
If the LF is stationary that is the case. What if both move? Do you envision the rotation happening about the midpoint between LF and RF?
Quote:
Original post by mfawcett
After some time using the touch table, you start to realize that what you really want is the points under your fingers to remain under your fingers.
Quote:
Original post by mfawcett
Let R = rotation((RF1 - LF1), (RF2 - LF2))
Let Cd = magnitude(C)
Let Sf = mag(RF2 - LF2) / mag(RF1 - LF1)
Let I1 = the intersection of C's z-vector and the globe
Let VLF = I1 - LF2
VLF /= Sf
Rotate VLF by inverse(R)
Let I2 = VLF + LF1
C's new position = I2 * (Cd / Sf) ?
C's new z-vector = normalize(I2)
C's new rotation = R
Posted 23 March 2006 - 01:33 AM
// I assume V_{0} is the view matrix when the user first started rotating, V is the current view matrix, P is the projection matrix and φ is the total angle to rotate by. SPivotX, SPivotY are assumed to be the "screen-space" coordinates of the pivot point, expressed in {-1,1}, increasing to the right and upwards.
// find the pivot point
dx = SPivotX*P(1,1)/2
dy = SPivotY*P(2,2)/2
dz = z_{near}
// Calculate the translation to the pivot point. (dx, dy, z_{near}) are the camera-space coordinates of the pivot.
[ 1 0 0 -dx ]
T = [ 0 1 0 -dy ]
[ 0 0 1 -dz ]
[ 0 0 0 1 ]
// The rotation matrix
[cos(φ) -sin(φ) 0 0 ]
R_{Z} = [sin(φ) cos(φ) 0 0 ]
[ 0 0 1 0 ]
[ 0 0 0 1 ]
// Calculate the new view matrix
V = T^{-1}*R_{Z}*T*V_{0}
// We always perform on V_{0}, so it's not *that* necessary to re-orthogonalize the view matrix, but it wouldn't hurt...
// V_{0} is the original view matrix, V is the current view matrix, P is the projection matrix. SPivotX, SPivotY are assumed to be the "screen-space" coordinates of the pivot point (midpoint of the two fingers), expressed in {-1,1}, increasing to the right and upwards. Sf is the scale factor, and should be given by the initial distance of the fingers, divided by their current distance. Sx, Sy, Sz will be the scaled position of the camera.
// find the pivot point
dx = SPivotX*P(1,1)/2
dy = SPivotY*P(2,2)/2
dz = z_{near}
// The camera position in that frame is -(dx, dy, z_{near})
// Scale the position of the camera to the desired ratio. (ignore the minus sign above)
Sx = Sf*dx
Sy = Sf*dy
Sz = Sf*dz
// Hardcode the new position in the view matrix. v1, v2, v3 are the vectors-rows of the view matrix. (the camera's local axes) Ignore the 4th component
V(1,4) = - dot{ (Sx,Sy,Sz), v1 }
V(2,4) = - dot{ (Sx,Sy,Sz), v2 }
V(3,4) = - dot{ (Sx,Sy,Sz), v3 }
// That's it. The view matrix should be ready.
[ 1 0 0 0 ]
[ 0 1 0 0 ]
[ 0 0 1 0 ]
[ 0 0 0 1/Sf ]
but I can't guarantee that. It seems like it should work though...
Quote:
V(4,1) = - dot{ (Sx,Sy,Sz), v1 }
V(4,2) = - dot{ (Sx,Sy,Sz), v2 }
V(4,3) = - dot{ (Sx,Sy,Sz), v3 }
Posted 23 March 2006 - 04:54 AM
Quote:
Original post by someusername
I don't know whether I'll be able to help you anymore with this, but if you have any questions/feedback, you know where to post :)
Posted 23 March 2006 - 05:22 AM
Quote:
I don't know whether I'll be able to help you anymore with this, [...]