

Member Since 01 Mar 2010
Offline Last Active Jan 09 2015 09:52 PM

Posts I've Made

In Topic: The Problems about the FPS Camera

21 March 2013 - 05:37 AM




Hi everyone,

   I am a novice in 3D math. Currently I am struggling with a problem about the rotation of an FPS camera; I hope someone can help me.

   Please allow me to assume that the discussion is based on OpenGL conventions.

   Suppose the FPS camera starts in an initial state in which the three axes of the camera are aligned with the axes of the world frame.


   I first pitch the camera around the x axis of the world frame, and then yaw it around the y axis of the world frame, so I get the following equation (column-major):

   camera_orientation_matrix_1 = yaw_yworld_matrix * pitch_xworld_matrix = rotateY(yaw_angle) * rotateX(pitch_angle);


   Some references informed me that I can obtain the same orientation using the local rotation matrices, but applied in the reverse order.

   So I first yaw the camera around the local y axis of the camera, then pitch it around the local x axis of the camera,

i.e.: camera_orientation_matrix_2 = pitch_xlocal_matrix * yaw_ylocal_matrix = rotateX(pitch_angle) * rotateY(yaw_angle);


   Since camera_orientation_matrix_1 equals camera_orientation_matrix_2, I derive the following:

        rotateY(yaw_angle) * rotateX(pitch_angle) === rotateX(pitch_angle) * rotateY(yaw_angle);

   But the above equation is a fallacy. Can anyone tell me where I went wrong? Thanks!


From Wikipedia (general case): For n greater than 2, multiplication of n×n rotation matrices is not commutative.



Hi Ravnock, thanks for your reply.

I don't know where my inference above goes wrong; I hope someone can pinpoint it for me.
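One way to see where the inference breaks: world-axis rotations pre-multiply, while local-axis (intrinsic) rotations post-multiply, so yaw-then-pitch about the camera's local axes is again rotateY * rotateX, not rotateX * rotateY. A minimal NumPy sketch (my own illustration, not from the thread) checks this numerically:

```python
import numpy as np

def rotate_x(a):
    # Column-major rotation about the x axis by angle a (radians).
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,  -s],
                     [0.0,   s,   c]])

def rotate_y(a):
    # Column-major rotation about the y axis by angle a (radians).
    c, s = np.cos(a), np.sin(a)
    return np.array([[  c, 0.0,   s],
                     [0.0, 1.0, 0.0],
                     [ -s, 0.0,   c]])

pitch, yaw = np.radians(30.0), np.radians(45.0)

# Pitch about world x, then yaw about world y: world-axis rotations
# pre-multiply, so the later rotation goes on the left.
m_world = rotate_y(yaw) @ rotate_x(pitch)

# Yaw first, then pitch about the camera's LOCAL x axis. Rotating about
# the rotated axis is a conjugation, applied after the first rotation:
r1 = rotate_y(yaw)
m_local = (r1 @ rotate_x(pitch) @ r1.T) @ r1   # simplifies to r1 @ rotate_x(pitch)

print(np.allclose(m_world, m_local))                          # True
print(np.allclose(m_world, rotate_x(pitch) @ rotate_y(yaw)))  # False
```

So both procedures produce rotateY * rotateX; the assumption that the local-axis version equals rotateX * rotateY is where the derivation goes wrong, and the two matrix orders are indeed not equal in general.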

In Topic: the library computing intersection between a plane and a mesh

09 November 2012 - 12:47 AM

Thank you for the reply.
Could you tell me which classes or modules in CGAL I should use, since CGAL is quite a complex library? Thanks!
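As background for whichever library ends up being used, the core computation a mesh slicer performs is a plane–triangle intersection for each face, collecting the resulting segments into polylines. Here is a minimal NumPy sketch of that per-triangle step (my own illustration; the function name is made up and is not a CGAL API):

```python
import numpy as np

def plane_triangle_intersection(n, d, tri, eps=1e-9):
    """Return a pair of points where the plane n·x + d = 0 cuts the
    triangle `tri` (a 3x3 array of vertices), or None if it misses."""
    dist = tri @ n + d               # signed distance of each vertex
    pts = []
    for i in range(3):
        a, b = tri[i], tri[(i + 1) % 3]
        da, db = dist[i], dist[(i + 1) % 3]
        if abs(da) < eps:            # vertex lies on the plane
            pts.append(a)
        elif da * db < 0:            # edge crosses the plane
            t = da / (da - db)       # linear interpolation parameter
            pts.append(a + t * (b - a))
    # Drop nearly-identical points (e.g. a shared on-plane vertex).
    uniq = []
    for p in pts:
        if not any(np.allclose(p, q) for q in uniq):
            uniq.append(p)
    return (uniq[0], uniq[1]) if len(uniq) == 2 else None
```

Running the full mesh through this and chaining the segments by shared endpoints yields the cross-section contours that a slicing library computes internally.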

In Topic: the precision problem in 16 bit Z-Buffer

14 September 2012 - 08:05 PM

MJP, MrRowl, thanks for your replies.
I don't really understand why the w-compensation method suggested by MrRowl would have a negative effect on rasterization. Could someone clarify that for me? Thanks!
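To make the underlying precision issue concrete: the post-projection depth value is hyperbolic in eye-space distance, so most of a 16-bit buffer's codes land near the near plane. A small Python sketch with a standard OpenGL-style perspective mapping and illustrative clip planes (numbers are my own, not from the thread):

```python
near, far = 0.1, 1000.0

def depth01(z):
    # Perspective depth remapped from NDC [-1, 1] to [0, 1]:
    # d = (1/near - 1/z) / (1/near - 1/far), hyperbolic in z.
    return (1.0 / near - 1.0 / z) / (1.0 / near - 1.0 / far)

def eye_z(d):
    # Inverse mapping: eye-space distance for a [0, 1] depth value.
    return 1.0 / (1.0 / near - d * (1.0 / near - 1.0 / far))

# A 16-bit buffer has 65536 distinct depth codes. The first half of all
# codes (d in [0, 0.5]) covers only the eye-space range up to:
print(eye_z(0.5))   # ~0.2: half the codes are spent on [0.1, 0.2]

# Eye-space span of the single last depth code before the far plane:
step = eye_z(1.0) - eye_z(1.0 - 1.0 / 65535.0)
print(step)         # ~132 units near z = 1000
```

This is why tricks that redistribute precision (pulling the near plane out, or reshaping the depth mapping) matter so much for 16-bit buffers, while they can interact with the hardware's interpolation assumptions during rasterization.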

In Topic: the precision problem in 16 bit Z-Buffer

14 September 2012 - 12:00 AM

Hi Ashaman73,
Some browsers only provide a 16-bit depth buffer for WebGL applications.

In Topic: Quaternion from latitude and longitude

16 June 2012 - 02:39 AM

Thanks for all the replies!

Actually, I want to orient a camera in a virtual globe application using OpenGL. In this app the globe camera is described by latitude, longitude, and azimuth. (The lat/lon are not the position of the camera; they are the viewing target of the camera. There are some other parameters, such as the distance from the viewing target and the tilt.)

At the beginning, the camera is aligned with the world coordinate frame (the viewing direction of the camera is toward the world -Z axis).

When the user specifies lat, lon, and azimuth angles, the camera should be targeted at the specified lat/lon from space at some distance.

Given a (longitude, latitude) as the viewing target of the camera, I first rotate the camera by the longitude angle around the local Z axis.

Then the camera is rotated by the latitude angle around the local X axis of the camera.

So the quaternion of the orientation is Q' = Qlat * Qlon. The angles in Qlat and Qlon were halved before constructing the quaternions.

I have used Q' in the app, but the result is incorrect.

Is there any problem with using a quaternion to orient a camera described by latitude and longitude? I hope someone can pinpoint the problem in my method.

Thank you very much!
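One thing worth checking in a setup like this is the quaternion composition order: with the v' = q v q⁻¹ convention, a rotation about a LOCAL axis applied second multiplies on the right, so Qlat * Qlon applies the latitude rotation about the world X axis rather than the camera's local X axis. A small NumPy sketch (my own illustration, hypothetical helper names) shows the two orders give different orientations:

```python
import numpy as np

def quat_axis_angle(axis, angle):
    # Unit quaternion (w, x, y, z) for a rotation by `angle` (radians)
    # about `axis`; the half-angle is applied here.
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    h = 0.5 * angle
    return np.concatenate(([np.cos(h)], np.sin(h) * axis))

def quat_mul(a, b):
    # Hamilton product a * b, both in (w, x, y, z) order.
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    ])

lon, lat = np.radians(120.0), np.radians(35.0)
q_lon = quat_axis_angle([0.0, 0.0, 1.0], lon)   # about Z
q_lat = quat_axis_angle([1.0, 0.0, 0.0], lat)   # about X

# Rotate by lon about Z, THEN by lat about the camera's LOCAL X:
# intrinsic rotations post-multiply, so the second factor is on the right.
q_intrinsic = quat_mul(q_lon, q_lat)

# Qlat * Qlon instead applies the latitude rotation about the WORLD X
# axis, which is a different orientation in general.
q_extrinsic = quat_mul(q_lat, q_lon)

print(np.allclose(q_intrinsic, q_extrinsic))    # False
```

If the intended motion is "longitude about the local Z first, then latitude about the resulting local X", the intrinsic order Qlon * Qlat (not Qlat * Qlon) matches that description under this convention, so the multiplication order is a natural first thing to verify.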