kamimail

Member
  • Content count

    43

Community Reputation

126 Neutral

About kamimail

  • Rank
    Member
  1. Problems with the FPS Camera

    From Wikipedia (general case): "For n greater than 2, multiplication of n×n rotation matrices is not commutative." http://en.wikipedia.org/wiki/Rotation_matrix Hi Ravnock, thanks for your reply. I don't know where my inference above goes wrong; I hope someone can pinpoint it for me.
  2. Hi everyone, I am a novice in 3D math, and I am currently struggling with a problem about the rotation of an FPS camera. I hope someone can help me. Please allow me to assume that the discussion is based on OpenGL conventions. Suppose the FPS camera starts in an initial state in which its three axes are aligned with the axes of the world frame. I first pitch the camera around the world X axis, and then yaw it around the world Y axis, so I get the following (column-major) equation: camera_orientation_matrix_1 = yaw_yworld_matrix * pitch_xworld_matrix = rotateY(yaw_angle) * rotateX(pitch_angle). Some references tell me that I can get the same orientation by using the local rotation matrices applied in the reverse order, so I first yaw the camera around its local Y axis, then pitch it around its local X axis, i.e. camera_orientation_matrix_2 = pitch_xlocal_matrix * yaw_ylocal_matrix = rotateX(pitch_angle) * rotateY(yaw_angle). Since camera_orientation_matrix_1 equals camera_orientation_matrix_2, I derive the following: rotateY(yaw_angle) * rotateX(pitch_angle) === rotateX(pitch_angle) * rotateY(yaw_angle), but this equation is a fallacy. Can anyone tell me where I went wrong? Thanks! (See the rotation-order sketch after this list.)
  3. Hi all, I want to apply a global illumination method to a batch of 3D building models and save the lighting effect as another texture. I am a novice in this field; is there any library or SDK (Maya?) available that can do the texture-baking task in batch?
  4. Hi Álvaro, thank you for your reply. Could you tell me which classes or modules in CGAL I should use, since CGAL is quite a complex library? Thanks!
  5. Hi everyone, I want to slice a textured triangular mesh into several pieces with a plane. Is there any open-source library that can do the job reliably? Thanks for answering my question!
  6. The precision problem in a 16-bit Z-buffer

    MJP, MrRowl, thanks for your replies. I don't really understand why the w-compensation method suggested by MrRowl would have a negative effect on rasterization. Could someone clarify this for me? Thanks!
  7. The precision problem in a 16-bit Z-buffer

    Hi Ashaman73, some browsers only provide a 16-bit depth buffer for WebGL applications.
  8. Hi all, I am working on a project that aims to develop a virtual globe system using WebGL. We have made the system work, but there are z-fighting problems. I have heard that there is a solution that uses a perspective projection matrix different from the standard OpenGL projection matrix, which can resolve or relieve the precision problem. Does anyone know the equation of that projection matrix? Thank you very much! (See the depth-precision sketch after this list.)
  9. Quaternion from latitude and longitude

    Thanks for all the replies! Actually, I want to orient a camera in a virtual globe application using OpenGL. In this app the globe camera is described by latitude, longitude, and azimuth (the lat/lon are not the position of the camera; they are its viewing target, and there are other parameters such as the distance from the viewing target and the tilt). At the beginning the camera is aligned with the world coordinate frame (the viewing direction of the camera is along the world -Z axis). When the user specifies lat, lon, and azimuth, the camera should be aimed at the specified lat/lon from space at some distance. Given a (longitude, latitude) as the viewing target of the camera, I first rotate the camera by the longitude around its local Z axis, then rotate it by the latitude around its local X axis, so the orientation quaternion is Q' = Qlat * Qlon. The angles in Qlat and Qlon have been halved before constructing the quaternions. I have used Q' in the app, but the result is incorrect. Is there any problem with using a quaternion to orient a camera described by latitude and longitude? I hope someone can pinpoint the problem in my method. Thank you very much!
  10. Hi all, I want to get a quaternion from a latitude and longitude. Could anyone help me construct the quaternion? The coordinate frame details: the Z axis points to the north pole; the X axis points to the intersection of the prime meridian and the equator, in the equatorial plane; the Y axis completes a right-handed coordinate system and is 90 degrees east of the X axis, also in the equatorial plane. It seems that I can get the quaternion from the multiplication of QLat = (sin(lat), 0, 0, cos(lat)) and QLon = (0, 0, sin(lon), cos(lon)), i.e. Quat = QLat * QLon, but the result is incorrect when applying it to the camera. Is there any mistake? Thank you very much! (See the quaternion sketch after this list.)
  11. Hi 0x3a, szecs, thank you for your rapid replies! So it seems that there should be two planes for sketching: one for the ground, and another screen-aligned plane that is always oriented towards the user. Once these planes are established, any position on the screen can be mapped to a vertex on one of the planes by the ray passing through the mouse point on the screen. It is a really good suggestion, but is this approach commonly used in sketching software? (See the ray-plane sketch after this list.)
  12. Hi all, I am involved in a project which needs a 3D sketching system, much like Google's SketchUp. It is evident that a mouse position on the screen corresponds to infinitely many points in 3D world space. How do I get the 3D position of a point when I am trying to draw the vertices of a polyline on the screen? Does anyone know the conventional approach to this problem? Thanks!
  13. Hi, my project needs a robust implementation of 3D Boolean operations for complex geometry composed of meshes (triangles, triangle strips). Is there any open-source library available for this purpose? Thanks in advance!
  14. Hi, I am a novice in real-time rendering, and I have a question about large-scale data rendering. How do I handle the situation where the rendering data (geometry + textures) in a frame is larger than the memory on the GPU? If I can partition the entire data set into a series of smaller packages and commit the packages to the pipeline one by one, using only a specific buffer on the GPU, the problem may be resolved. However, this approach leads to another problem: how can I guarantee that one package in the buffer has been rendered (has passed through the pipeline) so that the current contents of the buffer can be discarded and replaced with the next package? Is there an OpenGL command available for this purpose, or is there another solution? Thanks! (See the sync-object sketch after this list.)
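
Rotation-order sketch (posts 1 and 2). A minimal numerical check, assuming column-major matrices and the column-vector convention (v' = M * v) from the post: rotations about fixed world axes compose by multiplying the later rotation on the left, while rotations about the camera's own local axes compose by multiplying the later rotation on the right. Under that assumption, "pitch about world X, then yaw about world Y" and "yaw about local Y, then pitch about local X" both come out as rotateY * rotateX, so no commutativity is implied. The Mat3 type and helpers are illustrative only.

    #include <cmath>
    #include <cstdio>

    // 3x3 rotation matrix, column-vector convention (v' = M * v).
    struct Mat3 { double m[3][3]; };

    // Rotation about the X axis by 'a' radians.
    Mat3 rotateX(double a) {
        double c = std::cos(a), s = std::sin(a);
        return {{{1, 0, 0}, {0, c, -s}, {0, s, c}}};
    }

    // Rotation about the Y axis by 'a' radians.
    Mat3 rotateY(double a) {
        double c = std::cos(a), s = std::sin(a);
        return {{{c, 0, s}, {0, 1, 0}, {-s, 0, c}}};
    }

    Mat3 mul(const Mat3& A, const Mat3& B) {
        Mat3 C = {};
        for (int r = 0; r < 3; ++r)
            for (int c = 0; c < 3; ++c)
                for (int k = 0; k < 3; ++k)
                    C.m[r][c] += A.m[r][k] * B.m[k][c];
        return C;
    }

    void print(const char* name, const Mat3& M) {
        std::printf("%s\n", name);
        for (int r = 0; r < 3; ++r)
            std::printf("  % .4f % .4f % .4f\n", M.m[r][0], M.m[r][1], M.m[r][2]);
    }

    int main() {
        double pitch = 0.5, yaw = 1.0;  // radians

        // World-axis (extrinsic) order "pitch about world X, then yaw about world Y"
        // left-multiplies the later rotation; local-axis (intrinsic) order "yaw about
        // local Y, then pitch about local X" right-multiplies the later rotation.
        // Both give the same product:
        Mat3 either = mul(rotateY(yaw), rotateX(pitch));

        // The reversed product is a genuinely different matrix, so the two sides of
        // the "commutativity" equation in the post are not actually equal.
        Mat3 swapped = mul(rotateX(pitch), rotateY(yaw));

        print("rotateY * rotateX:", either);
        print("rotateX * rotateY:", swapped);
        return 0;
    }

Printed side by side, the two products differ, which matches the Wikipedia statement quoted in the reply; only the mapping from the local-axis sequence to a matrix order needs a second look.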
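
Depth-precision sketch (post 8). This does not give a replacement projection matrix, but it shows, for an illustrative near/far pair, how the standard OpenGL perspective projection distributes depth across a 16-bit buffer; on a virtual globe nearly the whole far range collapses into the last few representable values, which is the z-fighting the modified projection matrices are meant to relieve. The near/far values and sample distances are assumptions for the demonstration.

    #include <cmath>
    #include <cstdio>

    // Window-space depth in [0, 1] of a point at eye-space distance d in front of
    // the camera, under the standard OpenGL perspective projection with near plane
    // n and far plane f:  z_ndc = (f + n - 2*f*n/d) / (f - n),  depth = 0.5*z_ndc + 0.5
    double windowDepth(double d, double n, double f) {
        double zNdc = (f + n - 2.0 * f * n / d) / (f - n);
        return 0.5 * zNdc + 0.5;
    }

    int main() {
        const double n = 1.0;         // illustrative near plane (metres)
        const double f = 1.0e7;       // illustrative far plane for a whole-globe view
        const double steps = 65535.0; // a 16-bit buffer has 65536 representable values

        const double samples[] = { 1.0, 10.0, 100.0, 1.0e3, 1.0e4, 1.0e5, 1.0e7 };
        for (double d : samples) {
            double depth = windowDepth(d, n, f);
            std::printf("d = %10.0f  ->  depth = %.8f  (16-bit value %5.0f)\n",
                        d, depth, std::floor(depth * steps));
        }
        // With this near/far pair, everything from roughly 1 km outward shares only
        // the last hundred or so of the 65536 values, so distant surfaces cannot be
        // separated and z-fighting appears.
        return 0;
    }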
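
Quaternion sketch (posts 9 and 10). A minimal sketch of building the two axis-angle quaternions in the frame described in post 10 (Z to the north pole, X toward the prime meridian/equator intersection) and composing them as Quat = QLat * QLon. The Quat struct, the sample angles, and the choice of rotation axes are illustrative assumptions rather than a verified fix; when the result looks wrong, the usual suspects are the camera's own axis convention, the sign of the latitude rotation, and which side of the product each quaternion sits on.

    #include <cmath>
    #include <cstdio>

    // Minimal quaternion with components laid out (x, y, z, w), as in the post.
    struct Quat { double x, y, z, w; };

    // Quaternion for a rotation of 'angle' radians about the unit axis (ax, ay, az);
    // note the half-angle, which matches the "angles have been halved" remark.
    Quat fromAxisAngle(double ax, double ay, double az, double angle) {
        double s = std::sin(0.5 * angle);
        return { ax * s, ay * s, az * s, std::cos(0.5 * angle) };
    }

    // Hamilton product a * b: the rotation b is applied first, then a.
    Quat mul(const Quat& a, const Quat& b) {
        return {
            a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
            a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
            a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w,
            a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z
        };
    }

    int main() {
        const double deg = 3.14159265358979323846 / 180.0;
        double lon = 116.0 * deg;   // sample longitude
        double lat =  40.0 * deg;   // sample latitude

        // Frame from post 10: +Z to the north pole, +X toward (lat 0, lon 0).
        // QLon spins about the Z axis; QLat tips about the X axis.
        Quat qLon = fromAxisAngle(0.0, 0.0, 1.0, lon);
        Quat qLat = fromAxisAngle(1.0, 0.0, 0.0, lat);

        // One candidate composition, matching "Quat = QLat * QLon" from the post.
        Quat q = mul(qLat, qLon);

        std::printf("q = (%.4f, %.4f, %.4f, %.4f)\n", q.x, q.y, q.z, q.w);
        return 0;
    }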
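
Ray-plane sketch (posts 11 and 12). A minimal sketch of the plane-mapping idea from the reply, assuming the mouse position has already been unprojected into a world-space ray (origin plus direction). The Vec3 type, the ground plane y = 0, and the sample numbers are illustrative; a screen-aligned plane would simply use the camera's forward vector as the plane normal.

    #include <cmath>
    #include <cstdio>

    struct Vec3 { double x, y, z; };

    double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    Vec3 add(const Vec3& a, const Vec3& b)   { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    Vec3 scale(const Vec3& v, double s)      { return { v.x * s, v.y * s, v.z * s }; }

    // Intersect the ray origin + t*dir (t >= 0) with the plane dot(n, p) = d.
    // Returns false when the ray is parallel to the plane or points away from it.
    bool rayPlane(const Vec3& origin, const Vec3& dir,
                  const Vec3& n, double d, Vec3* hit) {
        double denom = dot(n, dir);
        if (std::fabs(denom) < 1e-9) return false;   // parallel to the plane
        double t = (d - dot(n, origin)) / denom;
        if (t < 0.0) return false;                   // plane is behind the ray
        *hit = add(origin, scale(dir, t));
        return true;
    }

    int main() {
        // A ray assumed to come from unprojecting the mouse position (sample values).
        Vec3 origin = { 0.0, 5.0, 10.0 };
        Vec3 dir    = { 0.0, -0.5, -1.0 };

        // Ground plane y = 0, i.e. normal (0, 1, 0) and d = 0.
        Vec3 hit;
        if (rayPlane(origin, dir, { 0.0, 1.0, 0.0 }, 0.0, &hit))
            std::printf("polyline vertex at (%.2f, %.2f, %.2f)\n", hit.x, hit.y, hit.z);
        return 0;
    }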
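
Sync-object sketch (post 14). OpenGL 3.2+ provides sync objects (glFenceSync / glClientWaitSync), which signal when previously issued commands have finished on the GPU; that is one way to know a package has passed through the pipeline before its buffer is reused. The sketch assumes a GL 3.2+ context; uploadNextPackage, drawPackage, and the glad loader header are hypothetical placeholders, not the project's actual code.

    #include <glad/glad.h>   // assumption: whatever GL loader the project already uses

    // Hypothetical application hooks: fill the buffer with one package, then draw it.
    void uploadNextPackage(GLuint vbo);
    void drawPackage();

    // Render a large data set one package at a time through a single GPU buffer,
    // waiting on a sync object before the buffer is overwritten.
    void streamPackages(GLuint vbo, int packageCount) {
        for (int i = 0; i < packageCount; ++i) {
            uploadNextPackage(vbo);   // e.g. glBufferSubData / glMapBufferRange
            drawPackage();            // issue the draw calls for this package

            // Place a fence after the draw calls; it becomes signaled once every
            // command issued before it has completed on the GPU.
            GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);

            // Wait (flushing queued commands) until the fence is signaled, so the
            // buffer contents are no longer needed and can be replaced.
            GLenum status;
            do {
                status = glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, 1000000);
            } while (status == GL_TIMEOUT_EXPIRED);
            glDeleteSync(fence);
        }
    }

In practice two or three buffers used in rotation avoid stalling the CPU on every package; the fence then only guards the oldest buffer before it is refilled.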