About keathmilligan

  1. Hi, my project targets mobile devices, so I'm looking for ways to optimize it as much as possible. One area I am particularly concerned about is ray-model intersection. Here is the approach I am currently using: each model has a bounding box oriented with the model (an OBB). The OBB is generated when the model is loaded, and its points are transformed when an intersect test needs to be performed (with logic to avoid regenerating it multiple times in a frame, or when the object's position/transformation hasn't changed, etc.). When it comes time to do the intersect test:
[list]
[*]First, before transforming the OBB, do a quick check that the ray isn't headed in completely the opposite direction. If it is, exit: no intersection.
[*]Next, transform the OBB if necessary.
[*]Perform an intersect test against the sides of the OBB. If the OBB is not intersected, exit: no intersection.
[*]If the OBB is intersected, loop through each face of the model:
[*]Calculate a normal for the face.
[*]If the face points away from the ray, proceed to the next face.
[*]Otherwise, perform a Moller-Trumbore intersection test on the triangle: return TRUE if an intersection is found, or proceed to the next face if not.
[/list]
What could I do to optimize this, if anything, and still get similar results? Looping through each face is painful, but my models have fairly low polygon counts, so it's not [i]that[/i] bad. One capability I am adding is the ability to manually specify the OBB and to have multiple OBBs for an object so they follow its shape more closely. Another thought is to simply return TRUE when the OBB is intersected and the object is far away, without going through each face for more precise detection.
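A minimal sketch of the broad-phase idea described above, in Java (all names are mine, not from the original code). One common variation that avoids transforming the OBB corners at all: transform the ray into the model's local space (one point and one vector) and run a standard slab test against the model's static local-space AABB.

```java
final class RayAabb {
    // Returns true if the ray origin + t*dir (t >= 0) hits the box [min, max].
    // Standard "slab" test: intersect the ray's t-interval with each axis slab.
    static boolean hits(float[] origin, float[] dir, float[] min, float[] max) {
        float tNear = 0f, tFar = Float.POSITIVE_INFINITY;
        for (int axis = 0; axis < 3; axis++) {
            if (Math.abs(dir[axis]) < 1e-8f) {
                // Ray parallel to this slab: origin must already lie inside it.
                if (origin[axis] < min[axis] || origin[axis] > max[axis]) return false;
            } else {
                float inv = 1f / dir[axis];
                float t0 = (min[axis] - origin[axis]) * inv;
                float t1 = (max[axis] - origin[axis]) * inv;
                if (t0 > t1) { float tmp = t0; t0 = t1; t1 = tmp; }
                tNear = Math.max(tNear, t0);
                tFar = Math.min(tFar, t1);
                if (tNear > tFar) return false; // the slab intervals don't overlap
            }
        }
        // Starting tNear at 0 also rejects boxes entirely behind the ray,
        // which subsumes the "headed the opposite direction" pre-check.
        return true;
    }
}
```

Because tNear starts at 0, a box entirely behind the ray origin fails automatically, so the separate direction pre-check becomes unnecessary.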
  2. Ray/Triangle Intersection Difficulties

    Thanks - finally figured out what was going on - my edge vectors were normalized when they should not have been.
  3. I realize this topic is covered extensively, but I've looked at dozens of academic presentations and I can't seem to make any of them work. I am attempting to detect [b]a)[/b] whether a ray (point/direction) intersects a given triangle and [b]b)[/b] where (in world coordinates). I adapted the following code from the lighthouse 3D article:
[source lang="cpp"]
int isx3d(Line3D line, Plane3D plane, Number3D *where) {
    Number3D e1 = vec3d(plane.v0, plane.v1);
    Number3D e2 = vec3d(plane.v0, plane.v2);
    Number3D h = cross3d(line.p1, e2);
    float a = dot3d(e1, h);
    if (a > -0.00001 && a < 0.00001)  // (close_enough(a, 0.0f))
        return FALSE;
    float f = 1.0f/a;
    Number3D s = vec3d(plane.v0, line.p0);
    float u = f*dot3d(s, h);
    if (u < 0.0f || u > 1.0f)
        return FALSE;
    Number3D q = cross3d(s, e1);
    float v = f*dot3d(line.p1, q);
    if (v < 0.0f || u+v > 1.0f)
        return FALSE;
    float t = f*dot3d(e2, q);
    if (t > 0.00001) {
        float w = 1.0f-(u+v);
        Number3D ww;
        ww.x = w*plane.v0.x + u*plane.v1.x + v*plane.v2.x;
        ww.y = w*plane.v0.y + u*plane.v1.y + v*plane.v2.y;
        ww.z = w*plane.v0.z + u*plane.v1.z + v*plane.v2.z;
        *where = ww;
        return TRUE;
    }
    return FALSE;
}
[/source]
This code seems very similar to other examples, but it has a couple of problems. First, it works when the point of origin is on the negative side of the triangle, but when the point of origin is on the positive side, it can trigger a false positive (as if it also considered the ray cast in the negative direction from the point of origin). Second, the resulting point is wrong. I have tried transforming it every way I can think of, but it either walks across the face of the triangle in a diagonal line (instead of horizontally when I just move the point of origin on the X/Y plane) or it moves along a curve(?).
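For comparison, a self-contained Moller-Trumbore sketch in Java (to match the rest of the thread's code; all names here are mine). Two details matter for the symptoms described above: the edge vectors are plain differences and must NOT be normalized (the fix the thread arrives at), and the direction argument is a true direction vector rather than a second point; the final t > 0 check rejects hits behind the origin, which is what prevents the "positive side" false positives.

```java
final class RayTriangle {
    static final float EPS = 1e-6f;

    // Returns the hit point in world coordinates, or null if no hit.
    // orig = ray origin, dir = ray DIRECTION (a vector, not a second point);
    // v0/v1/v2 = triangle vertices. Note the edge vectors stay unnormalized.
    static float[] intersect(float[] orig, float[] dir,
                             float[] v0, float[] v1, float[] v2) {
        float[] e1 = sub(v1, v0), e2 = sub(v2, v0);
        float[] h = cross(dir, e2);
        float a = dot(e1, h);
        if (a > -EPS && a < EPS) return null;   // ray parallel to triangle
        float f = 1f / a;
        float[] s = sub(orig, v0);
        float u = f * dot(s, h);
        if (u < 0f || u > 1f) return null;
        float[] q = cross(s, e1);
        float v = f * dot(dir, q);
        if (v < 0f || u + v > 1f) return null;
        float t = f * dot(e2, q);
        if (t <= EPS) return null;              // hit is behind the origin
        return new float[] { orig[0] + t*dir[0], orig[1] + t*dir[1], orig[2] + t*dir[2] };
    }

    static float[] sub(float[] a, float[] b) { return new float[]{a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
    static float dot(float[] a, float[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
    static float[] cross(float[] a, float[] b) {
        return new float[]{a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]};
    }
}
```

The returned point is orig + t*dir, which is equivalent to the barycentric blend w*v0 + u*v1 + v*v2 in the posted code but avoids reusing u/v after they may have drifted.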
  4. Looks like the JOGL version of gluUnProject is broken. I wrote my own version based on the C GLU source and it works.
[code]
private float[] unProject(float winX, float winY, float winZ,
                          Matrix modelView, Matrix projection, int[] view) {
    float[] in = new float[4];
    Matrix m = Matrix.multiply(modelView, projection).inverse();
    in[0] = (winX-(float)view[0])/(float)view[2]*2f-1f;
    in[1] = (winY-(float)view[1])/(float)view[3]*2f-1f;
    in[2] = 2f*winZ-1f;
    in[3] = 1f;
    float[] out = new float[3];
    out[0] = m.values[0]*in[0]+m.values[4]*in[1]+m.values[8]*in[2]+m.values[12]*in[3];
    out[1] = m.values[1]*in[0]+m.values[5]*in[1]+m.values[9]*in[2]+m.values[13]*in[3];
    out[2] = m.values[2]*in[0]+m.values[6]*in[1]+m.values[10]*in[2]+m.values[14]*in[3];
    // The C GLU reference also computes w and divides by it; with a
    // perspective projection w != 1 after the inverse transform:
    float w = m.values[3]*in[0]+m.values[7]*in[1]+m.values[11]*in[2]+m.values[15]*in[3];
    if (w != 0f) {
        out[0] /= w; out[1] /= w; out[2] /= w;
    }
    return out;
}
[/code]
  5. The values member of the Matrix class is just an array of 16 floats. This is Java, so I don't have GLfloat. Here is the same code without using that class:
[code]
float[] p = new float[16];
float[] m = new float[16];
int[] v = new int[4];
gl.glGetFloatv(GL.GL_PROJECTION_MATRIX, p, 0);
gl.glGetFloatv(GL.GL_MODELVIEW_MATRIX, m, 0);
gl.glGetIntegerv(GL.GL_VIEWPORT, v, 0);
float[] o = new float[3];
GLU.gluUnProject(100f, 100f, 0f, m, 0, p, 0, scene.camera.viewport, 0, o, 0);
[/code]
The results are the same. Like I said, gluProject, which also uses the projection and modelview matrices, works great, so I don't think it is an issue with those values.
  6. Thanks, I've implemented the frustum extraction and I think I can get what I want out of it, but it looks like it will be a lot more work. What I don't understand is why gluUnProject is not working. If I pass 0 as the window Z coordinate, I would expect to get a point on the near clip plane. For example, in this JOGL code:
[code]
Matrix p = new Matrix();
Matrix m = new Matrix();
int[] v = new int[4];
gl.glGetFloatv(GL.GL_PROJECTION_MATRIX, p.values, 0);
gl.glGetFloatv(GL.GL_MODELVIEW_MATRIX, m.values, 0);
gl.glGetIntegerv(GL.GL_VIEWPORT, v, 0);
float[] o = new float[3];
GLU.gluUnProject(100f, 100f, 0f, m.values, 0, p.values, 0, scene.camera.viewport, 0, o, 0);
[/code]
I'd expect this to give me a value close to the camera's position; instead I get wildly large values in o[0] and o[1]. It doesn't matter what I pass for the window x and y. gluProject works.
  7. I am trying to determine the 3D world-space coordinates of the 4 corners of the viewport, such that if I inset them a little and drew lines connecting these coordinates, I would see a frame around the window. I think what I am looking for is essentially the world-space coordinates of the near clip plane for whatever orientation the camera is currently in. I've tried using gluUnProject, but I'm not getting the expected results: it returns coordinates that are way larger than my scene. I've tried various values for the window Z coordinate, but it doesn't seem to make a difference.
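For a symmetric perspective frustum (gluPerspective-style), the near-plane corners can also be computed directly from the camera parameters, with no unproject call at all. A hypothetical sketch (the fovY/aspect/basis-vector names are my assumptions, not from the original code; it assumes the camera's forward/up/right vectors are already unit length and orthogonal):

```java
final class NearPlaneCorners {
    // Returns {bottom-left, bottom-right, top-right, top-left}, each a
    // world-space point on the near plane.
    static float[][] corners(float fovYDeg, float aspect, float zNear,
                             float[] eye, float[] forward, float[] up, float[] right) {
        // Half-extents of the near plane from the vertical field of view.
        float halfH = zNear * (float) Math.tan(Math.toRadians(fovYDeg) / 2.0);
        float halfW = halfH * aspect;
        float[] center = add(eye, scale(forward, zNear)); // center of the near plane
        return new float[][] {
            add(add(center, scale(right, -halfW)), scale(up, -halfH)), // bottom-left
            add(add(center, scale(right,  halfW)), scale(up, -halfH)), // bottom-right
            add(add(center, scale(right,  halfW)), scale(up,  halfH)), // top-right
            add(add(center, scale(right, -halfW)), scale(up,  halfH)), // top-left
        };
    }
    static float[] add(float[] a, float[] b) { return new float[]{a[0]+b[0], a[1]+b[1], a[2]+b[2]}; }
    static float[] scale(float[] a, float s) { return new float[]{a[0]*s, a[1]*s, a[2]*s}; }
}
```

Insetting each corner slightly toward the center and drawing lines between them would give the frame described above.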
  8. Object orientation from quaternion

    Thanks so much, guys - I got it working! I took the conjugate of my camera's quaternion and converted that into a matrix and then used that to rotate the object. Turns out my quaternion-to-euler function was working all along as well, so I got my angles to boot! Thanks again. edit: clarification: I don't use the angles for anything graphical, I use those for logic elsewhere in the game.
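A minimal sketch of the fix described above, in Java with my own names (not the original classes): conjugate the camera quaternion, convert it to a rotation matrix, and rotate the object with that matrix. For a unit quaternion the conjugate is its inverse, so this "undoes" the camera rotation.

```java
final class QuatConjugate {
    // q = {w, x, y, z}; the conjugate is the inverse for unit quaternions.
    static float[] conjugate(float[] q) { return new float[]{q[0], -q[1], -q[2], -q[3]}; }

    // 3x3 rotation matrix (row-major) for a unit quaternion {w, x, y, z}.
    static float[][] toMatrix(float[] q) {
        float w = q[0], x = q[1], y = q[2], z = q[3];
        return new float[][] {
            {1-2*(y*y+z*z),   2*(x*y-w*z),   2*(x*z+w*y)},
            {  2*(x*y+w*z), 1-2*(x*x+z*z),   2*(y*z-w*x)},
            {  2*(x*z-w*y),   2*(y*z+w*x), 1-2*(x*x+y*y)},
        };
    }

    static float[] rotate(float[][] m, float[] v) {
        return new float[] {
            m[0][0]*v[0] + m[0][1]*v[1] + m[0][2]*v[2],
            m[1][0]*v[0] + m[1][1]*v[1] + m[1][2]*v[2],
            m[2][0]*v[0] + m[2][1]*v[1] + m[2][2]*v[2]};
    }
}
```

Rotating by toMatrix(q) and then by toMatrix(conjugate(q)) returns a vector to where it started, which is an easy sanity check for the camera/object pairing.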
  9. Object orientation from quaternion

    [quote name='jyk' timestamp='1297044293' post='4770718'] How are you uploading your camera transform to OpenGL? (Or did you already answer that?) [/quote]
    I'm using gluLookAt to set up the camera.
    [quote name='jyk' timestamp='1297044293' post='4770718'] In any case, since you're using the fixed-function pipeline, you can upload any transform you want to OpenGL using glLoadMatrix*() or glMultMatrix*(). [/quote]
    Unfortunately, I don't know how to prepare the matrix.
    [quote name='karwosts' timestamp='1297044751' post='4770719'] I don't see why you have to do any math here at all? If you want your object to always be positioned right in front of the camera, than you just want to bypass the world/camera transform entirely, and draw your model directly in view space. I think you can draw your fixed object with an identity modelview matrix, maybe with one -z translation applied to push it out the distance you want. It looks to me that you're trying to come up with a matrix, that when multiplied with your camera matrix, gives you identity Note that I'm assuming you don't actually need the objects position for anything other than to draw it (that was the impression that I got). If you actually want to know the worldspace XYZ coordinates of this floating thing than yeah you'll have to do some mathematics to get it, but if you're just trying to draw something fixed on the screen, just bypass the view transform entirely. [/quote]
    Yeah, that's true - it does seem like I'm going to a lot of unnecessary trouble. If this was just a HUD or something like that, I'd just do what you're suggesting, but this will be the starting point for things that do need to be in world space. For example, a projectile like a missile: I need to first position and orient it with the camera, and then it will go off on its own trajectory.
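The missile case above really does need a world-space spawn point. A hypothetical sketch (names are mine): rotate a view-space offset like (0, 0, -d) by the camera quaternion to get the world-space forward direction, add it to the camera position, and give the projectile the camera's orientation quaternion as its starting orientation.

```java
final class SpawnInFront {
    // Rotate vector v by unit quaternion q = {w, x, y, z} using
    // v' = v + 2w*(u x v) + 2*u x (u x v), where u is the vector part.
    static float[] rotate(float[] q, float[] v) {
        float[] u = {q[1], q[2], q[3]};
        float[] t = scale(cross(u, v), 2f);
        return add(add(v, scale(t, q[0])), cross(u, t));
    }

    // World position d units along the camera's forward axis (-Z in view space).
    static float[] spawnPosition(float[] camPos, float[] camQuat, float d) {
        float[] forward = rotate(camQuat, new float[]{0f, 0f, -1f});
        return add(camPos, scale(forward, d));
    }

    static float[] add(float[] a, float[] b) { return new float[]{a[0]+b[0], a[1]+b[1], a[2]+b[2]}; }
    static float[] scale(float[] a, float s) { return new float[]{a[0]*s, a[1]*s, a[2]*s}; }
    static float[] cross(float[] a, float[] b) {
        return new float[]{a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]};
    }
}
```

Once spawned this way, the projectile's position and orientation are independent world-space state, so it can fly off on its own trajectory.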
  10. Object orientation from quaternion

    Yeah, I suspected the answer might be something along those lines. Unfortunately, I don't know how to put it all together. I can convert my camera quaternion to a matrix, but where do I go from there? Currently the object is placed and oriented by calling glTranslatef followed by 3 successive calls to glRotatef. I'm not sure how to go about replacing that with a matrix.
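One way to replace the glTranslatef + 3x glRotatef sequence is a single glMultMatrixf call with a column-major 4x4 built from the quaternion plus the translation. A sketch with hypothetical names (not the poster's classes); OpenGL expects the 16 floats in column-major order:

```java
final class QuatToGlMatrix {
    // Column-major 4x4 (as glMultMatrixf expects) from a unit quaternion
    // {w, x, y, z} plus a translation: rotation in the upper-left 3x3,
    // translation in the fourth column.
    static float[] modelMatrix(float[] q, float tx, float ty, float tz) {
        float w = q[0], x = q[1], y = q[2], z = q[3];
        return new float[] {
            1-2*(y*y+z*z),   2*(x*y+w*z),   2*(x*z-w*y), 0f,  // column 0
              2*(x*y-w*z), 1-2*(x*x+z*z),   2*(y*z+w*x), 0f,  // column 1
              2*(x*z+w*y),   2*(y*z-w*x), 1-2*(x*x+y*y), 0f,  // column 2
            tx, ty, tz, 1f                                    // column 3: translation
        };
    }
}
// Usage (JOGL fixed-function; object/field names are hypothetical):
//   gl.glPushMatrix();
//   gl.glMultMatrixf(QuatToGlMatrix.modelMatrix(object.orientation, ox, oy, oz), 0);
//   drawObject(gl);
//   gl.glPopMatrix();
```

This sidesteps the Euler-angle extraction entirely for drawing, though the angles may still be useful for game logic elsewhere.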
  11. Object orientation from quaternion

    I have a quaternion that represents my camera direction. Using this, I can update the camera's position and calculate the look-at coordinates and up vector. All this works great, and I've got a nice demo of flying around in a 3D scene with control of the camera's heading, pitch and roll. Now I want to place an object directly in front of the camera and keep its position and orientation locked in sync with the camera's - that is, it should just appear fixed in front of the camera. Using the camera's direction quaternion, I can correctly position the object, but I can't figure out how to calculate the Euler angles I need to pass to glRotate in order to orient it. I've tried numerous examples from euclidianspace.com, including the quaternion-to-euler conversion, but nothing is working: depending on which direction I'm flying, the object twists around in the wrong ways, flips upside down, or faces backwards. I really tried to do my homework before posting - I've read and tried dozens of different approaches, but I think I'm probably missing something fundamental. Even if there is some other way to orient the object, I would really like to be able to determine the Euler angles from the camera direction for other uses in the program. Here is the quaternion-to-euler code. I keep coming back to it because it seems like it should be what I'm looking for:
[code]
public static Number3d toEuler(Quaternion q) {
    float test = q.x*q.y + q.z*q.w;
    // handle singularities
    if (test > 0.4999f)
        return new Number3d(
            2f*(float)Math.atan2(q.x, q.w)*r2d,
            (float)Math.PI/2*r2d,
            0f);
    if (test < -0.4999f)
        return new Number3d(
            -2f*(float)Math.atan2(q.x, q.w)*r2d,
            (float)-Math.PI/2*r2d,
            0f);
    float sqw = q.w*q.w;
    float sqx = q.x*q.x;
    float sqy = q.y*q.y;
    float sqz = q.z*q.z;
    return new Number3d(
        (float)Math.atan2(2*q.y*q.w - 2*q.x*q.z, sqx - sqy - sqz + sqw)*r2d,
        (float)Math.asin(2*test)*r2d,
        (float)Math.atan2(2*q.x*q.w - 2*q.y*q.z, -sqx + sqy - sqz + sqw)*r2d);
}
[/code]
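A useful way to sanity-check a conversion like the one above is to round-trip it: build a quaternion from known angles using the same convention, convert back, and compare. A sketch in radians with my own names (the convention here is the heading-about-Y, attitude-about-Z, bank-about-X order that the posted toEuler assumes; if the round trip fails, the conversion and the camera's quaternion use different conventions, which would produce exactly the "flips and twists" symptoms):

```java
final class EulerQuat {
    // heading = rotation about Y, attitude = about Z, bank = about X,
    // composed in that order: q = rotY(heading) * rotZ(attitude) * rotX(bank).
    static float[] toQuaternion(float heading, float attitude, float bank) {
        float c1 = (float) Math.cos(heading/2),  s1 = (float) Math.sin(heading/2);
        float c2 = (float) Math.cos(attitude/2), s2 = (float) Math.sin(attitude/2);
        float c3 = (float) Math.cos(bank/2),     s3 = (float) Math.sin(bank/2);
        return new float[] {
            c1*c2*c3 - s1*s2*s3,  // w
            s1*s2*c3 + c1*c2*s3,  // x
            s1*c2*c3 + c1*s2*s3,  // y
            c1*s2*c3 - s1*c2*s3   // z
        };
    }

    // Same math as the posted toEuler, in radians, on a {w, x, y, z} array.
    static float[] toEuler(float[] q) {
        float w = q[0], x = q[1], y = q[2], z = q[3];
        float test = x*y + z*w;
        if (test > 0.4999f)
            return new float[]{ 2f*(float)Math.atan2(x, w),  (float)Math.PI/2, 0f};
        if (test < -0.4999f)
            return new float[]{-2f*(float)Math.atan2(x, w), -(float)Math.PI/2, 0f};
        return new float[] {
            (float) Math.atan2(2*y*w - 2*x*z, 1 - 2*y*y - 2*z*z),
            (float) Math.asin(2*test),
            (float) Math.atan2(2*x*w - 2*y*z, 1 - 2*x*x - 2*z*z)
        };
    }
}
```

If toEuler(toQuaternion(h, a, b)) returns (h, a, b) for angles away from the singularities, the conversion itself is fine and any remaining misbehavior comes from how the angles are fed to glRotate (axis order and degrees vs. radians both matter there).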