Community Reputation

122 Neutral

About draculr

  1. Get near and far plane values?

    Quote:Original post by haegarr However, the inverse of the VIEW matrix is identical to the local-to-global transformation of the camera, and you said that you have "the position, direction, up and right vector" at hand, so you have all 4 columns of the matrix already. Why fetching them once again? I am still unsure how exactly I can get the inverse of the view matrix using the camera data that I have. Do you have any examples of that? Once I have that, do I just multiply the 8 frustum points I have against that matrix? Or do I need to do that multiplication against some other values?
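What haegarr describes can be sketched directly: the inverse of the view matrix is the camera's local-to-global transform, and its columns are exactly the camera vectors already at hand. A minimal sketch, assuming the right/up/direction vectors are unit length and orthogonal (the function and type names here are illustrative, not from any post):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Column-major 4x4 in the OpenGL convention: m[col * 4 + row].
using Mat4 = std::array<float, 16>;
struct Vec3 { float x, y, z; };

// Build the camera's local-to-global transform -- the inverse of the
// view matrix -- straight from the camera vectors. In OpenGL the
// camera looks down its local -Z, so the third column is -direction.
// Assumes right/up/direction are unit length and mutually orthogonal.
Mat4 cameraToWorld(Vec3 right, Vec3 up, Vec3 direction, Vec3 position)
{
    Mat4 m{};
    m[0]  = right.x;      m[1]  = right.y;      m[2]  = right.z;      m[3]  = 0.0f;
    m[4]  = up.x;         m[5]  = up.y;         m[6]  = up.z;         m[7]  = 0.0f;
    m[8]  = -direction.x; m[9]  = -direction.y; m[10] = -direction.z; m[11] = 0.0f;
    m[12] = position.x;   m[13] = position.y;   m[14] = position.z;   m[15] = 1.0f;
    return m;
}

// Multiply a point (w = 1) by the matrix: view space -> world space.
Vec3 transformPoint(const Mat4 &m, Vec3 v)
{
    return { m[0] * v.x + m[4] * v.y + m[8]  * v.z + m[12],
             m[1] * v.x + m[5] * v.y + m[9]  * v.z + m[13],
             m[2] * v.x + m[6] * v.y + m[10] * v.z + m[14] };
}
```

Each of the 8 view-space frustum points would then go through transformPoint once; no other multiplication is needed.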
  2. Get near and far plane values?

    Quote:Original post by Katie After - it's the function that allows unproject to do the work of turning screen->world coordinates.
    Quote:Original post by haegarr With these values e.g. the upper left far corner is [ -wfar, hfar, far ] in view space. This still needs to be multiplied by the inverse of the VIEW matrix, of course, to have the point in global space.
    I believe this was my problem. I tried what Katie suggested and it does seem to give me the correct frustum values; however, the far plane is tied to my camera's far plane, and I need to define a shorter one. Using the trigonometric approach I can get my 8 points, but my problem is that I am not multiplying them by the inverse of the view matrix (by that, do you mean the modelview matrix?). What is the best way of doing that? Do I need to use gluUnProject? Or can I just get the inverse of the modelview matrix (can I get that directly from OpenGL?) and multiply that by each of my 8 points?
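On the last question: OpenGL does not hand back an inverted matrix, but a modelview matrix built by gluLookAt is a rigid transform (rotation plus translation), so it can be inverted cheaply rather than with a general 4x4 inverse. A sketch under that assumption (the function name is illustrative):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Column-major 4x4, OpenGL layout: m[col * 4 + row]. The matrix
// itself could be fetched with glGetFloatv(GL_MODELVIEW_MATRIX, ...).
using Mat4 = std::array<float, 16>;

// Invert a rotation+translation matrix: transpose the 3x3 rotation
// block, then rotate and negate the translation (t' = -R^T * t).
// Only valid while the modelview contains no scaling or shearing.
Mat4 invertRigid(const Mat4 &m)
{
    Mat4 r{};
    // Transpose the rotation block.
    r[0] = m[0]; r[4] = m[1]; r[8]  = m[2];
    r[1] = m[4]; r[5] = m[5]; r[9]  = m[6];
    r[2] = m[8]; r[6] = m[9]; r[10] = m[10];
    // New translation: t' = -R^T * t.
    r[12] = -(r[0] * m[12] + r[4] * m[13] + r[8]  * m[14]);
    r[13] = -(r[1] * m[12] + r[5] * m[13] + r[9]  * m[14]);
    r[14] = -(r[2] * m[12] + r[6] * m[13] + r[10] * m[14]);
    r[15] = 1.0f;
    return r;
}
```

Multiplying the 8 view-space corner points by this inverse gives the world-space corners, and since the corners are computed trigonometrically, any shorter far distance can be substituted without touching the camera's actual far plane.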
  3. Get near and far plane values?

    I do have my camera information easily accessible (the position, direction, up and right vectors, fov, screen ratio, and near and far distances). I have tried computing my frustum using the following tutorial: http://www.lighthouse3d.com/opengl/viewfrustum/ but it does not seem to work. I am also confused about whether I need to do this before or after my gluLookAt() call.
  4. I have been stumped on this for a while, so I was wondering if anyone could help. Is there a way to get the 8 xyz vertices (in world space) that make up the camera frustum? As in, the 4 vertices making up the near plane and the 4 vertices making up the far plane?
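The question asked here can be answered in one function: walk out along the view direction to the near and far plane centers, then step along the up and right vectors by the half-height and half-width each plane gets from the field of view. A minimal sketch, assuming dir/up/right are unit length and orthogonal and the fov is the full vertical angle in degrees (names are illustrative):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Writes the 8 world-space frustum corners: 4 near-plane points then
// 4 far-plane points, each in the order top-left, top-right,
// bottom-left, bottom-right. farD can be any distance shorter than
// the camera's real far plane.
void frustumCorners(Vec3 pos, Vec3 dir, Vec3 up, Vec3 right,
                    float fovYDeg, float aspect,
                    float nearD, float farD, Vec3 out[8])
{
    const float t  = std::tan(fovYDeg * 0.5f * 3.14159265358979f / 180.0f);
    const float nh = nearD * t, nw = nh * aspect;   // near half-height / half-width
    const float fh = farD  * t, fw = fh * aspect;   // far half-height / half-width
    const Vec3 nc = pos + dir * nearD;              // near plane center
    const Vec3 fc = pos + dir * farD;               // far plane center
    out[0] = nc + up * nh - right * nw;  out[1] = nc + up * nh + right * nw;
    out[2] = nc - up * nh - right * nw;  out[3] = nc - up * nh + right * nw;
    out[4] = fc + up * fh - right * fw;  out[5] = fc + up * fh + right * fw;
    out[6] = fc - up * fh - right * fw;  out[7] = fc - up * fh + right * fw;
}
```

Because everything is expressed in the camera's own basis vectors, the result is already in world space and no gluUnProject call is required.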
  5. OpenGL Near and Far planes...

    I was looking at that as well, but how do you get the x, y, z coordinates for the edge points of the near and far planes? That only returns a single floating-point value.
  6. Hey, I want the coordinates of the edge points of my near and far planes. I have been looking at the following tutorial: http://www.lighthouse3d.com/opengl/viewfrustum/index.php?gimp They do the following:

        void FrustumG::setCamDef(Vec3 &p, Vec3 &l, Vec3 &u)
        {
            Vec3 dir, nc, fc, X, Y, Z;

            // compute the Z axis of camera
            // this axis points in the opposite direction from
            // the looking direction
            Z = p - l;
            Z.normalize();

            // X axis of camera with given "up" vector and Z axis
            X = u * Z;
            X.normalize();

            // the real "up" vector is the cross product of Z and X
            Y = Z * X;

            // compute the centers of the near and far planes
            nc = p - Z * nearD;
            fc = p - Z * farD;
            ...

    I do that and draw the near and far planes with GL_LINES, and I can see my near plane in front of me. To make matters worse, if I spin the camera around, the near plane stays exactly where it is. What is wrong with that code? Is it not taking something into account? Just for extra information, my code is as follows:

        Vector3 position = camera->GetPosition();
        Vector3 direction = camera->GetViewVector();
        direction.normalize();
        Vector3 upVector = camera->GetUpVector();
        Vector3 rightVector = camera->GetRightVector();
        GLfloat screenRatio = camera->GetScreenRatio();
        GLfloat fov = camera->GetFieldOfView();
        GLfloat nearDist = camera->GetNearDistance();

        // get the near and far frustum plane's width and height
        GLfloat tang = tan(fov * ANG2RAD);
        GLfloat nearHeight = 2 * tan(fov / 2) * nearDist;
        GLfloat nearWidth = nearHeight * screenRatio;
        GLfloat farHeight = 2 * tan(fov / 2) * FARDIST;
        GLfloat farWidth = farHeight * screenRatio;

        Vector3 zAxis = position - direction;
        zAxis.normalize();
        Vector3 xAxis = upVector * zAxis;
        xAxis.normalize();
        Vector3 yAxis = zAxis * xAxis;

        Vector3 nc = position - zAxis * nearDist;
        Vector3 fc = position - zAxis * FARDIST;
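Two things in the code above look suspect, which would explain a near plane that never moves. In the tutorial, l is a *point* the camera looks at, so Z = p - l yields a direction; the code here passes a unit view direction into the same formula, so position - direction mixes a point with a vector. And tang is computed with the degree-to-radian conversion but never used, while the height formulas call tan(fov / 2) on what is presumably a value in degrees. A sketch of the corrected pieces, under those assumptions:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// If GetViewVector() already returns a unit direction, the camera's
// Z axis is simply its negation; "position - direction" produces a
// Z axis that barely changes as the camera turns.
Vec3 cameraZAxis(Vec3 direction)
{
    return { -direction.x, -direction.y, -direction.z };
}

// tan() takes radians; if the fov is stored in degrees it must be
// converted, and the half-angle used consistently.
float planeHeight(float fovDegrees, float dist)
{
    const float deg2rad = 3.14159265358979f / 180.0f;
    return 2.0f * std::tan(fovDegrees * 0.5f * deg2rad) * dist;
}
```

With a 90-degree vertical fov, for example, a plane one unit away is two units tall, which the degree-as-radian version would get badly wrong.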
  7. Hey, I need the smallest and largest x and z values from my frustum (between the near clipping plane I have set in my perspective view, and an arbitrary far clipping plane)... I was wondering what the most efficient way would be to get that data? Thanks
  8. Thanks for the responses. I guess I shouldn't worry about it much for now then :)
  9. So, right now I am just dumping lit polygons on screen using vertex arrays (with normal and UV data). Pushing out 50,000 polygons I get 550 fps; 100,000 polygons nets me 280 fps (not rendering back faces). This is running on a Core 2 Duo at 3.2 GHz, a 320 MB 8800 GTS, and 2 GB of RAM. Now, I have no idea if that is good or bad. Obviously it is a fairly high frame rate, but the system is powerful as well. 100,000 polygons isn't that much for a modern game scene, is it? So once I start adding shaders, soft shadows, particle effects... it seems like my fps would be blown. So, how can you tell if that is a good fps figure?
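One way to judge figures like these: fps is the reciprocal of frame time, so differences in fps are misleading, while milliseconds per frame are additive and can be compared against a fixed budget. A trivial conversion makes the numbers above comparable:

```cpp
#include <cassert>
#include <cmath>

// Convert frames per second to milliseconds per frame. Costs of
// extra work add up in milliseconds, not in fps, and can be held
// against a target budget (e.g. ~16.7 ms per frame for 60 fps).
double msPerFrame(double fps)
{
    return 1000.0 / fps;
}
```

With the figures above, 550 fps is about 1.82 ms per frame and 280 fps about 3.57 ms, so the extra 50,000 polygons cost roughly 1.75 ms; against a 16.7 ms budget for 60 fps that still leaves a lot of headroom for shaders, shadows, and particles.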
  10. Proper rain - how feasible?

    Nik02, that water accumulation idea would be great! I wonder how much effort would be needed to implement it. The NVIDIA demo seems to do it properly; it looks like they are rendering ~5 million particles.
  11. Proper rain - how feasible?

    You may notice they never transition from indoors to outdoors in the ATI Toyshop demo. That is because they are not actually using individual particles with collision detection, but rather a scrolling texture (IIRC). I mean, the effect looks awesome and makes for a very nice-looking demo; however, if you had any combined indoor/outdoor areas you couldn't use it properly in a game. That is an interesting write-up, Nik02; I'll take a look into it.
  12. I want to develop a rain system. Basically it would have at least two levels of detail. The closest particles would be properly shaded objects (I am not sure exactly what type to use yet), but they would be lit and would have collision detection, so as to make correct splashes on the ground, ripples on water, etc., and to prevent the particles from falling through into undercover areas. A little further away, the particles would be rendered as streaks, and at some point in the distance collision detection would stop entirely. I would want this to run alongside a decently detailed animated scene with stencil shadowing (soft shadows would be nice!), water shaders, reflections from wet surfaces, and at least bump mapping. The target system would be quite high end by today's standards (i.e. Core 2 Duo, NVIDIA 8800, 2 GB RAM; it's a project, not a commercial application, and as long as it runs on my system it is fine). So, I was wondering whether such a project is feasible: would it run properly in real time with a sufficient number of particles to simulate heavy rainfall? Obviously how it runs depends on how well it is optimized, but I was hoping someone here could tell me whether it is possible at all, since I have not really seen anyone implement proper rain in real time before. Most implementations seem to just use a scrolling texture, so you cannot transition properly between indoor and outdoor areas. Thanks.
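The two-level scheme described above (full particles with collision near the camera, streaks in a middle band, nothing per-particle beyond that) reduces to a per-particle classification. A minimal sketch with placeholder distances that would need tuning per scene:

```cpp
#include <cassert>

// Distance-banded level of detail for rain particles: full geometry
// plus collision near the camera, cheap streaks in a middle band,
// and no per-particle work beyond that (where a scrolling-texture
// layer could take over). The band distances are placeholders.
enum class RainLod { MeshWithCollision, Streak, Culled };

RainLod classifyRainParticle(float distanceToCamera)
{
    const float collisionRange = 15.0f;  // placeholder, tune per scene
    const float streakRange    = 60.0f;  // placeholder, tune per scene
    if (distanceToCamera < collisionRange) return RainLod::MeshWithCollision;
    if (distanceToCamera < streakRange)    return RainLod::Streak;
    return RainLod::Culled;
}
```

Since only the particles in the innermost band pay for collision detection, the total cost is bounded by the band radii rather than by the overall particle count.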