The point of transformation functions?


Okay, I'm working on some 3D game programming (C++/OpenGL). I'm at the point where I can show some terrain and basic 3D models (I can move them around, etc.). So far, I have been using the OpenGL transformation functions to position and orient my meshes.

Now I am trying to do some things other than just display the models (mainly collision detection). The problem is that if I use the OpenGL functions to position and orient my meshes, how can I tell where the triangles are so that I can do collision detection and such? Does this mean that I have to write my own transformation functions, use them to move my meshes around, then do collision detection, then send them to OpenGL to render?

In other words, I am currently doing this:

Load mesh -> push OpenGL matrix -> position mesh -> pop OpenGL matrix

I am wondering if I have to do this instead (which seems like a waste of time):

Load mesh -> manually transform mesh using my own functions -> send to OpenGL without any matrices (other than camera, etc.)

I hope I haven't been too confusing. This is hard for me to explain. Any feedback is appreciated!

Yes, for collision detection and other non-rendering tasks that require it, you have to transform the geometry yourself. However, that's no reason to transform all the geometry on the CPU and send the pre-transformed geometry to OpenGL. For collision detection purposes you rarely need to transform all the geometry of an object: early outs, bounding volumes, and broad-phase culling should make per-triangle tests (if they are required at all) relatively rare events.

Let OpenGL transform the geometry for rendering, and then do whatever other transformation is required (which should be minimal under normal circumstances) yourself.
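For example, a cheap bounding-sphere check can reject most object pairs before any per-triangle work, and the sphere centers are just the positions you already pass to glTranslatef. A minimal sketch (the Vec3 struct and function names are made up for illustration):

struct Vec3 { float x, y, z; };

// Squared distance between two points; no sqrt needed for a yes/no test.
static float distanceSquared(const Vec3& a, const Vec3& b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Broad-phase early out: only if the bounding spheres overlap do you go on
// to the (much more expensive) per-triangle narrow phase.
bool spheresOverlap(const Vec3& centerA, float radiusA,
                    const Vec3& centerB, float radiusB)
{
    float r = radiusA + radiusB;
    return distanceSquared(centerA, centerB) <= r * r;
}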

You don't usually do collision detection directly on the render meshes. But if you did, you'd probably do this:


For rendering:  Load mesh -> push OpenGL matrix -> position mesh -> pop OpenGL matrix
For collision:  Manually transform the mesh using your own functions -> do collision detection

This is to limit the amount of data you need to transfer to the GPU each frame.
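To make the split concrete, here is a rough sketch of the two paths using the legacy fixed-function pipeline; the Mesh struct, the yaw-only rotation, and the helper names are assumptions for the example, not a prescription:

#include <GL/gl.h>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

struct Mesh
{
    std::vector<Vec3> vertices;        // model-space vertices, loaded once
    Vec3 position{0.0f, 0.0f, 0.0f};   // world-space translation
    float yawDegrees = 0.0f;           // rotation about the Y axis
};

// Rendering path: let OpenGL apply the transform; the vertex data never changes.
void renderMesh(const Mesh& mesh)
{
    glPushMatrix();
    glTranslatef(mesh.position.x, mesh.position.y, mesh.position.z);
    glRotatef(mesh.yawDegrees, 0.0f, 1.0f, 0.0f);

    glBegin(GL_TRIANGLES);
    for (const Vec3& v : mesh.vertices)
        glVertex3f(v.x, v.y, v.z);
    glEnd();

    glPopMatrix();
}

// Collision path: apply the same transform yourself, only when a test needs it,
// to get world-space vertices for the intersection routines.
std::vector<Vec3> worldSpaceVertices(const Mesh& mesh)
{
    const float rad = mesh.yawDegrees * 3.14159265f / 180.0f;
    const float c = std::cos(rad), s = std::sin(rad);

    std::vector<Vec3> out;
    out.reserve(mesh.vertices.size());
    for (const Vec3& v : mesh.vertices)
    {
        // Rotate about Y, then translate (the same order the calls above produce).
        Vec3 r{ c * v.x + s * v.z, v.y, -s * v.x + c * v.z };
        out.push_back({ r.x + mesh.position.x,
                        r.y + mesh.position.y,
                        r.z + mesh.position.z });
    }
    return out;
}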

Thanks for the fast replies, guys!

I just wanted to be sure that there wasn't a better way than to do my own transformations.

To put this more into context, I'm working on something with a map and objects (drivers, powerups, etc).

I understand the necessary intersection tests, and I understand how to draw the graphics using OpenGL. I'm just trying to "mash" the two together so that I can start working on a simple engine, or at least some experiments.

After working out the basic intersection tests, I figured I would precompute normals for the map. Then I realized that once I position and orient the mesh using OpenGL, the normals won't be valid anymore.

Now that I think about it more, the map will not be moving in the world. The geometry will only move when I apply the camera transformations, and that won't matter in the context of collision detection. All of the colliding objects will be simple bounding boxes or spheres.

So, I guess that I still can precompute the normals for the map mesh. Then the only geometry I will have to manually transform will be the bounding boxes. At that point, I will have world positions for the map triangles and the bounding boxes and I can do the intersection testing. Does that sound correct?
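For example, the kind of first test I have in mind for a bounding sphere against one of the static map triangles (with its precomputed unit normal) would look roughly like this; it's just a sketch with made-up names, and a complete test would also handle the triangle's edges and corners:

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  subtract(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(const Vec3& a, const Vec3& b)      { return a.x * b.x + a.y * b.y + a.z * b.z; }

// A map triangle in world space, with its unit normal precomputed at load time.
// The map never moves, so neither value ever has to be recomputed.
struct MapTriangle
{
    Vec3 v0, v1, v2;
    Vec3 normal;
};

// Plane rejection test: if the sphere center is farther from the triangle's
// plane than the radius, the sphere cannot touch the triangle at all.
bool sphereNearTrianglePlane(const Vec3& center, float radius, const MapTriangle& tri)
{
    float distanceToPlane = dot(tri.normal, subtract(center, tri.v0));
    return std::fabs(distanceToPlane) <= radius;
}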


Usually models have separate collision meshes, which are simplified versions of the actual model. Of course, the same mesh can also serve as both the collision mesh and the renderable mesh.

Idea:

You need a matrix to transform the mesh from model space to world space.
The inverse of that matrix will transform any point from world space back to model space.

So, instead of performing intersection tests in world space, you can do them in model space. This is beneficial at least when you have lots of instances of the same model (like in a forest). In practice, using this technique you don't need to transform any mesh to world space at all.

Of course, if you have, say, an indoor scene, it might be better to build the collision mesh in world space.
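As a rough sketch of that idea, assuming the model-to-world transform is rigid and stored as a 3x3 rotation plus a translation (all names made up), the inverse is cheap to apply to the query point, and every instance can then share one model-space collision mesh:

struct Vec3 { float x, y, z; };

// Rigid model-to-world transform: worldPoint = R * modelPoint + t.
// r[row][col] holds the rotation, t the translation.
struct RigidTransform
{
    float r[3][3];
    Vec3  t;
};

// World -> model: for a rigid transform the inverse is simply
// modelPoint = R^T * (worldPoint - t), so no general matrix inverse is needed.
Vec3 worldToModel(const RigidTransform& m, const Vec3& worldPoint)
{
    Vec3 d{ worldPoint.x - m.t.x, worldPoint.y - m.t.y, worldPoint.z - m.t.z };
    return {
        m.r[0][0] * d.x + m.r[1][0] * d.y + m.r[2][0] * d.z,   // row 0 of R^T
        m.r[0][1] * d.x + m.r[1][1] * d.y + m.r[2][1] * d.z,   // row 1 of R^T
        m.r[0][2] * d.x + m.r[1][2] * d.y + m.r[2][2] * d.z    // row 2 of R^T
    };
}

A sphere's radius is unchanged by a rigid transform, so a world-space bounding sphere can be tested against the model-space collision mesh just by moving its center like this.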

Quote:
Original post by Sneftel
Er... why wouldn't the normals be correct once you position and orient the map?


Because if I precompute the normals, then use OpenGL to transform the vertices, only OpenGL will know the true position of my vertices. So, I cannot compute new normals because I won't have the vertex positions.

The popular solution seems to be to have a separate, simple collision mesh and perform the transformations on it (and on its precomputed normals). That way, I'll have access to the post-transformation positions and normals.
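Something like this sketch is what I have in mind; the names are made up, and since my transforms are rigid (rotation plus translation) the normals only need the rotation part (a non-uniform scale would need the inverse-transpose instead):

#include <vector>

struct Vec3 { float x, y, z; };

struct CollisionMesh
{
    std::vector<Vec3> vertices;   // model space
    std::vector<Vec3> normals;    // model space, precomputed per face
};

// Apply a 3x3 rotation stored as r[row][col]: result = R * v.
static Vec3 rotate(const float r[3][3], const Vec3& v)
{
    return { r[0][0] * v.x + r[0][1] * v.y + r[0][2] * v.z,
             r[1][0] * v.x + r[1][1] * v.y + r[1][2] * v.z,
             r[2][0] * v.x + r[2][1] * v.y + r[2][2] * v.z };
}

// Bring the collision mesh into world space: positions get rotation + translation,
// normals get rotation only, because direction vectors ignore translation.
CollisionMesh toWorldSpace(const CollisionMesh& mesh,
                           const float rotation[3][3], const Vec3& translation)
{
    CollisionMesh out;
    out.vertices.reserve(mesh.vertices.size());
    out.normals.reserve(mesh.normals.size());

    for (const Vec3& v : mesh.vertices)
    {
        Vec3 r = rotate(rotation, v);
        out.vertices.push_back({ r.x + translation.x,
                                 r.y + translation.y,
                                 r.z + translation.z });
    }
    for (const Vec3& n : mesh.normals)
        out.normals.push_back(rotate(rotation, n));

    return out;
}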

In many genres, collision is done by a simple geometric primitive (rectangular prism, sphere, cylinder, capsule, etc) so that practically no transformations need to be done.

If those aren't good enough, another approach is to use a simplified bounding hull, which works well if your models are mostly convex and is very easy to deal with, both to compute from the model and to intersect with other bounding hulls.

The final option, as others have noted, is generally a low-detail version of the mesh itself, which is more difficult to work with but allows collision detection to be as accurate as you want it to be.
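As an example of the first option, an axis-aligned box test is just three interval comparisons and needs nothing but each object's world-space extents (struct and function names are made up):

// Axis-aligned bounding box given by its minimum and maximum corners in world space.
struct Aabb
{
    float minX, minY, minZ;
    float maxX, maxY, maxZ;
};

// Two AABBs overlap only if their extents overlap on all three axes.
bool aabbOverlap(const Aabb& a, const Aabb& b)
{
    return a.minX <= b.maxX && b.minX <= a.maxX &&
           a.minY <= b.maxY && b.minY <= a.maxY &&
           a.minZ <= b.maxZ && b.minZ <= a.maxZ;
}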

Quote:
Original post by andyic3
Quote:
Original post by Sneftel
Er... why wouldn't the normals be correct once you position and orient the map?


Because if I precompute the normals, then use OpenGL to transform the vertices, only OpenGL will know the true position of my vertices. So, I cannot compute new normals because I won't have the vertex positions.

If that were the case, then you wouldn't be able to tell OpenGL how to transform your geometry, or what it looks like. You typically know the vertices in a _local_ co-ordinate frame, and you know how the local frame is related to the global frame, so implicitly you also know the "true" position (which I assume means the _global_ position).

In the given application of testing a simple bounding volume against a height map, if you really want to test collision against the map itself, I suggest checking whether you can do it not in the global frame but in the local frame of the height map. Then you only have to transform the simple volume, not the entire map. In many cases that is much more efficient.
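A rough sketch of that for a sphere against a regular height map, with the sphere already transformed into the map's local frame by the inverse of the map's matrix; all names are made up, and it only does a nearest-sample lookup rather than proper interpolation:

#include <cstddef>
#include <vector>

// Regular grid of heights in the map's local frame: one sample per (x, z) cell.
struct HeightMap
{
    std::size_t width = 0, depth = 0;
    float cellSize = 1.0f;           // spacing between samples along X and Z
    std::vector<float> heights;      // width * depth samples, row-major

    float heightAt(std::size_t ix, std::size_t iz) const
    {
        return heights[iz * width + ix];
    }
};

// Sphere-vs-terrain test in the map's local frame. Nearest-sample only; a real
// implementation would interpolate between the surrounding samples.
bool sphereTouchesHeightMap(const HeightMap& map,
                            float cx, float cy, float cz, float radius)
{
    // Map local X/Z to the nearest grid cell, rejecting points off the map.
    if (cx < 0.0f || cz < 0.0f)
        return false;
    std::size_t ix = static_cast<std::size_t>(cx / map.cellSize + 0.5f);
    std::size_t iz = static_cast<std::size_t>(cz / map.cellSize + 0.5f);
    if (ix >= map.width || iz >= map.depth)
        return false;

    // The sphere touches the ground if its lowest point dips below the terrain.
    return (cy - radius) <= map.heightAt(ix, iz);
}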

