Collision detection and response

15 comments, last by Aardvajk 8 years, 10 months ago
As has been pointed out, it would be highly unusual for a system to use the same data for rendering and for physics.

If you are planning to do all of your collision detection based on triangles, it is still highly probable that you'll want a simpler set of triangles for collision than the set you use for rendering.

But most rigid body physics systems won't use triangle meshes as the primary physics objects. Most define shapes like boxes, cones, spheres and so on which are defined implicitly and not represented as a mesh in any way at all.
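As a sketch of what "defined implicitly" means, here is a minimal example in Python with made-up `Sphere` and `Box` classes (these names are illustrative, not from any particular engine): the collision test works purely from a centre, a radius, and two corners, with no triangle mesh anywhere.

```python
from dataclasses import dataclass

# Hypothetical implicit shapes: each is fully described by a few parameters,
# never by a list of triangles.
@dataclass
class Sphere:
    center: tuple   # (x, y, z)
    radius: float

@dataclass
class Box:          # axis-aligned, for brevity
    min_corner: tuple
    max_corner: tuple

def sphere_vs_box(s: Sphere, b: Box) -> bool:
    # Clamp the sphere centre to the box to find the closest point on the box,
    # then compare the squared distance against the squared radius.
    closest = [min(max(c, lo), hi)
               for c, lo, hi in zip(s.center, b.min_corner, b.max_corner)]
    dist_sq = sum((c - p) ** 2 for c, p in zip(s.center, closest))
    return dist_sq <= s.radius ** 2
```

Note how cheap this is compared with testing against even a handful of triangles; that is a big part of why implicit shapes are the default primitives in rigid body systems.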

It is really the job of the content creation pipeline to create and manage the level data in terms of renderables and physics shapes and keep them in sync. Most systems that actually use this data would treat the two as separate things.

I don't know what "subdivided convex hulls" are, but do you mean that your physics objects are always approximations?

Consider this awesome scene from WoW:

[Screenshot: wowscrnshot_011112_172555.jpg]

In the game, the bent tree will be totally climbable and 100% accurate. Are you telling me that in your engine it's not gonna be the same?

In the game, the bent tree will be totally climbable and 100% accurate. Are you telling me that in your engine it's not gonna be the same?

Yes. In my game the tree would be represented by some polyhedra that form a similar shape to the mesh.

It is quite possible WoW is generating its physics data from the graphics meshes in some way, but it is still very likely that the stored data for the physics will look very different to the data for the renderable mesh.

How do you "compress" mesh data into a more friendly set of vertices? Is it something you do (automatically?) at run-time, or is it embedded in the mesh file?

There's no one particular way this is done. It is quite possible WoW is using the same vertices for defining collision planes as for rendering the world; I don't want to give the impression I'm claiming I know different.

Personally, because my physics is based on convex solids rather than meshes, I have a facility to add shapes in my level editor; I then generate both a list of shapes and a set of renderable meshes, which are stored separately in the level file.
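A toy version of that editor step, assuming Python and an axis-aligned box (function name and representation are invented for illustration): the physics file keeps only the two corners, while the exporter expands them into the 8 vertices and 12 triangles the renderer wants.

```python
def box_to_render_mesh(min_c, max_c):
    # xs, ys, zs each become a (min, max) pair.
    xs, ys, zs = zip(min_c, max_c)
    # Vertex index = 4*x_bit + 2*y_bit + z_bit.
    verts = [(x, y, z) for x in xs for y in ys for z in zs]
    # One quad per face, split into two triangles; winding order and normals
    # are omitted here, a real exporter would fix both up per face.
    quads = [(0, 1, 3, 2), (4, 6, 7, 5), (0, 4, 5, 1),
             (2, 3, 7, 6), (0, 2, 6, 4), (1, 5, 7, 3)]
    tris = [t for a, b, c, d in quads for t in ((a, b, c), (a, c, d))]
    return verts, tris
```

The key point is the direction of the data flow: the implicit shape is authoritative, and the render mesh is derived from it at export time, so the two can never drift out of sync.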

I would imagine something like WoW is using a height map for the ground and generating triangles to render based on this data, but things like trees are probably separate meshes that also have some kind of physical shape associated with them in the content creation pipeline, though I'm only speculating.

I don't know what are "subdivided convex hulls", but do you mean that your physics objects are always approximations?

smaller convex sets -> better mesh approximation -> better collision culling -> fewer instructions
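The "better collision culling" step in that chain is usually a broad phase: give each convex piece a cheap bounding box, and only run the expensive narrow-phase test on pieces whose boxes overlap the query object. A minimal sketch, assuming Python and boxes stored as (min_corner, max_corner) tuples (my own representation):

```python
def aabb_overlap(a, b):
    # a and b are (min_corner, max_corner); overlap on all three axes.
    return all(a[0][i] <= b[1][i] and b[0][i] <= a[1][i] for i in range(3))

def pieces_to_test(convex_piece_aabbs, query_aabb):
    # Broad phase: cheap AABB rejects mean the expensive convex-vs-convex
    # test only runs on the few pieces actually near the query object.
    return [i for i, box in enumerate(convex_piece_aabbs)
            if aabb_overlap(box, query_aabb)]
```

With many small convex pieces, most are rejected here in a handful of comparisons each, which is where the "fewer instructions" in the chain above comes from.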

By the way, if you search the web, I believe people have reverse engineered the WoW level files and posted the description of their contents. Should be possible to identify exactly how this stuff is implemented there if you are curious.

