World space coordinates

Started by Mort
4 comments, last by Mort 22 years, 8 months ago
I need to get my model space coordinates into world space coordinates, so that I can check for collision detection between the surfaces. Is there any better way of doing it, using DirectX 7, than to calculate all the coordinates by hand? I think it would be a waste of processing power to first calculate the coordinates by hand and then let DirectX calculate the very same coordinates, with the one difference of the projection matrix added to the equation. The only way I could think of was to use the ProcessVertices function, but unfortunately the projection matrix HAS to be applied to the vertices as well when that is used.
- Mort
I wouldn't mind an answer to that question myself!
I can't be the first one asking this question. Are there really no answers to this? What do the rest of you do for accurate collision detection?
- Mort
Please! I need an answer to this question!
Well, for you guys who also wanted an answer to this question, I guess you are as disappointed as I am.

It seems that since vertices are being processed using a single combined matrix, there is no way to acquire the world space coordinates without calculating them again.

Calculating the world space coordinates can be done by hand (let me know if you want to know how), or I guess you can load identity matrices as the view and projection matrices and then call ProcessVertices.
- Mort
You need to think about what D3D does on a hardware T&L device: it doesn't transform anything, and neither does the driver. Your world matrix, view matrix and projection matrix (plus/minus a few concatenations and inverses for some stuff) and the vertices get sent straight to the hardware.
Hardware T&L is ***one-way***: it takes stuff in and displays graphics out. On those devices (which *all* new cards available will be within a year) the post-transform vertices aren't available and never will be.
On top of that, the vertices are often transformed straight from object space into camera space (even with software vertex processing - it makes more sense, and not much in graphics actually *needs* world space). The final problem (going back to hardware) is that most new devices coming out have some form of tessellation available, so the number of vertices transformed on the card is much higher than the amount you submit over the bus.

Generally, for non-displayed geometry (used for physics [collision and its response], occlusion, 3D audio etc.) you don't want to be using your high detail models - as poly counts go through the roof, it's pretty pointless doing collision etc. on, say, an eyelash (one of our highest end next gen models has them!). You should be looking at ways to have a low poly character (say 1000 polys or less) used for collision while the high poly one is what is seen. Related to that, much of the data being pushed around the buses only applies to graphical output (texture coordinate sets, diffuse colours, texture space bases etc.).



--
Simon O'Connor
Creative Asylum Ltd
www.creative-asylum.com


