4 Questions

1. What is the difference between D3DXVec3Transform, D3DXVec3TransformCoord and D3DXVec3TransformNormal, and why does the first take a D3DXVECTOR4? What is the w component used for in this vector struct? What are the other two functions for?

2. What's an affine matrix? The last time I heard "affine" mentioned was in texture mapping - if I recall correctly it meant the opposite of perspective-correct texture mapping - but I don't know how that applies to matrices.

3. How is the chrome effect done in Quake 3? I was playing it yesterday when I noticed it. My guess is that it uses a secondary texture that moves differently - but how do you calculate this? A reference to an article on chrome effects would suffice.

4. How do you position multiple objects using different matrices - i.e. move one in one direction and rotate another, like all games are capable of doing? Does it involve changing the World or View matrices every frame, or must you change the vertices manually - i.e. GetVertexBuffer? I've been doing the latter for all my objects - is this slow, and how is it normally done?

These questions are a good example of the separation between the API (D3D/OpenGL/D3DX), the mathematics, and the techniques - and of why 3D programming is about much more than learning an API. This stuff isn't trivial; getting an understanding of the maths helps a lot.

I'll answer these out of order because the explanation of the first question sort of depends on the answer to one of the later questions.

4. Unless the vertices change independently of each other each frame, you should use the WORLD transform matrix to apply a transformation to all the vertices in one go. This is the usual way to move and orient "rigid" objects. The WORLD transformation matrix is better thought of as the "OBJECT into WORLD transformation matrix", i.e. it specifies how the object is placed into the world.
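To make that concrete, here's a minimal plain-C++ sketch (not using D3DX itself - the `Mat4`/`Vec3` types and helper functions are my own stand-ins) of what a per-object WORLD matrix does. In a real D3D app you wouldn't transform on the CPU like this; you'd build each object's matrix and hand it to `SetTransform(D3DTS_WORLD, ...)` before drawing that object.

```cpp
#include <cmath>

// Minimal 4x4 matrix, D3D row-vector convention: v' = v * M,
// with the translation in the fourth row.
struct Mat4 { float m[4][4]; };
struct Vec3 { float x, y, z; };

// World matrix that places an object at (tx, ty, tz).
Mat4 Translation(float tx, float ty, float tz) {
    Mat4 r = {{{1,0,0,0},{0,1,0,0},{0,0,1,0},{tx,ty,tz,1}}};
    return r;
}

// World matrix that rotates an object about the Z axis (angle in radians).
Mat4 RotationZ(float a) {
    float c = std::cos(a), s = std::sin(a);
    Mat4 r = {{{c,s,0,0},{-s,c,0,0},{0,0,1,0},{0,0,0,1}}};
    return r;
}

// Transform one object-space point (W assumed 1) into world space.
// This is what the pipeline does to every vertex for you.
Vec3 TransformPoint(const Vec3& v, const Mat4& M) {
    return {
        v.x*M.m[0][0] + v.y*M.m[1][0] + v.z*M.m[2][0] + M.m[3][0],
        v.x*M.m[0][1] + v.y*M.m[1][1] + v.z*M.m[2][1] + M.m[3][1],
        v.x*M.m[0][2] + v.y*M.m[1][2] + v.z*M.m[2][2] + M.m[3][2],
    };
}
```

Two objects can share the same vertex buffer: give one the `Translation` matrix and the other the `RotationZ` matrix, set each as the WORLD transform in turn, and the untouched vertex data ends up in two different places - no `GetVertexBuffer` needed.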

2. [simplified explanation which skips some stuff] Strictly speaking, an "affine" matrix is one built from linear transformations (rotations, scales, shears) plus translations - its fourth column is (0,0,0,1). What actually matters in the context of D3DXVec3TransformNormal is something stronger: the matrix should be made only from rotations, translations and uniform scales (x, y and z scaled by the same amount). If a transformation matrix scaled, say, x by 2, y by 1.5 and z by 8, it would still be affine, but it could no longer be used to transform normals directly (see (e) below).

1. Really this is two different questions which I'll answer in a few parts:

a) You can't describe all transformations with a 3x3 matrix. For example to be able to "project" in perspective, you need to be able to divide X and Y by the Z coordinate (so as Z increases, X and Y decrease). The common solution is to use what is known as "homogeneous" coordinates and matrices.
I'll spare you the maths details (beyond the scope of this post - you'll find plenty of reference material online), but homogeneous coordinates add a 4th component called W; you then transform this 4D (homogeneous) coordinate by a 4x4 matrix to produce a 4D result.
For the purposes of 3D transformations to convert from 3D to homogeneous you simply (usually) set W to 1 and X,Y,Z are the same as the 3D version.
To convert back you divide each component by W:

X' = X/W
Y' = Y/W
Z' = Z/W

Incidentally, W is known as "Homogeneous W" and 1/W would be the "Reciprocal of Homogeneous W", RHW for short (which you may note from such places as D3DFVF_XYZRHW).
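Here's the round trip in a small plain-C++ sketch (my own `Vec3`/`Vec4` types, not D3DX). The "projection" here is a deliberately toy one - it just copies Z into W - but it shows the essential perspective behaviour: after the divide, X and Y shrink as Z grows.

```cpp
struct Vec3 { float x, y, z; };
struct Vec4 { float x, y, z, w; };

// 3D -> homogeneous: simply set W to 1.
Vec4 ToHomogeneous(const Vec3& v) { return {v.x, v.y, v.z, 1.0f}; }

// Toy "projection": copy Z into W. After the divide, distant
// points (large Z) end up with smaller X and Y -- perspective.
Vec4 ToyProject(const Vec3& v) { return {v.x, v.y, v.z, v.z}; }

// Homogeneous -> 3D: the perspective (homogeneous) divide.
Vec3 PerspectiveDivide(const Vec4& v) {
    return {v.x / v.w, v.y / v.w, v.z / v.w};
}
```

For example, the point (2, 2, 4) projects to (0.5, 0.5, 1): twice as far away as (2, 2, 2), so half the screen-space size.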

b) The only place where you usually need a transformation requiring homogeneous space is the perspective projection, but since D3D (and other APIs) concatenate (matrix multiply) the WORLD, VIEW and PROJECTION matrices into a single matrix, the whole pipeline ends up working in homogeneous coordinates. The division above always takes place in the D3D (and other APIs') pipeline - it's known as the "perspective divide" or "homogeneous divide". Check out "Tutorial 3: Using Matrices" in the DirectX 8 SDK for more.

c) If you wanted to _manually_ transform the vertices (usually you don't - just set the transformation matrices and let D3D do the rest), you could use D3DXVec3Transform or D3DXVec3TransformCoord.

D3DXVec3Transform() transforms the vector (W=1) by the 4x4 matrix, but *does not* do the divide by W.

D3DXVec3TransformCoord() transforms the vector (and W=1) by the 4x4 matrix and *does* divide the X,Y,Z by W so the result is 3D again.
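The difference between the two is easiest to see in code. This is a plain-C++ sketch of what those D3DX functions compute (my own stand-in types and names, not the real library), assuming the D3D row-vector convention v' = v * M:

```cpp
struct Mat4 { float m[4][4]; };
struct Vec3 { float x, y, z; };
struct Vec4 { float x, y, z, w; };

// Like D3DXVec3Transform: treat v as (x, y, z, 1), multiply by M,
// and return the full 4D result -- NO divide by W.
Vec4 Vec3Transform(const Vec3& v, const Mat4& M) {
    Vec4 r;
    r.x = v.x*M.m[0][0] + v.y*M.m[1][0] + v.z*M.m[2][0] + M.m[3][0];
    r.y = v.x*M.m[0][1] + v.y*M.m[1][1] + v.z*M.m[2][1] + M.m[3][1];
    r.z = v.x*M.m[0][2] + v.y*M.m[1][2] + v.z*M.m[2][2] + M.m[3][2];
    r.w = v.x*M.m[0][3] + v.y*M.m[1][3] + v.z*M.m[2][3] + M.m[3][3];
    return r;
}

// Like D3DXVec3TransformCoord: same multiply, then divide X, Y, Z
// by W so the result is a 3D point again.
Vec3 Vec3TransformCoord(const Vec3& v, const Mat4& M) {
    Vec4 h = Vec3Transform(v, M);
    return {h.x / h.w, h.y / h.w, h.z / h.w};
}
```

With a projection-style matrix whose fourth column picks up Z, the first function hands you the 4D result with W still attached; the second finishes the job with the homogeneous divide.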

d) [D3D] Normals are special vectors which have a length of 1 and describe the orientation of something (rather than the position like vertex positions). For example a polygon normal points out from the polygon at a right angle, a normal for the top of a desk would point straight up. D3D (and other APIs) uses normals at each vertex to determine whether that vertex faces a light source and what the angle of incidence is.

e) If an object is rotated, you'd expect its normals to rotate too, so that the lighting is correct for the new orientation. However, since a normal only specifies a direction, you don't want it to be translated, and a non-uniform scale or shear in the matrix would skew its direction (i.e. screw it up). Since no translation should be applied - i.e. you really just transform by the top-left 3x3 of the matrix - there is a separate function for normals (again assuming you want to do it manually): D3DXVec3TransformNormal(). If the matrix contains non-uniform scale or shear, you should pass the inverse transpose of the object's matrix instead, then renormalise the result.
[Do a search for "Ken Shoemake" if you want to see the maths of why the inverse transpose is the right matrix.]
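In code, the normal transform is just the point transform with the translation row left out. This plain-C++ sketch (my own stand-in for D3DXVec3TransformNormal, same row-vector convention as above) makes that explicit:

```cpp
struct Mat4 { float m[4][4]; };
struct Vec3 { float x, y, z; };

// Like D3DXVec3TransformNormal: transform by the upper-left 3x3
// only, so any translation in the matrix is ignored. For matrices
// with non-uniform scale or shear, pass the inverse transpose of
// the object's matrix here and renormalise the result.
Vec3 Vec3TransformNormal(const Vec3& n, const Mat4& M) {
    return {
        n.x*M.m[0][0] + n.y*M.m[1][0] + n.z*M.m[2][0],
        n.x*M.m[0][1] + n.y*M.m[1][1] + n.z*M.m[2][1],
        n.x*M.m[0][2] + n.y*M.m[1][2] + n.z*M.m[2][2],
    };
}
```

So moving an object 5 units along X leaves its desk-top normal (0, 0, 1) untouched, while a rotation turns the normal along with the surface - exactly what the lighting needs.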

[EDIT: Ooops wrong Ken - do a search for "Ken Turkowski" instead. Search for Shoemake if you're interested in arcballs or quaternions ]

3. The key term is "Environment Mapping". Take a look at the samples in the SDK, the info in the documentation and on the web for more. Most types of envmapping look ok for chrome. Basically the effect is achieved by working out what direction a reflection off a surface would be at given a viewing direction and the surface normal. That reflection is then plugged into a texture coordinate transformation matrix and used to generate coordinates to look up the "chrome" map.
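The "working out what direction a reflection would be" step is the standard reflection formula: for a unit incident (viewing) direction I and unit surface normal N, the reflected direction is R = I - 2(N·I)N. A minimal sketch (my own types; the real lookup into the chrome map happens in the texture-coordinate stage, not shown here):

```cpp
struct Vec3 { float x, y, z; };

float Dot(const Vec3& a, const Vec3& b) {
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

// Reflect the unit incident direction I about the unit normal N:
// R = I - 2*(N . I)*N. R is what gets turned into envmap coordinates.
Vec3 Reflect(const Vec3& I, const Vec3& N) {
    float d = 2.0f * Dot(N, I);
    return {I.x - d*N.x, I.y - d*N.y, I.z - d*N.z};
}
```

For example, looking straight down at an upward-facing surface, I = (0, -1, 0) and N = (0, 1, 0) give R = (0, 1, 0): the reflection bounces straight back, so the chrome shows whatever the environment map holds directly "above".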

Simon O'Connor
Creative Asylum Ltd

[edited by - S1CA on November 7, 2002 9:35:33 AM]
