
EbonySeraph

RWH Component


What the hell is it? I mean everything about it. I do know about the transformation matrices and what they do. But what does RWH have to do with a vertex? I always hear something general like "it's where the vertex is after it's transformed" or something of the sort. My question is, if it's a new position, why is it only one float? Someone please shed some light. I have no clue what RWH is.

"Ogun's Laughter Is No Joke!!!" - Ogun Kills On The Right, A Nigerian Poem.

RHW, not RWH...

Reciprocal of Homogeneous W

In 3D, to perform projections and translations with matrices, you use what are known as homogeneous coordinates. The points gain a fourth component, W, and the matrices have 4 columns rather than 3 (or rows, depending on how you view matrices).
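As a minimal sketch of why the extra W is useful (the Vec4/Mat4 types and the Transform function below are made up for illustration, they are not D3D types), a 4x4 matrix applied to a point with W = 1 can encode translation as well as rotation and scale:

// Hypothetical types, for illustration only
struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };   // row vectors, point * matrix convention

Vec4 Transform(const Vec4& v, const Mat4& M)
{
    Vec4 r;
    r.x = v.x*M.m[0][0] + v.y*M.m[1][0] + v.z*M.m[2][0] + v.w*M.m[3][0];
    r.y = v.x*M.m[0][1] + v.y*M.m[1][1] + v.z*M.m[2][1] + v.w*M.m[3][1];
    r.z = v.x*M.m[0][2] + v.y*M.m[1][2] + v.z*M.m[2][2] + v.w*M.m[3][2];
    r.w = v.x*M.m[0][3] + v.y*M.m[1][3] + v.z*M.m[2][3] + v.w*M.m[3][3];
    return r;
}

// A translation matrix only moves the point because w = 1 picks up
// the bottom row (tx, ty, tz):
// | 1  0  0  0 |
// | 0  1  0  0 |
// | 0  0  1  0 |
// | tx ty tz 1 |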

To transform coordinates, the process D3D follows is similar to the following:

1.
// the 3D input point (this could be the x,y,z of a vertex)
// this is in "model" or "object" space
VECTOR3 myInputPoint = {-10.5f, 286.652f, 199.0f};

2.
// since we'll be using homogeneous space, add a W coordinate
// to the vertex and make it 4D
VECTOR4 myPoint;
myPoint.x = myInputPoint.x;
myPoint.y = myInputPoint.y;
myPoint.z = myInputPoint.z;
myPoint.w = 1.0f;

3.
// multiply the point by the world matrix (D3DTS_WORLD here is shorthand
// for the matrix you set with SetTransform(D3DTS_WORLD, ...))
// (this moves the point from model space into world space)
VECTOR4 myPointInWorldSpace = myPoint * D3DTS_WORLD;

4.
// multiply the point by the view (camera) matrix
// (this moves the point from world space into camera space)
VECTOR4 myPointInCameraSpace = myPointInWorldSpace * D3DTS_VIEW;

5.
// multiply the point by the projection matrix
// this moves the point into _almost_ screenspace
VECTOR4 myFinalPoint = myPointInCameraSpace * D3DTS_PROJECTION;

6.
// a perspective projection from 3D to 2D requires a division by depth
// (so that things which are further away from the viewer appear smaller)
// Matrices can only multiply and add, so the projection matrix copies the
// camera-space Z into W and the divide is done by W after the multiply

float screen_x = myFinalPoint.x / myFinalPoint.w;
float screen_y = myFinalPoint.y / myFinalPoint.w;


7.
A multiply can be faster than a divide, and multiplying by the reciprocal gives the same result (for these purposes), which is where the Reciprocal of Homogeneous W gets its name:

float rhw = 1.0f / myFinalPoint.w;
float screen_x = myFinalPoint.x * rhw;
float screen_y = myFinalPoint.y * rhw;
float z = myFinalPoint.z * rhw;

8.
After this, the viewport transform is applied to convert the point into device coordinates (2D pixel positions); a rough sketch of that mapping follows. [I've deliberately left out a few steps such as lighting, concatenation of transforms, clipping etc. since the question is purely about RHW.]
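A minimal sketch of that viewport step, assuming a viewport at (0,0) of width x height pixels and D3D conventions (x and y in [-1,1] after the divide, +Y up); the function name is invented for illustration:

// Hypothetical helper: map post-divide coordinates to pixel positions
void ViewportTransform(float x, float y, int width, int height,
                       float* pixelX, float* pixelY)
{
    *pixelX = (x + 1.0f) * 0.5f * (float)width;   // -1..+1  ->  0..width
    *pixelY = (1.0f - y) * 0.5f * (float)height;  // +1 maps to the top of the screen
}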


9. The screen_x, screen_y, z and RHW get passed down to the rasteriser for each vertex.

The screen_x and screen_y tell the rasteriser where that vertex appears on the screen (its 2D address).

The z gets interpolated across the polygon and tested against and written to the Z buffer.

Finally, the RHW is interpolated across the polygon and is used for perspective-correct texturing, vertex shading and fogging.
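To see why the rasteriser wants RHW rather than W, here is a sketch of the standard perspective-correct interpolation trick (the function and parameter names are invented for illustration, this is not D3D-specific code): quantities like u/w and 1/w are linear in screen space, so you interpolate u*rhw and rhw linearly and divide at each pixel to recover the true texture coordinate.

// Perspective-correct interpolation of a texture coordinate between two
// vertices. u0,u1 are the texture coords, rhw0,rhw1 the RHW values at the
// endpoints, t is the screen-space interpolation factor (0..1).
float PerspectiveCorrectU(float u0, float rhw0, float u1, float rhw1, float t)
{
    float uOverW   = u0*rhw0 + t * (u1*rhw1 - u0*rhw0);  // linear in screen space
    float oneOverW = rhw0    + t * (rhw1    - rhw0);     // linear in screen space

    // divide back out to get the perspective-correct texture coordinate
    return uOverW / oneOverW;
}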

--
Simon O'Connor
Creative Asylum Ltd
www.creative-asylum.com
