Anddos

Drawing a quad in normalized device coordinates


Can anyone explain what normalized device coordinates are? Is it the same idea as RHW for vertices when you set the FVF?

 

Here's a picture of the radar rendering:

http://i.imgur.com/Osq4Bml.png?1

// Viewport is entire texture.
    D3DVIEWPORT9 vp = { 0, 0, 256, 256, 0.0f, 1.0f };
    mRadarMap = new DrawableTex2D(256, 256, 0, D3DFMT_X8R8G8B8, true, D3DFMT_D24X8, vp, mAutoGenMips);
    HR(gd3dDevice->CreateVertexBuffer(6*sizeof(VertexPT), D3DUSAGE_WRITEONLY,
        0, D3DPOOL_MANAGED, &mRadarVB, 0));

    // Radar quad takes up quadrant IV.  Note that we specify coordinates directly
    // in normalized device coordinates, i.e., the world, view, and projection
    // matrices are all identity.
    VertexPT* v = 0;
    HR(mRadarVB->Lock(0, 0, (void**)&v, 0));
    v[0] = VertexPT(0.0f, 0.0f, 0.0f, 0.0f, 0.0f);
    v[1] = VertexPT(1.0f, 0.0f, 0.0f, 1.0f, 0.0f);
    v[2] = VertexPT(0.0f, -1.0f, 0.0f, 0.0f, 1.0f);
    v[3] = VertexPT(0.0f, -1.0f, 0.0f, 0.0f, 1.0f);
    v[4] = VertexPT(1.0f, 0.0f, 0.0f, 1.0f, 0.0f);
    v[5] = VertexPT(1.0f, -1.0f, 0.0f, 1.0f, 1.0f);
    HR(mRadarVB->Unlock());

Normalized device coordinates are set up such that (-1, -1) is the bottom left of the screen and (1, 1) is the top right. It's a little different from RHW, because with that you use pixel coordinates, where (0, 0) is the top left and (ScreenWidth, ScreenHeight) is the bottom right.


the rendering pipeline has several frames of reference, and a chain of transforms that translate between them:

object space (object local coords) -> world transform matrix -> world space (world coords) -> view transform matrix -> camera space (view coords) -> projection transform matrix -> clip space -> perspective divide -> normalized device coordinates -> viewport transform -> screen space (pixel coords).

they're defining a quad directly in normalized device coordinates, so with the world, view, and projection matrices all set to identity it always displays in the same spot on the screen, regardless of where the camera is.

 

ID3DXSprite is probably a better way to do it.
