Why normalized device coordinates?

13 comments, last by alvaro 9 years, 8 months ago


Did you try to use Google to find an answer to your first two questions? Here's the first link I found: http://stackoverflow.com/questions/15693231/normalized-device-coordinates

I don't understand how ints could be used instead of floats in DirectX. Floats have a huge range and similar relative error across that range. If you were to use ints, how would you encode the normalized vector (0.707107f, 0.707107f, 0)?
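For a concrete point of comparison, here's a rough fixed-point sketch (the encode/decode names are made up): a signed 16-bit integer can hold values in [-1, 1], but with a fixed absolute step of about 1/32767, instead of float's roughly constant relative error.

#include <cstdint>
#include <cmath>

// Hypothetical snorm16-style encoding: map [-1, 1] to [-32767, 32767].
int16_t encode_snorm16(float v) {
    return static_cast<int16_t>(std::lround(v * 32767.0f));
}

float decode_snorm16(int16_t i) {
    return static_cast<float>(i) / 32767.0f;
}

// encode_snorm16(0.707107f) == 23170, which decodes back to ~0.70711,
// so the vector survives; but anything outside [-1, 1] cannot be stored,
// and small values keep far fewer significant digits than a float would.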

 

About the "why" part, it's because it makes a lot of things substantially easier, not only conceptually but also the involved math. Plus, it is something that you kind of have to do anyway in order to bring everything on screen, but by normalizing (instead of projecting into, say, a 1024x768x1 coordinate system) you do not have to bother about the actual resolution any more.

As an example of where it makes your life easier, take clipping. Finding out whether something is inside your viewing frustum is quite non-trivial work. Not that it can't be done, but it's 6 plane tests, which is considerable work compared to...

... having transformed everything to NDC, anything in [-1, +1] is inside, the rest is not.
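In code, the NDC version is just a handful of comparisons (a minimal sketch, assuming GL-style NDC with [-1, 1] on all three axes):

// True if a point in NDC lies inside the view volume.
bool inside_ndc(float x, float y, float z) {
    return x >= -1.0f && x <= 1.0f
        && y >= -1.0f && y <= 1.0f
        && z >= -1.0f && z <= 1.0f;
}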

You can convert coordinate by coordinate. If you want something that maps [-1,1] to [0,Width] linearly,

pixel_x = (ndc_x + 1) * Width / 2

Similarly,

pixel_y = (ndc_y + 1) * Height / 2

[EDIT: You may have to put a `-' in front of `ndc_y', depending on the conventions you are using.]

If you don't know how to deduce the inverse formulas, you should probably stop what you are doing and learn some basic algebra.
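Spelled out as code, with the inverse formulas included (the helper names are made up; same no-flip convention as above):

// NDC [-1, 1] to pixel coordinates [0, Width] / [0, Height].
float ndc_to_pixel_x(float ndc_x, float width)  { return (ndc_x + 1.0f) * width  / 2.0f; }
float ndc_to_pixel_y(float ndc_y, float height) { return (ndc_y + 1.0f) * height / 2.0f; }

// Inverses: solve pixel = (ndc + 1) * size / 2 for ndc.
float pixel_to_ndc_x(float px, float width)  { return 2.0f * px / width  - 1.0f; }
float pixel_to_ndc_y(float py, float height) { return 2.0f * py / height - 1.0f; }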
It's how the pipeline works. You transform vertices into NDC with some simple matrices that work independently of resolution. Screen space transformation has to happen after transformation to NDC. If you want to work in screen space, nothing stops you from doing so.

You can also render independently of resolution. For most rendering needs, the final resolution is irrelevant. You might render smaller or bigger than the real resolution for a variety of reasons. You can also render at different aspect ratios, render to viewports smaller than the actual screen, etc.

There's also supersampling and so on which cause rendering to happen at a different "resolution" than screen resolution.

I'd suggest picking up a copy of Frank Luna's D3D11 book. It explains the math, API, and basic techniques you need to get started as a graphics programmer.

Sean Middleditch – Game Systems Engineer – Join my team!


 
I chose [-1,1] instead of [0,1] because that's the typical range of NDC. If I remember correctly, OpenGL uses [-1,1] for all three axes, and DirectX uses [-1,1] for x and y and [0,1] for z.
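One practical consequence: moving a depth value between the two conventions is just a linear remap (a sketch; the function names are made up):

// OpenGL NDC depth [-1, 1] <-> Direct3D NDC depth [0, 1].
float gl_z_to_d3d_z(float z) { return (z + 1.0f) / 2.0f; }
float d3d_z_to_gl_z(float z) { return z * 2.0f - 1.0f; }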


As an example of where it makes your life easier, take clipping. Finding out whether something is inside your viewing frustum is quite non-trivial work. Not that it can't be done, but it's 6 plane tests, which is considerable work compared to...
... having transformed everything to NDC, anything in [-1, +1] is inside, the rest is not.

Minor nitpick, but: you don't have to be in NDC to do simple clipping. After the projection transformation but before the perspective divide, you can simply clip everything to [-w, w] (which, after the perspective divide, becomes [-1, 1]). And clipping before going to NDC means you can still linearly interpolate vertex attributes when creating the new vertices.
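As a sketch of that test (assuming GL conventions where all three axes clip against w; for D3D the z test would be 0 <= z <= w):

// True if a clip-space vertex (x, y, z, w), straight out of the
// projection transform, lies inside the view volume.
bool inside_clip(float x, float y, float z, float w) {
    return -w <= x && x <= w
        && -w <= y && y <= w
        && -w <= z && z <= w;
}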

Also, I feel a kinship with you because our (case insensitive) handles only differ by one character :P


 

This topic is closed to new replies.
