
3D Why is the transform between normalized device and viewport coordinates different between OpenGL and Direct3D?

Recommended Posts

According to https://en.wikibooks.org/wiki/GLSL_Programming/Vertex_Transformations and https://www.khronos.org/registry/OpenGL-Refpages/es2.0/xhtml/glViewport.xml, the relation between normalized device coordinates (NDC) and viewport coordinates in OpenGL is

y_vp = (y_ndc + 1) * height / 2

For Direct3D, however, section 16.1 of "Introduction to 3D Game Programming with DirectX 11" by F. D. Luna (https://www.amazon.com/Introduction-3D-Game-Programming-DirectX/dp/1936420228) gives the relation as

y_vp = (1 - y_ndc) * height / 2
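
To make the difference concrete, here is a small C sketch (my own test code with my own helper names, not taken from either reference) that evaluates both formulas side by side:

```c
#include <stdio.h>

/* OpenGL mapping, from the wikibooks/Khronos references above. */
static float gl_viewport_y(float y_ndc, float height)
{
    return (y_ndc + 1.0f) * height / 2.0f;
}

/* Direct3D mapping, from Luna, section 16.1. */
static float d3d_viewport_y(float y_ndc, float height)
{
    return (1.0f - y_ndc) * height / 2.0f;
}

int main(void)
{
    const float height = 600.0f;
    const float samples[] = { -1.0f, 0.0f, 1.0f };
    for (int i = 0; i < (int)(sizeof samples / sizeof samples[0]); ++i) {
        printf("y_ndc = %+.1f -> GL y_vp = %6.1f, D3D y_vp = %6.1f\n",
               samples[i],
               gl_viewport_y(samples[i], height),
               d3d_viewport_y(samples[i], height));
    }
    return 0;
}
```

With height = 600, the point y_ndc = +1 lands at 600 under the OpenGL formula but at 0 under the Direct3D one, so the two mappings are vertical flips of each other.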

 

I could not figure out the reason for this difference. Could someone kindly explain it?

On 11/02/2018 at 1:30 AM, leonard2012 said:

I could not figure out the reason for this difference. Could someone kindly explain it?

There are endlessly many mathematical conventions for these things, and most of them are equally valid, arbitrary choices. Historically, D3D and GL have disagreed about which way is up (do coordinates start at the top of the screen and increase downward, or start at the bottom and increase upward?), and, back in the early versions, about whether you should write vectors across the page or down the page... Most of the time you should be able to use whatever conventions you like at the application level, and then do a bit of mathematical fiddling to translate them into D3D's/GL's conventions. For example, you could choose to have viewports defined from the top corner or the bottom corner, have Y be up or Z be up, use right-handed or left-handed coordinates, have NDC Z=1 be the near plane or the far plane, use row vectors or column vectors, store 2D arrays in row-major or column-major order, and so on.
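
As a concrete example of that "mathematical fiddling": if your application standardizes on one Y convention, crossing over to the other API's convention is a single flip. A minimal sketch in C (hypothetical helper names, assuming you track the viewport height yourself):

```c
/* Convert a vertical window coordinate between the bottom-up (GL)
 * and top-down (D3D) conventions. The conversion is its own inverse,
 * so the same helper works in either direction. */
static float flip_window_y(float y, float viewport_height)
{
    return viewport_height - y;
}

/* Alternatively, flip in NDC before the viewport transform: negating
 * y_ndc turns the GL mapping into the D3D one, because
 * (-y_ndc + 1) * h / 2 == (1 - y_ndc) * h / 2. */
static float flip_ndc_y(float y_ndc)
{
    return -y_ndc;
}
```

In practice the NDC-side flip is usually folded into the projection matrix by negating its second row, so the rest of the pipeline never has to care which API it is running on.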

Their NDC definitions also differ in Z: D3D uses a 0-to-1 range, while GL uses -1 to 1 (by default; GL 4.5's glClipControl lets you change it). This is one case where there is an objective reason for the choice, though. GL's convention has a nice symmetry to it, but D3D's offers precision benefits due to how floating-point numbers work: representable values are densest near zero, which allows much better z-buffer precision.
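
As a sketch of what porting between the two Z conventions involves (remap_depth_range is my own hypothetical helper, and the column-major layout is an assumption matching GL's default): a GL-style clip-space z in [-1, 1] can be remapped to D3D's [0, 1] by folding z' = 0.5 * z + 0.5 * w into the projection matrix:

```c
/* Premultiply a GL-style projection matrix (clip z in [-1, 1]) by the
 * fix-up that yields a D3D-style range (clip z in [0, 1]). Only the z
 * row changes: z' = 0.5 * z + 0.5 * w.
 * Assumes column-major storage (GL's default): m[col * 4 + row]. */
static void remap_depth_range(float m[16])
{
    for (int col = 0; col < 4; ++col) {
        const float z = m[col * 4 + 2]; /* z component of this column */
        const float w = m[col * 4 + 3]; /* w component of this column */
        m[col * 4 + 2] = 0.5f * z + 0.5f * w;
    }
}
```

Going the other way, GL 4.5 (or ARB_clip_control) lets you adopt the D3D conventions directly with glClipControl(GL_UPPER_LEFT, GL_ZERO_TO_ONE); the popular reversed-Z technique builds on the 0-to-1 range to get the precision benefit described above.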



