Danicco

Mapping OpenGL Coordinates to Screen Pixels


How can I map coordinates in an OpenGL window to the window's pixels?

 

For example, say I've created a 1920x1080 window and I want to draw a certain object exactly at pixel 360; how can I get the corresponding OpenGL coordinate value?

 

Also, is this even a feasible scenario? I'm coding the coordinate positioning of a bunch of objects and I want to add this option so I can, for example, do something like positioning a player's menu bar at pixel 10 of the screen, and his inventory icon at (width - 10).

 

Edit: Modern OpenGL, so I don't want to use glOrtho().

Edited by Danicco


You can construct your own ortho matrix and load it; then it's just a simple matter of multiplying the input position by the ortho matrix in your vertex shader.  The documentation page for glOrtho tells you how to construct the matrix (scroll down), or you can use a matrix library to do it for you.
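For example, a minimal sketch using GLM; the uniform name "uProjection", the function name, and the top-left pixel origin are just assumptions for illustration:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>
// (plus whatever GL loader you already use for the GL 2.0+ entry points)

void setPixelProjection(GLuint program, int screenWidth, int screenHeight)
{
    // Pixel-space ortho matrix: x runs 0..width left to right, y runs
    // 0..height top to bottom, so UI coordinates can be given directly in pixels.
    glm::mat4 projection = glm::ortho(0.0f, (float)screenWidth,   // left, right
                                      (float)screenHeight, 0.0f,  // bottom, top (flipped)
                                      -1.0f, 1.0f);               // near, far

    glUseProgram(program);
    glUniformMatrix4fv(glGetUniformLocation(program, "uProjection"),
                       1, GL_FALSE, glm::value_ptr(projection));
}

// In the vertex shader:  gl_Position = uProjection * vec4(positionInPixels, 0.0, 1.0);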


If you don't want to switch from a perspective projection to an orthographic projection for drawing 2D elements, you'll need to crunch the numbers in your projection matrix to generate a transform that converts screen coordinates to view space coordinates.  Note that it doesn't make a lot of sense to use a perspective transform to render 2D screen elements with pixel-perfect alignment.  If you are dead set on a perspective transform...

// Transforming a view space point (vx, vy, vz, 1) into clip space looks like this:
|sx  0  0  0||vx|   |sx * vx     |
| 0 sy  0  0||vy| = |sy * vy     |
| 0  0 sz tz||vz|   |sz * vz + tz|
| 0  0 -1  0|| 1|   |-vz         |

// The corresponding NDC values are computed as follows:
|nx|   | (sx * vx)      / -vz |
|ny| = | (sy * vy)      / -vz |
|nz|   | (sz * vz + tz) / -vz |

// You want to go from NDC (nx, ny, nz) to view space (vx, vy, vz)
// Fortunately this is pretty simple...
vz = -tz / (nz + sz)  <-- Evaluate this first!
vx = (-vz * nx) / sx
vy = (-vz * ny) / sy
// Note that (sx, sy, sz, tz) were all pulled from your perspective transform.

Converting a pixel coordinate to NDC is a simple matter of scaling and biasing one interval into another...

Range of x values in NDC:  [-1, 1]  (1 is the right of the screen and -1 is the left)

Range of y values in NDC:  [-1, 1]  (1 is the top of the screen and -1 is the bottom)

Range of x values in pixels: [0, ScreenWidth-1]

Range of y values in pixels: [0, ScreenHeight-1]

 

I'm sure you can figure that transform out yourself.
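If it helps, here is a rough sketch of that whole pixel -> NDC -> view space chain in one place (the function and struct names are made up; sx, sy, sz, tz are the entries pulled from your perspective matrix as laid out above):

// Rough sketch of the pixel -> NDC -> view-space conversion described above.
struct Vec3 { float x, y, z; };

Vec3 pixelToViewSpace(float px, float py,        // pixel coordinates, origin at the top-left
                      float nz,                  // target depth in NDC, e.g. -1 for the near plane
                      int screenWidth, int screenHeight,
                      float sx, float sy, float sz, float tz)
{
    // Pixel centre -> NDC: x maps [0, W-1] onto [-1, 1]; y is flipped because
    // pixel y grows downward while NDC y grows upward.
    float ndcX = (px + 0.5f) / screenWidth  * 2.0f - 1.0f;
    float ndcY = 1.0f - (py + 0.5f) / screenHeight * 2.0f;

    // Invert the perspective transform (the equations above).
    float vz = -tz / (nz + sz);      // evaluate this first
    float vx = (-vz * ndcX) / sx;
    float vy = (-vz * ndcY) / sy;
    return { vx, vy, vz };
}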

Edited by nonoptimalrobot


...forgot two things.

 

1) Here is how a perspective transform is constructed for OpenGL:  http://www.songho.ca/opengl/gl_projectionmatrix.html.  You will want to consult this while diagnosing bugs in your math.

 

2) The way UV coordinates are used to address pixels in a texture is not always the same as the way NDC values are used to address pixels on the screen.  I believe DirectX 11 finally made these two addressing modes consistent so the interval [-1, 1] addresses pixels on screen in the exact same way the interval [0, 1] addresses pixels in an identically sized texture.  I'm not sure what iteration of OpenGL fixed this inconsistency if it happened at all.  Search for "mapping texels to pixels" to figure out how to rectify the different addressing modes for whatever version of OpenGL you are using.


Make sure your glViewport() matches your screen resolution, then use glOrtho() to make the projection matrix fit the screen; after that, all your vertices will be in pixels.
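A minimal fixed-function sketch of that setup (this is legacy GL, so it doesn't apply to the core profile the original post asks about; screenWidth/screenHeight are assumed to hold the window size):

// Viewport matches the window; projection maps pixels 1:1.
glViewport(0, 0, screenWidth, screenHeight);

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, screenWidth,      // left, right
        screenHeight, 0.0,     // bottom, top (flipped so (0,0) is the top-left pixel)
        -1.0, 1.0);            // near, far

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// From here on, vertex coordinates are in pixels.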



The way UV coordinates are used to address pixels in a texture is not always the same as the way NDC values are used to address pixels on the screen.  I believe DirectX 11 finally made these two addressing modes consistent so the interval [-1, 1] addresses pixels on screen in the exact same way the interval [0, 1] addresses pixels in an identically sized texture.  I'm not sure what iteration of OpenGL fixed this inconsistency if it happened at all. 

In a sensible API (DX10 / DX11 / GL), texture coordinates and screen coordinates should work the same way, except that NDC is from [-1,1] and textures from [0,1]:

uv = ndc * 0.5 + 0.5;

pixel_Index = clamp( round(uv * num_Pixels - 0.5), 0, num_Pixels-1 );

 

On D3D9, the definition of pixel coordinates is stupidly shifted so that the centre of the top-left pixel lines up perfectly with the top-left edge of the screen. i.e. all the pixels are shifted by half a pixel in that direction, so you need:

uv = ndc * 0.5 + 0.5 + 0.5/num_Pixels;

 

GL's only stupidity in this regard is that z also ranges from -1 to +1, instead of from 0 to 1, which has no impact in this situation.
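Spelled out as code, roughly (the names are made up; the D3D9 half-pixel branch is only there for completeness):

#include <algorithm>
#include <cmath>

// NDC (-1..1) -> UV (0..1) -> pixel index, following the formulas above.
int ndcToPixelIndex(float ndc, int numPixels, bool d3d9HalfPixelShift = false)
{
    float uv = ndc * 0.5f + 0.5f;
    if (d3d9HalfPixelShift)                  // only needed on D3D9
        uv += 0.5f / numPixels;
    int index = (int)std::lround(uv * numPixels - 0.5f);
    return std::clamp(index, 0, numPixels - 1);
}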


Also, is this even a feasible scenario? I'm coding the coordinate positioning of a bunch of objects and I want to add this option so I can, for example, do something like positioning a player's menu bar at pixel 10 of the screen, and his inventory icon at (width - 10).
Ignoring projection matrices, the screen is addressed in NDC (normalized device coordinates), which range from -1 to 1.

i.e. a vertex at x=-1 will be on the left hand edge of the screen, and a vertex at x=1 will be on the right hand edge of the screen. 

 

Say the screen is 1280 pixels wide -- that's pixel #0 to pixel #1279.

The left edge of pixel #0 corresponds to an NDC value of -1. The right edge of pixel #1279 corresponds to an NDC value of +1 (this is also the left edge of imaginary pixel #1280).

 

If you want a shape to cover the pixels from #10 to #20, first calculate the size of a pixel. NDC is 2 units across, but our "pixel" coordinates are 1280 units across. Therefore one pixel is 2/1280 NDC units wide.

The left edge is -1, and we want to move the coordinates to a point 10 pixels right of that, and then another 10 pixels right.

p1 = -1 + 2/1280 * 10

p2 = -1 + 2/1280 * 20

 

If you use an ortho matrix, it will just be doing this translation (by -1) and scaling (by 2/1280) for you.
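The same arithmetic as a tiny helper (the name is made up):

// Pixel edge (0..screenWidth) -> NDC x (-1..1).
float pixelEdgeToNdcX(int pixel, int screenWidth)
{
    return -1.0f + 2.0f * pixel / screenWidth;
}

// Quad covering pixels #10..#20 on a 1280-wide screen:
float p1 = pixelEdgeToNdcX(10, 1280);   // -0.984375
float p2 = pixelEdgeToNdcX(20, 1280);   // -0.96875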


On D3D9, the definition of pixel coordinates is stupidly shifted so that the centre of the top-left pixel lines up perfectly with the top-left edge of the screen. i.e. all the pixels are shifted by half a pixel in that direction, so you need:

uv = ndc * 0.5 + 0.5 + 0.5/num_Pixels;

 

GL's only stupidity in this regard is that z also ranges from -1 to +1, instead of from 0 to 1, which has no impact in this situation.

 

OpenGL never had this problem!?  Sigh.  I've been using the wrong API all these years...


If you want a shape to cover the pixels from #10 to #20, first calculate the size of a pixel. NDC is 2 units across, but our "pixel" coordinates are 1280 units across. Therefore one pixel is 2/1280 NDC units wide.

The left edge is -1, and we want to move the coordinates to a point 10 pixels right of that, and then another 10 pixels right.

p1 = -1 + 2/1280 * 10

p2 = -1 + 2/1280 * 20

 

If you use an ortho matrix, it will just be doing this translation (by -1) and scaling (by 2/1280) for you.

 

That's what I did; it seemed way easier than dealing with matrices again (ugh!).

 

I have to recalculate the pixel size every time the screen size changes, but that's minor.

It took me some time and a few tries to get it right, though; I even had some ifs checking which region of the screen a value was in before I noticed I just had to subtract it from 1...

 

What I'd also like to know is how developers deal with images and different screen ratios/sizes.

For example, whether they do this sort of calculation so the image appears exactly at its original resource size (same pixel W * H), or scale it relative to the screen so it occupies, say, the space from -1 to -0.5 (25% of the screen).

 

I think I've seen some games where, when I change the resolution to something unusual, the images appear distorted, so I wasn't sure whether drawing an image at its exact pixel size might cause some trouble later on.

 

Anyway, thank you very much for the replies, with this my UI code is nearly finished!

Edited by Danicco
