OpenGL Solved (without matrices!): Orthographic screen to viewport transformation

Someone please help me before I lose my mind, go insane (or am I there already?), and start just randomly pressing buttons hoping the problem fixes itself. Like that ever works. :p

Just for a little background, I'm writing a UI in OpenGL and I need to convert Windows screen coordinates to viewport coordinates. Yes, it's for the mouse. Traditionally, I would just use the Windows screen coordinates for my coordinate system for simplicity, but with the project I'm working on, it would make my life oh-so-much easier if I could set it up like a Cartesian coordinate system.

So, given the Windows coordinate system:
left = 0
right = n
top = 0
bottom = n

I want to convert any given point to the viewport:
left = -(n/2.0f)
right = n/2.0f
top = n/2.0f
bottom = -(n/2.0f)

Ideally, I'd like to use a matrix. Before someone points me to it, I've already tried the orthographic projection matrix:
{
  { 2/(xmax-xmin),  0,              0,             -((xmax+xmin)/(xmax-xmin)) }
  { 0,              2/(ymax-ymin),  0,             -((ymax+ymin)/(ymax-ymin)) }
  { 0,              0,              2/(zmax-zmin), -((zmax+zmin)/(zmax-zmin)) }
  { 0,              0,              0,              1                         }
}
While I've not worked with them enough to be terribly efficient, I can see just by looking at it that it's going to give me something out of the necessary range (or have I already lost my mind??).

Alternatively, I could hard code some math to figure it out, but being the type of person I am, I'd rather have the correct solution than one that only works for my specific case. Besides, I'd like to reuse this code in the future, so if I could set the viewport, build the matrix from that, and let it handle the transformation for me, that'd be ideal. So, if someone could help me, I'd be eternally grateful!

[edit] I forgot to include the hard coded version that I'm using now, so here it is. Assuming a viewport of:
VPleft: -100
VPright: 100
VPtop: 75
VPbottom: -75
VPwidth = 200
VPheight = 150

and a screen of:
SCleft: 0
SCright: 1024
SCtop: 0
SCbottom: 768

the conversion is:
VPx = VPleft + (x/SCwidth) * VPwidth;
VPy = -(VPbottom + (y/SCheight) * VPheight);

Unfortunately, it only works for that particular viewport... if I flip the Y, I have to manually change the VPy calculation to not negate the value. [/edit]

Thanks all!
vr

[Edited by - virtuallyrandom on June 13, 2007 2:39:52 PM]

----------------------------------------
Problem solved, and the implementation is simpler than I thought it would be, although a bit lengthy on the front end...

Given two rectangles, one for the Screen and one for the Viewport, the formulas below calculate the correct X and Y coordinates for any Viewport.

Given a Screen:
ScreenLeft   = 0
ScreenRight  = 1024
ScreenBottom = 768
ScreenTop    = 0

Given a Viewport:
ViewportLeft   = -100
ViewportRight  = 100
ViewportBottom = -75
ViewportTop    = 75

Make the following calculations:
ScreenInvertX = ScreenRight < ScreenLeft ? -1 : 1
ScreenInvertY = ScreenTop < ScreenBottom ? -1 : 1
ScreenWidth = (ScreenRight - ScreenLeft) * ScreenInvertX
ScreenHeight = (ScreenTop - ScreenBottom) * ScreenInvertY
ScreenMinX = ScreenLeft < ScreenRight ? ScreenLeft : ScreenRight
ScreenMaxX = ScreenLeft < ScreenRight ? ScreenRight : ScreenLeft
ScreenMinY = ScreenTop < ScreenBottom ? ScreenTop : ScreenBottom
ScreenMaxY = ScreenTop < ScreenBottom ? ScreenBottom : ScreenTop

ViewportInvertX = ViewportRight < ViewportLeft ? -1 : 1
ViewportInvertY = ViewportTop < ViewportBottom ? -1 : 1
ViewportWidth = (ViewportRight - ViewportLeft) * ViewportInvertX
ViewportHeight = (ViewportTop - ViewportBottom) * ViewportInvertY
ViewportMinX = ViewportLeft < ViewportRight ? ViewportLeft : ViewportRight
ViewportMaxX = ViewportLeft < ViewportRight ? ViewportRight : ViewportLeft
ViewportMinY = ViewportTop < ViewportBottom ? ViewportTop : ViewportBottom
ViewportMaxY = ViewportTop < ViewportBottom ? ViewportBottom : ViewportTop
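If it helps to see that as code, here's a rough C sketch of the precalculation (the Axis struct and make_axis are just names I made up for this post, nothing standard):

#include <math.h>   /* fminf, fmaxf */

/* One axis of a rectangle, reduced to the derived values above.
   For X pass (Left, Right); for Y pass (Bottom, Top). */
typedef struct {
    float invert;    /* -1 when the axis runs backwards, else 1 */
    float size;      /* always-positive width or height */
    float min, max;  /* numeric low and high edges */
} Axis;

static Axis make_axis(float first, float second)
{
    Axis a;
    a.invert = (second < first) ? -1.0f : 1.0f;
    a.size   = (second - first) * a.invert;
    a.min    = fminf(first, second);
    a.max    = fmaxf(first, second);
    return a;
}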

To convert a Screen coordinate to a Viewport coordinate:
F(ScreenX) = ScreenX * ((ScreenInvertX * ViewportInvertX) / (ScreenWidth / ViewportWidth)) + ViewportLeft
F(ScreenY) = ScreenY * ((ScreenInvertY * ViewportInvertY) / (ScreenHeight / ViewportHeight)) + ViewportTop
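As a C sketch using that hypothetical Axis helper, this direction is a straight transcription of the two formulas:

/* Screen -> viewport, from F(ScreenX)/F(ScreenY) above.
   sx/sy are the screen axes, vx/vy the viewport axes;
   vp_left/vp_top are the raw (possibly reversed) viewport edges. */
static float screen_to_vp_x(float screen_x, Axis sx, Axis vx, float vp_left)
{
    return screen_x * ((sx.invert * vx.invert) / (sx.size / vx.size)) + vp_left;
}

static float screen_to_vp_y(float screen_y, Axis sy, Axis vy, float vp_top)
{
    return screen_y * ((sy.invert * vy.invert) / (sy.size / vy.size)) + vp_top;
}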

And to convert a Viewport coordinate back to a Screen coordinate:
F(ViewportX) = (ViewportX - ViewportLeft) *
((ScreenInvertX * ViewportInvertX) / (ViewportWidth / ScreenWidth)) +
(ScreenMinX * ScreenInvertX * ViewportInvertX)

F(ViewportY) = (ViewportY - ViewportTop) *
((ScreenInvertY * ViewportInvertY) / (ViewportHeight / ScreenHeight)) +
(ScreenMinY * ScreenInvertY * ViewportInvertY)
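And the reverse direction, again just the formulas above written out in C with the same hypothetical helpers:

/* Viewport -> screen, from F(ViewportX)/F(ViewportY) above. */
static float vp_to_screen_x(float vp_x, Axis sx, Axis vx, float vp_left)
{
    return (vp_x - vp_left) * ((sx.invert * vx.invert) / (vx.size / sx.size))
         + sx.min * sx.invert * vx.invert;
}

static float vp_to_screen_y(float vp_y, Axis sy, Axis vy, float vp_top)
{
    return (vp_y - vp_top) * ((sy.invert * vy.invert) / (vy.size / sy.size))
         + sy.min * sy.invert * vy.invert;
}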

Looks like a ton, huh? You can precalculate almost all of it and reduce it to one multiplication and one addition for each X and Y coordinate being converted from screen to viewport coordinates. Converting from viewport back to screen is almost as fast: one addition, one subtraction, and one multiplication.

Here comes the cool part (well, for us geeks, at least). Using the screen and viewport defined above, I calculated:
ScreenInvertX = 1
ScreenInvertY = -1
ScreenWidth = 1024
ScreenHeight = 768
ScreenMinX = 0
ScreenMaxX = 1024
ScreenMinY = 0
ScreenMaxY = 768

ViewportInvertX = 1
ViewportInvertY = 1
ViewportWidth = 200
ViewportHeight = 150
ViewportMinX = -100
ViewportMaxX = 100
ViewportMinY = -75
ViewportMaxY = 75

         Screen    World          Round trip
For x:   -10       -101.953125    -10
         0         -100           0
         256       -50            256
         512       0              512
         768       50             768
         1024      100            1024
         1034      101.953125     1034

For y:   -10       76.953125      -10
         0         75             0
         192       37.5           192
         384       0              384
         576       -37.5          576
         768       -75            768
         778       -76.953125     778

For this viewport and screen, we expect the coordinate to move from left to right and top to bottom. Since it's a Cartesian coordinate system, that means running from -x to +x and from +y to -y, and it apparently does.

So, to confirm that it works for more than just this one case, we need to run a representative sample of the viewports we could create. We just verified a normal Cartesian graph, so I'll skip that and try these. For each test, the input coordinate runs from left to right and top to bottom, so we should see steady movement in the same directions.
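(A loop like this C sketch, built on the hypothetical helpers above, would reproduce the X half of each table; the Y half is the same idea with the Y axes and sample points:)

#include <stdio.h>

static void dump_x(Axis sx, Axis vx, float vp_left)
{
    const float xs[] = { -10, 0, 256, 512, 768, 1024, 1034 };
    for (int i = 0; i < 7; i++) {
        float world = screen_to_vp_x(xs[i], sx, vx, vp_left);
        float back  = vp_to_screen_x(world, sx, vx, vp_left);
        printf("%-10g %-14g %g\n", xs[i], world, back);
    }
}

/* e.g. the first table's X rows:
   dump_x(make_axis(0, 1024), make_axis(-100, 100), -100); */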
----------------------------------------
X-reversed, Y-normal Cartesian
Left: 10
Right: -10
Bottom: -10
Top: 10

         Screen    World          Round trip
For x:   -10       10.1953125     -10
         0         10             0
         256       5              256
         512       0              512
         768       -5             768
         1024      -10            1024
         1034      -10.1953125    1034

For y:   -10       10.26041667    -10
         0         10             0
         192       5              192
         384       0              384
         576       -5             576
         768       -10            768
         778       -10.26041667   778

Expected: 10 to -10, 10 to -10
Got: 10 to -10, 10 to -10

----------------------------------------
X-normal, Y-reversed Cartesian
Left: -10
Right: 10
Bottom: 10
Top: -10

         Screen    World          Round trip
For x:   -10       -10.1953125    -10
         0         -10            0
         256       -5             256
         512       0              512
         768       5              768
         1024      10             1024
         1034      10.1953125     1034

For y:   -10       -10.26041667   -10
         0         -10            0
         192       -5             192
         384       0              384
         576       5              576
         768       10             768
         778       10.26041667    778

Expected: -10 to 10, -10 to 10
Got: -10 to 10, -10 to 10

----------------------------------------
Windows
Left: 0
Right: 800
Bottom: 600
Top: 0

         Screen    World          Round trip
For x:   -10       -7.8125        -10
         0         0              0
         256       200            256
         512       400            512
         768       600            768
         1024      800            1024
         1034      807.8125       1034

For y:   -10       -7.8125        -10
         0         0              0
         192       150            192
         384       300            384
         576       450            576
         768       600            768
         778       607.8125       778

Expected: 0 to 800, 0 to 600
Got: 0 to 800, 0 to 600

----------------------------------------
Subset Cartesian block
Left: 20
Right: 30
Bottom: 40
Top: 50

         Screen    World          Round trip
For x:   -10       19.90234375    -10
         0         20             0
         256       22.5           256
         512       25             512
         768       27.5           768
         1024      30             1024
         1034      30.09765625    1034

For y:   -10       50.13020833    -10
         0         50             0
         192       47.5           192
         384       45             384
         576       42.5           576
         768       40             768
         778       39.86979167    778

Expected: 20 to 30, 50 to 40
Got: 20 to 30, 50 to 40

----------------------------------------
Reversed subset block
Left: 80
Right: 60
Bottom: 120
Top: 110

         Screen    World          Round trip
For x:   -10       80.1953125     -10
         0         80             0
         256       75             256
         512       70             512
         768       65             768
         1024      60             1024
         1034      59.8046875     1034

For y:   -10       109.8697917    -10
         0         110            0
         192       112.5          192
         384       115            384
         576       117.5          576
         768       120            768
         778       120.1302083    778

Expected: 80 to 60, 110 to 120
Got: 80 to 60, 110 to 120

----------------------------------------
And just for fun...
Extreme irregular block
Left: 100000
Right: 100001
Bottom: -1
Top: 2000

         Screen    World          Round trip
For x:   -10       99999.99023    -10
         0         100000         0
         256       100000.25      256
         512       100000.5       512
         768       100000.75      768
         1024      100001         1024
         1034      100001.0098    1034

For y:   -10       2026.054688    -10
         0         2000           0
         192       1499.75        192
         384       999.5          384
         576       499.25         576
         768       -1             768
         778       -27.0546875    778

Expected: 100000 to 100001, 2000 to -1
Got: 100000 to 100001, 2000 to -1

Groovy.

I mentioned earlier how this could be brought down to one multiplication and one addition. The easiest way (if you haven't figured it out yet) is to just precalculate everything when the viewport is assigned and store the results for use later. The functions could resolve down to this (possibly smaller/faster, mathematicians?):
vp_mod_x = (ScreenInvertX * ViewportInvertX) / (ScreenWidth / ViewportWidth);
vp_mod_y = (ScreenInvertY * ViewportInvertY) / (ScreenHeight / ViewportHeight);

sc_mod_x = (ScreenInvertX * ViewportInvertX) / (ViewportWidth / ScreenWidth);
sc_mod_y = (ScreenInvertY * ViewportInvertY) / (ViewportHeight / ScreenHeight);
sc_add_x = ScreenMinX * ScreenInvertX * ViewportInvertX;
sc_add_y = ScreenMinY * ScreenInvertY * ViewportInvertY;

...

// convert a screen x/y coordinate to viewport space
ViewportX = ScreenX * vp_mod_x + ViewportLeft;
ViewportY = ScreenY * vp_mod_y + ViewportTop;

// convert a viewport x/y coordinate back to screen space
ScreenX = (ViewportX - ViewportLeft) * sc_mod_x + sc_add_x;
ScreenY = (ViewportY - ViewportTop) * sc_mod_y + sc_add_y;
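Plugging in the original 1024x768 screen and the -100..100 / -75..75 viewport, the precalculated values should work out like this (a quick sanity check, not part of the derivation):

// vp_mod_x = (1 * 1)  / (1024 / 200.0f)  =  0.1953125
// vp_mod_y = (-1 * 1) / (768 / 150.0f)   = -0.1953125
// sc_mod_x = (1 * 1)  / (200 / 1024.0f)  =  5.12
// sc_mod_y = (-1 * 1) / (150 / 768.0f)   = -5.12
// sc_add_x = 0, sc_add_y = 0

// Round trip: the screen center lands on the viewport origin and back.
float vx = 512.0f *  0.1953125f - 100.0f;   // =   0  (ViewportLeft = -100)
float vy = 384.0f * -0.1953125f +  75.0f;   // =   0  (ViewportTop  =  75)
float sx = (vx + 100.0f) *  5.12f + 0.0f;   // = 512
float sy = (vy -  75.0f) * -5.12f + 0.0f;   // = 384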

So, umm... I guess that's it. If anyone has any problems, suggestions, corrections, etc., please let me know :)

vr
