Get screen coordinates for different viewport?

2 comments, last by AlgorithmX2 17 years, 1 month ago
I made a program with several viewports, and I need to find the screen coordinates of an object after I have drawn it, so I can test whether it is inside or outside of a selection box. I am not using GL-accelerated transformation (my modelview matrix is constant); instead I use a custom 3x3 matrix class and 3-component floating-point vectors.

What I know:
* The bounding box of the viewport the object was drawn in
* The object's location in camera space
* The field-of-view angle and aspect ratio of the viewport (I can figure these out; it's a square right now, but that might change)
* A strange number I had to calculate, which apparently needs to be multiplied in to imitate ortho when I am really in perspective view. I need it when I convert my ortho coordinates to coordinates on the plane z = -1. The value is 0.8296875f (C++ float).

What I need to know:
* The screen x and y of the object

This is for my Spherical RTS remake in C++, if you want to know. Which reminds me: after I get this part answered, I will need to know, from the same information (minus the specific object's location), how to go from a point on the screen to the point on a sphere of a given radius, centered at a given distance, that would be rendered at that screen location.
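On that last follow-up (going from a screen point back to a point on a sphere), here is a hedged sketch of one common approach, assuming a symmetric perspective view with the camera at the camera-space origin looking down +z and the sphere centered straight ahead; every name in it (fovyRadians, aspect, the viewport rectangle, sphereDist, sphereRadius) is a placeholder for illustration, not something defined in this thread:

#include <cmath>

// Hedged sketch: screen pixel -> nearest point on a sphere that sits a
// distance sphereDist straight ahead of the camera (camera at the origin
// of camera space, looking down +z). All parameter names are placeholders.
bool screenToSphere( float px, float py,                      // pixel coordinates
                     float x1, float y1, float x2, float y2,  // viewport rectangle in pixels
                     float fovyRadians, float aspect,         // perspective parameters
                     float sphereDist, float sphereRadius,    // sphere along +z
                     float& hitX, float& hitY, float& hitZ )
{
    // Pixel -> normalized device coordinates in [-1, 1] (screen y grows downward).
    float ndcX =  2.0f * ( px - x1 ) / ( x2 - x1 ) - 1.0f;
    float ndcY = -( 2.0f * ( py - y1 ) / ( y2 - y1 ) - 1.0f );

    // NDC -> camera-space ray direction (inverse of the perspective scaling).
    float f  = 1.0f / std::tan( fovyRadians * 0.5f );
    float dx = ndcX * aspect / f;
    float dy = ndcY / f;
    float dz = 1.0f;
    float len = std::sqrt( dx * dx + dy * dy + dz * dz );
    dx /= len;  dy /= len;  dz /= len;

    // Ray-sphere intersection: the ray starts at (0,0,0), the center is (0,0,sphereDist).
    float b    = dz * sphereDist;                             // dot( direction, center )
    float c    = sphereDist * sphereDist - sphereRadius * sphereRadius;
    float disc = b * b - c;
    if ( disc < 0.0f )
        return false;                                         // the ray misses the sphere

    float t = b - std::sqrt( disc );                          // nearest intersection distance
    if ( t < 0.0f )
        return false;                                         // intersection is behind the camera

    hitX = dx * t;  hitY = dy * t;  hitZ = dz * t;
    return true;
}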
Signature go here.
If what you say is true, then all you really need to do is transform the point yourself and then map it to the viewport. Just multiply the vertex's center (or perhaps a few more points, to get more accurate selection) by your matrices, then do your perspective divide, and map it to the viewport, and poof, ya should be good to go...


//To Camera Space (Combine is assumed column-major, OpenGL style)
Combine = Projection * Modelview;
Ex = Combine[0] * x + Combine[4] * y + Combine[8]  * z + Combine[12];
Ey = Combine[1] * x + Combine[5] * y + Combine[9]  * z + Combine[13];
Ez = Combine[2] * x + Combine[6] * y + Combine[10] * z + Combine[14];
Ew = Combine[3] * x + Combine[7] * y + Combine[11] * z + Combine[15];

//Perspective Divide
P = Vector( Ex, Ey, Ez ) * ( 1.0 / Ew );

//Map To Viewport Size (P.x and P.y are in [-1, 1] after the divide)
int sx = ( ( P.x + 1.0 ) / 2.0 ) * View_width;
int sy = ( ( P.y + 1.0 ) / 2.0 ) * View_height;

//Map to Window, via offsets
if ( sx < View_width && sx > 0 )
if ( sy < View_height && sy > 0 )
{
    *Screen_Coordinate = Vector( sx + View_Offset.x, sy + View_Offset.y );
    return true; // Ok, the coordinates are correct, and it's in the viewport!
}
// It's not in the viewport!
return false;


[Edited by - AlgorithmX2 on March 8, 2007 8:17:44 PM]

I seek knowledge and to help others whom seek it
Unfortunately, there's a problem: my transformations are not done with a modelview and a projection matrix. Instead, I rotate the object-space point by the object's 3x3 rotation matrix, translate by the difference between the object's location and the camera's location, and rotate by the inverse of the camera's 3x3 matrix. I don't have a 4x4 matrix. The main thing I need to know is: how do I convert from what is already in camera space to screen space?

Here's what I've got:

float x, y, z;
static Mat3 invmat = Mat3();
//copy values of the specified matrix
invmat.Initialize(&map[mapbox->map]->cmat);
//invert the copied values
invmat.Invert();
//transform the position of the object by the inverted camera rotation matrix
invmat.Multiply(unit->x, unit->y, unit->z, &x, &y, &z);
//scale the location
x *= 10.0f;
y *= 10.0f;
z *= 10.0f;
//move it to finish the conversion to camera coordinates
z += map[mapbox->map]->viewdist + 10.0f;
//I know I have to divide x and -y by the distance to the point, but is that the distance in the forward direction or the magnitude of the difference vector between the camera and the object?
//This assumes the forward direction and disregards the x and y components; that might be the problem, but I have tried several ways with no luck
float invmag = 1.0f / z;
//mapbox has x1, y1, x2, and y2 as the screen coordinates of its vertices (meaning the minimum possible would be (0, 0) and the maximum (in my case) is (1024, 768))
float realx = (x * invmag) * (mapbox->x2 - mapbox->x1) + mapbox->x1 * 0.5f + mapbox->x2 * 0.5f;
float realy = (-y * invmag) * (mapbox->y2 - mapbox->y1) + mapbox->y1 * 0.5f + mapbox->y2 * 0.5f;
//find the minimum and maximum ordinates for the place the mouse was pressed and the place it was released
float minx = (mdx < mx ? mdx : mx);
float maxx = (mdx < mx ? mx : mdx);
float miny = (mdy < my ? mdy : my);
float maxy = (mdy < my ? my : mdy);

//Here is where the code that checks whether it's inside the selection box went, before I copied this into my browser
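Regarding the question in the comments about which distance to divide by: for a standard perspective projection it is the forward (camera-space z) distance, not the full magnitude of the camera-to-object vector. Here is a hedged sketch of one common way to map a camera-space point to pixel coordinates inside a viewport rectangle; fovyRadians and aspect are placeholder names, the rectangle arguments mirror mapbox's x1/y1/x2/y2 above, and the camera-looks-down-+z convention is only an assumption chosen to match the code here:

#include <cmath>

// Hedged sketch: camera-space point -> screen coordinates inside a viewport
// rectangle (x1, y1)-(x2, y2). Assumes the camera looks down +z, matching the
// code above where z grows with viewdist; flip signs if your convention differs.
bool cameraToScreen( float x, float y, float z,                  // camera-space point
                     float fovyRadians, float aspect,            // viewport FOV and aspect (assumed)
                     float x1, float y1, float x2, float y2,     // viewport rectangle in pixels
                     float& outX, float& outY )
{
    if ( z <= 0.0f )
        return false;                                            // behind the camera

    float f = 1.0f / std::tan( fovyRadians * 0.5f );             // focal scale

    // Normalized device coordinates in [-1, 1]; divide by the forward distance z.
    float ndcX = ( x * f / aspect ) / z;
    float ndcY = ( y * f ) / z;

    // Map NDC onto the viewport rectangle; screen y usually grows downward,
    // hence the minus sign on ndcY.
    outX = ( ndcX * 0.5f + 0.5f ) * ( x2 - x1 ) + x1;
    outY = ( -ndcY * 0.5f + 0.5f ) * ( y2 - y1 ) + y1;

    return outX >= x1 && outX <= x2 && outY >= y1 && outY <= y2;
}

The resulting pixel coordinates can then go straight into the min/max selection-box test above.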
Signature go here.
Okay, modelview and projection matrix are just names for what they are used for; you're just ignoring the fact that OpenGL has built-in support for them and doing it yourself. I get that.

-What I was saying-
I'm not telling ya to use OpenGL's matrices. I'm saying ya need to get the equivalent matrix of the combination of both, in other words one matrix that goes from start to finish of the complete transformation, and then run the points through it.

That matrix is simply the 4x4 form of each operation you do, all multiplied together.
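As a rough, hedged sketch of what that might look like with the steps described earlier (object 3x3 rotation, a translation by object position minus camera position, then the inverse camera rotation), each step can be lifted into a 4x4 and multiplied together. Mat4, Vec3 and the helper functions below are invented for illustration; they are not part of the Mat3 class used in the code above:

// Hedged sketch: one combined object-to-camera 4x4 built from the steps
// described in this thread. Mat4, Vec3, Mat4::fromMat3(), Mat4::translation()
// and operator* are hypothetical helpers, not the poster's actual classes.
Mat4 buildObjectToCamera( Mat3 objectRot,   // object's 3x3 rotation
                          Vec3 objectPos,   // object position in world space
                          Mat3 cameraRot,   // camera's 3x3 rotation
                          Vec3 cameraPos )  // camera position in world space
{
    cameraRot.Invert();                                           // same idea as invmat.Invert() above

    Mat4 rotate    = Mat4::fromMat3( objectRot );                 // 3x3 embedded in the upper-left of a 4x4
    Mat4 translate = Mat4::translation( objectPos - cameraPos );  // world-space offset
    Mat4 view      = Mat4::fromMat3( cameraRot );                 // inverse camera rotation

    // The rightmost matrix is applied to a point first, matching the order of
    // the steps described above: object rotation, translation, camera rotation.
    return view * translate * rotate;
}

A 4x4 projection matrix could then be multiplied onto the left of this result to get a single Combine matrix like the one in the first snippet.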

-If we do things your way-
So, if we change this to your way, all ya really need to do after transforming to camera space is apply the remaining code that I showed above.

If I remember correctly, Ew is just

float f = ( 1.0f / tan( FocusRad ) ) / 2.0f;
Ew = f / Aspect;
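For reference, here is a hedged sketch of the common gluPerspective-style convention (which is not necessarily the one meant above): the focal scale f is the cotangent of half the vertical field of view, x is additionally divided by the aspect ratio, and the Ew that falls out of the matrix multiply is simply the negated eye-space z. The names fovyRadians, aspect and the eye-space inputs are placeholders:

#include <cmath>

// Hedged sketch of the gluPerspective-style terms; Ez is omitted because it
// also needs the near and far plane distances.
void perspectiveTerms( float fovyRadians, float aspect,
                       float xEye, float yEye, float zEye,
                       float& Ex, float& Ey, float& Ew )
{
    float f = 1.0f / std::tan( fovyRadians * 0.5f );  // cot( fovy / 2 )
    Ex = ( f / aspect ) * xEye;                       // x scaled by f / aspect
    Ey = f * yEye;                                    // y scaled by f
    Ew = -zEye;                                       // w is the negated eye-space z
}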

I added comments to the above code to show what everything does. Hope ya can see how it all goes together now.

I seek knowledge and to help others whom seek it
