OpenGL Converting pixel size to a GL size

Hey. I have a Pong ball that is drawn as GL_POINTS, with a call to glPointSize with 15 as the point size. I realised that my game boundary is 4 units wide by 2 high, and the ball won't fit, since the ball's size is in pixels. Is there any correlation between pixel size and size in OpenGL? E.g. is there a number I can use to scale between the two? Cheers.

The relation between GL units and pixel units is exactly what the modelview matrix, projection matrix, perspective division and viewport transform define. You pass in GL units, and you end up with something on the screen in a well-defined way. So to get from GL units to pixels, multiply your coordinate by the modelview and projection matrices, divide by the w-component of the resulting vector, and then do the viewport transform by expanding the normalized device coordinates (range -1 to 1 on all axes) to the viewport (specified by the parameters to glViewport).
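
If you really do need that mapping in code, gluProject performs this whole chain for you. A minimal sketch (ballX and ballY are just placeholders for wherever you keep the ball's GL-space position):

GLdouble model[16], proj[16];
GLint view[4];
glGetDoublev(GL_MODELVIEW_MATRIX, model);  // current modelview matrix
glGetDoublev(GL_PROJECTION_MATRIX, proj);  // current projection matrix
glGetIntegerv(GL_VIEWPORT, view);          // viewport: x, y, width, height

GLdouble ballX = 0.0, ballY = 0.0;         // wherever your ball actually is
GLdouble winX, winY, winZ;
gluProject(ballX, ballY, 0.0, model, proj, view, &winX, &winY, &winZ);
// winX and winY are now window coordinates in pixels (origin at the lower left)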

But perhaps you're looking at and/or treating the problem the wrong way. Instead of finding the mapping between GL units and screen coordinates, maybe you should set up the environment such that GL units and screen coordinates are the same in the first place, so no conversion is needed at all?

glViewport(0, 0, width, height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, width, 0, height, -1, 1); // left, right, bottom, top, near, far
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

There, any coordinate passed to OpenGL will correspond exactly to a window coordinate. That applies as-is when you draw filled geometry like triangles and quads. When you draw points and lines, also add a small translation to either matrix (it doesn't matter which):

glTranslatef(0.5f, 0.5f, 0.0f);

Quote:
Original post by Brother Bob
[the previous post, quoted in full]
Oops just realised that I posted in the wrong place! Haha.

Your first paragraph went straight over my head (I suck at OpenGL). Your second suggestion sounds good, but I'm not sure whether it has any downsides. Does it affect anything in a bad way? And why the small translation?

Cheers.

I can't say whether it's a good or bad idea for you, because I don't know what you are doing or what you need. But if you want a direct mapping from GL coordinates to window coordinates, then that code does exactly that.

The small translation is for pixel-perfect rendering. Points and lines follow different rasterization rules from filled geometry regarding which pixels are drawn: filled geometry is specified by the edges of the interior you want filled, whereas points and lines are specified by their center points. The half-pixel translation compensates for that difference.
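
To make it concrete: with the ortho setup from my earlier post, the pixel at (x, y) covers the square from (x, y) to (x+1, y+1), so its center sits at (x + 0.5, y + 0.5). Points and lines are rasterized from their centers, hence the offset. A sketch:

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.5f, 0.5f, 0.0f);  // point/line coordinates now land on pixel centers

glBegin(GL_POINTS);
glVertex2i(10, 10);              // lights up exactly the pixel at (10, 10)
glEnd();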

Since I browse mainly via the active-topics section, I didn't pay much attention to where this was posted. But since you mentioned it, I assume you wanted it in the OpenGL sub-forum, so I'm moving it there, since that's where it belongs.

Well, basically I want the Pong ball to be as big as the point that represents it, so that collision detection will look right and work correctly (which is what I'm supposed to do later on for the workshop).

Sounds very much like you would benefit from working directly in screen-space coordinates, then. Is your 4x2 boundary actually a requirement, or just some arbitrary size? If it's arbitrary, scale all coordinates and velocities so that they correspond to the screen size instead.
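
For example (ball here is a hypothetical struct, since I don't know how you store things; width and height are the window size from my earlier snippet):

const float scaleX = width  / 4.0f;     // pixels per old horizontal unit
const float scaleY = height / 2.0f;     // pixels per old vertical unit
ball.x  *= scaleX;  ball.y  *= scaleY;  // position, now in pixels
ball.vx *= scaleX;  ball.vy *= scaleY;  // velocity, now in pixels per update
// glPointSize(15) then really gives a 15x15 pixel ball, so collision tests
// should treat it as a square with a half-size of 7.5 pixels.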

Nah, it was just what fitted on the screen.

This is what the lecturer has supplied:
// Somewhere else, he defined these:
rendererWidth = GLsizei(640);
rendererHeight = GLsizei(480);
fieldOfViewAngle = 45.0f;
nearClippingPlane = 1.0f;
farClippingPlane = 200.0f;

bool RendererOpenGL::bindToWindow(HWND &windowHandle)
{
    // Set up the pixel format and rendering context.
    // NOTE: this method uses 'wgl' commands - the MS Windows binding for OpenGL.
    // It must be overridden when porting this renderer to another OS.

    // We need to do 5 things before we can use OpenGL:
    // First  - get the device context of the game window (i.e. what the window is shown on, e.g. the graphics adapter)
    // Second - set that device to some desired pixel format
    // Third  - create a rendering context for OpenGL (something OpenGL draws to and maps to the device)
    // Fourth - make the rendering context 'current'
    // Fifth  - set the size of the OpenGL window

    // First - get the device context of the game window
    hWnd = windowHandle;
    hDC = GetDC(hWnd); // get the device context of the window

    // Second - set the device to some desired pixel format.
    // This is done by filling out a pixel format descriptor structure.

    static PIXELFORMATDESCRIPTOR pfd; // pixel format descriptor

    // The pixel format descriptor has a lot of members (26)!
    // Luckily we don't need most of them and can set them to zero.
    // We could go through the structure member by member and set each to zero,
    // but a shortcut is to use memset to initialise everything to zero.

    memset(&pfd, 0, sizeof(PIXELFORMATDESCRIPTOR)); // sets all members of pfd to 0

    // Now we change only the relevant pfd members.
    pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
    pfd.nVersion = 1;
    pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.cColorBits = 16;
    pfd.cDepthBits = 16;

    // Based on the descriptor, choose the closest supported pixel format.
    int PixelFormat = ChoosePixelFormat(hDC, &pfd);
    if (PixelFormat == 0)
    {
        MessageBox(NULL, "Could not choose pixel format", "Error", MB_OK);
        return false;
    }

    // Set the display device (device context) to the pixel format.
    if (SetPixelFormat(hDC, PixelFormat, &pfd) == 0)
    {
        MessageBox(NULL, "Could not set pixel format", "Error", MB_OK);
        return false;
    }

    // Third - create the rendering context.
    hRC = wglCreateContext(hDC); // Windows-dependent OpenGL function (wgl)
    if (hRC == NULL)
    {
        MessageBox(NULL, "Could not create GL rendering context", "Error", MB_OK);
        return false;
    }

    // Fourth - make the rendering context current.
    if (!wglMakeCurrent(hDC, hRC))
    {
        MessageBox(NULL, "Could not make rendering context current", "Error", MB_OK);
        return false;
    }

    // Fifth - set the size of the OpenGL window.
    // NOTE: this step is important; not setting an initial size
    // can cause the whole OS to crash (the computer is reset).
    RECT rect;                  // structure to store the corners of the client area
    GetClientRect(hWnd, &rect); // put the window coordinates in the structure
    ResizeCanvas(long(rect.right - rect.left), long(rect.bottom - rect.top));

    return true;
}

void RendererOpenGL::ResizeCanvas(long widthRequest, long heightRequest)
{
    rendererWidth = (GLsizei)widthRequest;
    rendererHeight = (GLsizei)heightRequest;
    glViewport(0, 0, rendererWidth, rendererHeight);
    setUpViewingFrustum();
}

void RendererOpenGL::setUpViewingFrustum() // set up the viewing volume
{
    // Select the projection matrix and reset it to identity;
    // subsequent operations (gluPerspective) are then performed on this matrix.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();

    // Set up the perspective of the window.
    GLdouble aspectRatio = (GLdouble)rendererWidth / (GLdouble)rendererHeight;
    gluPerspective(fieldOfViewAngle, aspectRatio, nearClippingPlane, farClippingPlane);

    // Select the modelview matrix (to deselect the projection matrix) and reset it to identity.
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}


The only difference between his setup and yours that I can see is that he calls gluPerspective instead. So if I change that to glOrtho, scale the sizes of the game objects to pixel coordinates, and do the small translation for points and lines, it should all be good? A translation for every point/line might get annoying though... being Pong, pretty much all of the game objects are points and lines, haha.
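
Something like this, I'm guessing (his setUpViewingFrustum with just the projection swapped; the -1 to 1 depth range is a guess on my part, since the game is 2D):

void RendererOpenGL::setUpViewingFrustum()
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, rendererWidth, 0, rendererHeight, -1, 1); // 1 GL unit == 1 pixel
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}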

By the way, how small can I go with the translations?

Cheers.

Don't translate once for each line and point; translate once. You don't set the projection and modelview matrices from scratch for every point and line you draw, do you? Just put the translation after the glLoadIdentity on the modelview matrix in my example.
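
For example, a typical frame would then look something like this (drawBall and drawPaddles stand in for whatever your drawing code actually is):

glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.5f, 0.5f, 0.0f); // the one translation, once per frame
drawBall();                     // every point/line after this gets the offset
drawPaddles();
SwapBuffers(hDC);               // hDC from your renderer's bindToWindow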

