
OpenGL Converting pixel size to a GL size


Mybowlcut    176
Hey. I have a pong ball that is drawn as GL_POINTS, with a call to glPointSize with 15 as the point size. I realised that my game boundary is 4 wide by 2 high, and the ball won't fit since it (the ball) is in pixels. Is there any correlation between pixel size and size in OpenGL? E.g. is there a number that I can use to scale between the two? Cheers.

Brother Bob    10344
The relation between GL units and pixel units is exactly what the modelview matrix, projection matrix, perspective division and viewport transform do. You pass GL units, and you end up with something on the screen in a well-defined way. So to get from GL units to pixels, multiply your coordinate by the modelview and projection matrices, divide by the w-component of the resulting vector, and then do the viewport transform by expanding the normalized device coordinates (range -1 to 1 on all axes) to the viewport (specified by the parameters to glViewport).
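
In code, that chain of steps is what gluProject does for you. A minimal sketch (ballX/ballY are just placeholders for wherever your point is), assuming you query the same matrices and viewport you actually render with:

// Sketch: map a GL-space point to window (pixel) coordinates using GLU.
GLdouble modelview[16], projection[16];
GLint viewport[4];
glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
glGetDoublev(GL_PROJECTION_MATRIX, projection);
glGetIntegerv(GL_VIEWPORT, viewport);

GLdouble ballX = 1.0, ballY = 0.5;    // placeholder GL-space position of the ball
GLdouble winX, winY, winZ;
gluProject(ballX, ballY, 0.0,
           modelview, projection, viewport,
           &winX, &winY, &winZ);       // winX/winY are window (pixel) coordinates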

But perhaps you're looking at and/or treating the problem in the wrong way. Instead of finding the mapping between GL units and screen coordinates, maybe you should set up the environment so that GL units and screen coordinates are the same in the first place, and no conversion is needed at all?

glViewport(0, 0, width, height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, width, 0, height, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

There, any coordinates passed to OpenGL will correspond exactly to window coordinates. That applies when you draw filled geometry like triangles and quads. When you draw points and lines, add a small translation to either matrix; it doesn't matter which one.

glTranslatef(0.5f, 0.5f, 0.0f);
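
With that setup, a 15 pixel point really is 15 pixels on screen. For example (made-up coordinates, assuming a 640x480 window):

glPointSize(15.0f);              // the ball as a 15-pixel point
glBegin(GL_POINTS);
glVertex2f(320.0f, 240.0f);      // centre of the window, given directly in pixels
glEnd();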

Mybowlcut    176
Quote:
Original post by Brother Bob
The relation between GL units and pixel units is exactly what the modelview matrix, projection matrix, perspective division and viewport transform do. You pass GL units, and you end up with something on the screen in a well-defined way. So to get from GL units to pixels, multiply your coordinate by the modelview and projection matrices, divide by the w-component of the resulting vector, and then do the viewport transform by expanding the normalized device coordinates (range -1 to 1 on all axes) to the viewport (specified by the parameters to glViewport).

But perhaps you're looking at and/or treating the problem in the wrong way. Instead of finding the mapping between GL units and screen coordinates, maybe you should set up the environment so that GL units and screen coordinates are the same in the first place, and no conversion is needed at all?

glViewport(0, 0, width, height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, width, 0, height, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

There, any coordinates passed to OpenGL will correspond exactly to window coordinates. That applies when you draw filled geometry like triangles and quads. When you draw points and lines, add a small translation to either matrix; it doesn't matter which one.

glTranslatef(0.5f, 0.5f, 0.0f);
Oops just realised that I posted in the wrong place! Haha.

Your first paragraph went straight over my head (I suck at OpenGL). Your second suggestion sounds good, but I'm not sure whether it's actually a good idea. Does it affect anything in a bad way? Why the small translation?

Cheers.

Brother Bob    10344
I can't say whether it's a good or bad idea for you, because I don't know what you are doing and what you need. But if you want a direct mapping from GL-coordinates to window coordinates, then that code does exactly that.

The small translation is for pixel-perfect rendering. Points and lines follow different rasterization rules from filled geometry regarding which pixels are drawn, so you need to compensate for the slight differences: filled geometry specifies the edges of the interior you want filled, whereas points and lines are specified by their centre points.
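
To make it concrete (just a sketch with made-up numbers): a horizontal line drawn at y = 100 sits exactly on the boundary between two pixel rows and may end up in either one depending on rounding; with the half-pixel translation it passes through pixel centres and rasterizes predictably.

// Without the 0.5 offset this line lies exactly on a pixel-row boundary:
glBegin(GL_LINES);
glVertex2f(0.0f, 100.0f);
glVertex2f(640.0f, 100.0f);
glEnd();
// With glTranslatef(0.5f, 0.5f, 0.0f) applied once to the modelview matrix,
// the same vertices pass through the centres of pixel row 100.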

Since I'm browsing mainly via the active-topic section, I didn't pay much attention to where it was posted. But since you mentioned it, I assume you wanted it in the OpenGL sub forum. Moving there then, since that's where it belongs.

Mybowlcut    176
Well basically I want the pong ball to be as big as the point that represents it so that collision detection will look right and work correctly (which is what I'm supposed to do later on for the workshop).

Brother Bob    10344
Sounds very much like you will benefit from working directly in screen space coordinates then. Is your 4x2 boundary really something that is required, or just some arbitrary size? If not, scale all coordinates and velocities such that they correspond to the screen size instead.
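
For example (a rough sketch; the names are just placeholders for whatever you use), if the field is 4x2 and the window is 640x480:

// Scale logical field coordinates and velocities up to window pixels.
const float fieldWidth  = 4.0f;
const float fieldHeight = 2.0f;
const float scaleX = 640.0f / fieldWidth;    // 160 pixels per logical unit
const float scaleY = 480.0f / fieldHeight;   // 240 pixels per logical unit

float ballX = 2.0f, ballY = 1.0f;            // placeholder logical position
float ballVelX = 0.5f, ballVelY = 0.25f;     // placeholder logical velocity

float ballScreenX = ballX * scaleX;          // 320 pixels
float ballScreenY = ballY * scaleY;          // 240 pixels
float ballScreenVelX = ballVelX * scaleX;    // 80 pixels per unit time
float ballScreenVelY = ballVelY * scaleY;    // 120 pixels per unit time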

Mybowlcut    176
Nah, it was just what fitted on the screen.

This is what the lecturer has supplied:
// Somewhere else, he defined these:
rendererWidth = GLsizei(640);
rendererHeight = GLsizei(480);
fieldOfViewAngle = 45.0f;
nearClippingPlane = 1.0f;
farClippingPlane = 200.0f;

bool RendererOpenGL::bindToWindow(HWND &windowHandle)
{
    // Set up the pixel format and rendering context.
    // NOTE: this method uses 'wgl' commands - the MS Windows operating system binding for OpenGL.
    // It must be rewritten when porting this renderer to another OS.

    // Need to do 5 things before we can use OpenGL:
    // First  - get the device context of the game window (i.e. what the window is being shown on, e.g. the graphics adapter)
    // Second - set that device to some desired pixel format
    // Third  - create a rendering context for OpenGL (something OpenGL draws to and maps to the device)
    // Fourth - make the rendering context 'current'
    // Fifth  - set the size of the OpenGL window

    // First - get the device context of the game window
    hWnd = windowHandle;
    hDC = GetDC(hWnd); // get the device context of the window

    // Second - set the device to some desired pixel format.
    // This is done by filling out a pixel format descriptor structure.

    static PIXELFORMATDESCRIPTOR pfd; // pixel format descriptor

    // The pixel format descriptor has a lot of members (26)!
    // Luckily we don't need most of them and can leave them at zero.
    // We could go through the structure member by member and set them to zero,
    // but a shortcut is to use memset to initialise everything to zero.

    memset(&pfd, 0, sizeof(PIXELFORMATDESCRIPTOR)); // sets all members of pfd to 0

    // now we change only the relevant pfd members
    pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
    pfd.nVersion = 1;
    pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.cColorBits = 16;
    pfd.cDepthBits = 16;

    // based on the descriptor, choose the closest supported pixel format
    int PixelFormat = ChoosePixelFormat(hDC, &pfd);
    if (PixelFormat == 0)
    {
        // error
        MessageBox(NULL, "Could not choose pixel format", "Error", MB_OK);
        return (false);
    }

    // set the display device (device context) to the pixel format
    if (SetPixelFormat(hDC, PixelFormat, &pfd) == 0)
    {
        // error
        MessageBox(NULL, "Could not set pixel format", "Error", MB_OK);
        return (false);
    }

    // Third - create the rendering context
    hRC = wglCreateContext(hDC); // Windows-dependent OpenGL function (wgl)
    if (hRC == NULL)
    {
        MessageBox(NULL, "Could not create GL rendering context", "Error", MB_OK);
        return (false);
    }

    // Fourth - make the rendering context current
    if (!wglMakeCurrent(hDC, hRC))
    {
        MessageBox(NULL, "Could not make rendering context current", "Error", MB_OK);
        return (false);
    }

    // Fifth - set the size of the OpenGL window

    /*
     ***** Note: this step is important; not setting an initial size
     can cause the whole OS to crash (computer is reset)
    */

    RECT rect;                  // structure to store the coordinates of the 4 corners of the window
    GetClientRect(hWnd, &rect); // put the window coordinates in the structure
    ResizeCanvas(long(rect.right - rect.left), long(rect.bottom - rect.top));

    return (true);
}

void RendererOpenGL::ResizeCanvas(long widthRequest, long heightRequest)
{
    rendererWidth = (GLsizei)widthRequest;
    rendererHeight = (GLsizei)heightRequest;
    glViewport(0, 0, rendererWidth, rendererHeight);
    setUpViewingFrustum();
}

void RendererOpenGL::setUpViewingFrustum() // set up the viewing volume
{
    // Select the projection matrix and reset it to identity; subsequent operations
    // (gluPerspective) are then performed on this matrix.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();

    // set up the perspective of the window
    GLdouble aspectRatio = (GLdouble)rendererWidth / (GLdouble)rendererHeight;
    gluPerspective(fieldOfViewAngle, aspectRatio, nearClippingPlane, farClippingPlane);

    // select the model-view matrix (to de-select the projection matrix) and reset it to identity
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}


The only difference between his and yours that I can see is that he calls gluPerspective instead. So, if I change that to glOrtho, scale the sizes of the game objects to pixel co-ordinates and do the small translations for points and lines, it should be all good? A translation for every point/line might get annoying though... being Pong, pretty much all of the game objects are points and lines haha.
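
Something like this is what I'm picturing (just a guess, I haven't tried compiling it):

void RendererOpenGL::setUpViewingFrustum()
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();

    // map GL coordinates directly to window pixels instead of using gluPerspective
    glOrtho(0.0, GLdouble(rendererWidth), 0.0, GLdouble(rendererHeight), -1.0, 1.0);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}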

By the way, how small can I go with the translations?

Cheers.

Brother Bob    10344
Don't translate once for each line and point; translate once. You don't set the projection matrix and modelview matrix from scratch for each point and line you draw, do you? Just stick it after the glLoadIdentity on the modelview matrix in my example.
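
In terms of your lecturer's code, that just means the end of setUpViewingFrustum looks something like this (assuming you switch the projection to glOrtho as you described):

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.5f, 0.5f, 0.0f);   // done once here; applies to every point and line drawn afterwards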

