Gluc0se

OpenGL Questions about OpenGL Screen Space (in pixels)

Recommended Posts

Gluc0se    146
So I am creating a software implementation of OpenGL, and I have two questions I hope to have answered.

After clipping a line in homogeneous space to the [-1, 1] canonical view volume, I transform each vertex with a viewport matrix (0, 0, 500, 500). As a result I get x, y values that range from 0.0 to 500.0. I am rasterizing these pixels into a 500x500 pixel framebuffer, so my accepted pixel indices are [0, 499]. The problem is that any vertex that was clipped to 1.0 will now map to 500 and land 'outside' the range of my array of pixels.

Question 1: Any advice for how to correct this off-by-one-like issue?

The other problem is that my implementation doesn't line up on a per-pixel basis with OpenGL's. Currently I have two ways of drawing my lines. After doing all the transformations manually and getting the final screen coordinates of each vertex, I do the following.

Mode 1: I call glVertex2f(x, y) and draw lines to the screen. This is set up inside a glOrtho(0, 500, 0, 500) projection space, and it draws the lines exactly where normal OpenGL would through its fixed pipeline. This confirms that my math checks out through the pipeline and that I get correct screen-space coordinates.

Mode 2: I cast the (screen-space) x, y to integers and rasterize the line into a 500x500 pixel framebuffer. I then call glDrawPixels on this framebuffer and display it to the screen.

These two modes don't match up exactly; mode 2 definitely has vertices that map to different locations.

Question 2: How do I properly handle the rounding that OpenGL does when going from screen space to discrete pixel space?
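For concreteness, here is the x part of the viewport transform I'm applying (a minimal sketch; the function name is mine, just for illustration):

```cpp
#include <cassert>

// Viewport transform for x under glViewport(0, 0, 500, 500):
// maps NDC x in [-1, 1] to window coordinates in [0, 500].
double viewportX(double xNdc, double width, double xOffset) {
    return (width / 2.0) * xNdc + width / 2.0 + xOffset;
}
```

An NDC x of exactly 1.0 comes out as 500.0, which is one past my last valid pixel index of 499; that is the off-by-one I'm asking about.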

Atrix256    539
About getting pixel-perfect accuracy with your software implementation vs. your hardware: you might get it lined up with your own video card, but I wouldn't be surprised if there is some variance from video card to video card.

Heck, different FPUs give slightly different values for the same math operations, so I'm sure different video cards must be slightly different per pixel.

In that light, you may not need the perfect precision you are looking for.

But depending on how far off your line is from the hardware-rendered line, it could simply be a problem of you mapping to 0-500 where OpenGL is mapping to 0-499.

You might kill two birds with one stone :P

Gluc0se    146
Does anyone know how OpenGL handles the [0.0, 500.0] -> [0, 499] problem? When requesting a 500x500 window, I always see the glOrtho and glViewport calls using 0 and 500 as their limits. Is there yet another transformation behind the scenes that maps 500 -> 499?

Brother Bob    10344
Consider a smaller viewport instead, and only a single dimension. Let's say a 4 pixel wide viewport.

|--x--|--x--|--x--|--x--|
0     1     2     3     4

| represents pixel borders, - is the continuous axis, and x are pixel centers. This is what you have to work with.

Notice how a coordinate of 1.0 is on the exact edge between the first and second pixels. Drawing a filled primitive from 1 to 3 covers exactly the second and third pixels, but nothing of the first and fourth. So the primitive is exactly 2 pixels wide; 3-1=2. Notice how the viewport spans from 0 (the left edge) to 4 (the right edge), or 0 to width; that there are 4 pixels covering the range 0 to 4; and that the pixel centers are located at half-integer coordinates (0.5, 1.5, 2.5 and 3.5).

I think your problem is how you think about this. You think in discrete pixel coordinates. In fact, screen space is a continuous axis with pixels covering parts of that axis. A viewport covering 0 to 4 as above starts at 0 on the left edge and ends at 4 on the right edge. The right edge is the rightmost part of the fourth pixel, and at the same time the leftmost part of the fifth pixel (outside the scale). So if you think about it in discrete pixels, you need to think in half-open ranges: start at 0 and cover 500 pixels (the last two parameters to glViewport are sizes, not end coordinates) means start at 0 and end at 499, which is the off-by-one you're looking for.
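In code, that half-open convention might look like this (a minimal sketch; the function names are mine, not OpenGL's):

```cpp
#include <cassert>
#include <cmath>

// Half-open pixel coverage: pixel i covers the continuous range [i, i+1),
// so the pixel containing a continuous coordinate x is floor(x). A 4-wide
// viewport holds pixels 0..3 even though the continuous range runs 0 to 4.
int coveringPixel(double x) {
    return static_cast<int>(std::floor(x));
}

// The center of pixel i sits at the half-integer coordinate i + 0.5.
double pixelCenter(int i) {
    return i + 0.5;
}
```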

These rasterization rules are well defined in the specification. There is very little, if any, room for vendor-specific details here.

Gluc0se    146
So one of the issues I'm having is that after the clipping and perspective divide, I'm multiplying each vertex by a viewport matrix to determine its screen-space position. For example:

Xs = (width / 2.0) * Xp + width / 2.0 + Xv

where Xs is the screen-space position, Xp is the projected x position in the canonical view-volume space, and Xv is the x offset of the viewport (in our case usually 0).

Now all lines that cross the right-hand side of the screen are being clipped to Xp = 1.0. In the end, such a vertex gets a screen location of 500. Where should that map? Should I be clipping it to something smaller than 500 first? Or does it just land off the screen? (In your example, where does a 4 land, pixel-wise?)

I don't have this problem on the left-hand side; a 0 maps to pixel 0.

Brother Bob    10344
A coordinate of 500 on a viewport that ends at 500 is on the exact edge. Looking at my simplified drawing, you can see that 500 is the rightmost edge of the rightmost pixel. The rightmost pixel center is at 499.5. So, which pixels should be drawn?

That is a question the rasterization rules define. For lines, for example, the ideal rule is the diamond-exit rule, which (very simplified, for the purpose of explaining the principle) means you draw a pixel if the line exits the area covered by that pixel (or, in the 1D diagram above, passes the center of the pixel).

So a line coming from the left that ends at 4, which is the exact edge between the fourth and fifth pixels, passes through the fourth pixel. The fourth pixel is therefore drawn. Does it pass through the fifth pixel? No, so the fifth pixel is not drawn. This makes sense if you look at the diagram: a line ending at 4 should draw the fourth pixel but not the fifth, which would be outside the viewport. The line ends at the very edge, and so the very last pixel is drawn.

Drawing a point at 4, on the other hand, is another question. Again, since there is no pixel at 4, you must choose some pixel nearby. Since it's on the exact border between two neighbouring pixels, choosing the nearest is problematic as well, since both are equally close. You just have to make some assumption; round down, for example. A point at 3.99 draws the fourth pixel. A point at 4.0 rounds down and also draws the fourth pixel. A point at 4.01 is closer to the fifth pixel, so nothing is drawn (the fifth pixel is outside the viewport, but that doesn't matter, since the point itself, at 4.01, is outside the viewport bound of 4 anyway).
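A minimal sketch of that round-down tie-break (the function name and the x > 0 guard are my own choices, just one consistent convention):

```cpp
#include <cassert>
#include <cmath>

// Pick the pixel for a point at continuous coordinate x: normally the
// pixel whose range [i, i+1) contains x. A point exactly on a border
// (x is an integer) is equidistant from two pixel centers; round down.
int pointPixel(double x) {
    double f = std::floor(x);
    if (x == f && x > 0.0)            // exact border: round down to pixel x-1
        return static_cast<int>(f) - 1;
    return static_cast<int>(f);
}
```

With a 4-wide viewport, a point at 4.01 yields pixel index 4, which the caller would then reject as outside the viewport.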

The formula you have is correct for converting from clip space to viewport space. The issue is with how to treat the coordinates: a coordinate on the edge really IS on the edge. There aren't any pixels on the edge, only pixel borders.

Gluc0se    146
Thanks for all the help, Bob. Do you happen to know of any resources that detail OpenGL's line-rasterization algorithm? Right now I'm just calculating the two vertex positions and using Bresenham's, but I'm sure they're doing something a bit different.

Brother Bob    10344
The official API specification contains all you need; clicky. Although it may be a bit difficult to follow and understand, it describes not only all the details of how to rasterize lines, but everything you need to know about OpenGL.

For lines, you can use Bresenham's algorithm, but you then have to determine pixel coordinates for the start and end points as well. Saying the line ends at viewport coordinate 500 is not enough; you need to determine the exact pixel that end point corresponds to. That likely means the 500th pixel (at viewport coordinate 499.5 in my diagram) is what you have to use as the end coordinate. Once you have the specific pixels corresponding to the start and end points, you can run Bresenham's to connect the two.
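A minimal sketch of snapping an endpoint coordinate to a pixel index before running Bresenham's (the name and the clamping choice are mine):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Convert a continuous viewport coordinate to the pixel index used as a
// Bresenham endpoint. A coordinate exactly on the right edge (x == width)
// is clamped into the last pixel, index width - 1, per the half-open range.
int endpointPixel(double x, int width) {
    int i = static_cast<int>(std::floor(x));
    return std::min(std::max(i, 0), width - 1);
}
```

So for your 500-wide viewport, a clipped endpoint at 500.0 snaps to pixel 499, and Bresenham's then runs between the snapped endpoints.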


