Gluc0se

Questions about OpenGL Screen Space (in pixels)


I am creating a software implementation of OpenGL, and I have two questions I hope to have answered.

After clipping a line in homogeneous space to the [-1, 1] canonical view volume, I transform each vertex with a viewport matrix (0, 0, 500, 500). As a result I get x,y values that range from 0.0 to 500.0. I am rasterizing these into a 500x500 pixel framebuffer, so my accepted pixel indices are [0-499]. The problem is that any vertex I clipped to 1.0 will now map to 500 and be 'outside' the range of my array of pixels.

Question 1: Any advice on how to correct this off-by-one-like issue?

The other problem is that my implementation doesn't line up on a per-pixel basis with OpenGL's. Currently I have two ways of drawing my lines. After doing all the transformations manually and getting the final screen coordinates of each vertex, I do the following.

Mode 1: I call glVertex2f(x, y) and draw lines to the screen. This is set up inside a glOrtho(0, 500, 0, 500) projection, and it draws the lines exactly where normal OpenGL would through its fixed pipeline. This confirms that my math checks out through the pipeline and that I get correct screen space coordinates.

Mode 2: I cast the (screen space) x,y to integers and rasterize the line into a 500x500 pixel framebuffer. I then call glDrawPixels on this framebuffer to display it on the screen.

These two modes don't match up exactly; mode 2 definitely has vertices that map to different locations.

Question 2: How should I handle the rounding that OpenGL does to go from continuous screen space to discrete pixel space?
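In code, the mode 2 path boils down to something like this (a minimal sketch; the names are mine, and the viewport offset is assumed to be 0):

    // Sketch of the mode 2 path described above.
    // An NDC x of exactly 1.0 lands on the continuous coordinate 500.0,
    // which a plain integer cast turns into the out-of-range index 500.
    #include <cstdio>

    const int WIDTH = 500;

    // Viewport transform for one axis: NDC [-1, 1] -> window [0, WIDTH].
    float viewportX(float ndcX) {
        return (WIDTH / 2.0f) * ndcX + WIDTH / 2.0f;
    }

    int main() {
        float xs = viewportX(1.0f); // 500.0, the right viewport edge
        int px = (int)xs;           // 500 -- one past the last valid index, 499
        std::printf("xs = %f -> pixel %d\n", xs, px);
    }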

About getting pixel-perfect accuracy between your software implementation and your hardware: you might get it lined up with your own video card, but I wouldn't be surprised if there is some variance from video card to video card.

Heck, different FPUs give slightly different values for the same math operations, so I know for sure different video cards must differ slightly per pixel.

In that light, you may not need the perfect precision you are looking for.

But depending on how far off your line is from the hardware-rendered line, it could simply be a problem of you mapping to 0-500 where OpenGL is mapping to 0-499?

You might kill two birds with one stone :P

Does anyone know how OpenGL handles the [0.0, 500.0] -> [0, 499] problem? When you request a 500x500 window, I always see the glOrtho and glViewport calls using 0 and 500 as their limits. Is there yet another transformation behind the scenes that maps 500 -> 499?

Consider a smaller viewport instead, and only a single dimension. Let's say a viewport 4 pixels wide.

|--x--|--x--|--x--|--x--|
0 1 2 3 4

| represents pixel borders, - is the continuous axis, and x are pixel centers. This is what you have to work with.

Notice how a coordinate of 1.0 is on the exact edge between the first and second pixels. Drawing a filled primitive from 1 to 3 covers exactly the second and third pixels, but nothing of the first and fourth. So the primitive is exactly 2 pixels wide; 3-1=2. Notice how the viewport spans from 0 (the left edge) to 4 (the right edge), or 0 to width; there are 4 pixels covering the range 0 to 4, and the pixel centers are located at half-integer coordinates (0.5, 1.5, 2.5 and 3.5).

I think your problem is how you think about this. You think in discrete pixel coordinates. In fact, screen space is a continuous axis, with pixels covering parts of that axis. A viewport covering 0 to 4 as above starts at 0 on the left edge and ends at 4 on the right edge. The right edge is the rightmost part of the fourth pixel and, at the same time, the leftmost part of the fifth pixel (outside the scale). So if you think about it in discrete pixels, you need to think in half-open ranges: start at 0 and cover 500 pixels (the last two parameters to glViewport are sizes, not end coordinates), which means start at 0 and end at 499. That is the off-by-one you're looking for.
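In code, that half-open mapping could look something like this (a minimal sketch; the function name is made up, and the clamp assumes you want the exact right edge to land in the last pixel):

    // Map a continuous viewport coordinate to a discrete pixel index.
    // Pixel i covers the half-open span [i, i+1), so floor() picks the
    // containing pixel; the clamp folds the exact right edge (x == width)
    // into the last pixel instead of one past it.
    #include <cmath>

    int pixelIndex(float x, int width) {
        int i = (int)std::floor(x);
        if (i < 0) i = 0;
        if (i > width - 1) i = width - 1;
        return i;
    }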

These rasterization rules are well defined in the specification. There is very little, if any, room for vendor-specific behavior here.

So one of the issues I'm having is that after the clipping and perspective divide, I'm multiplying each vertex by a viewport matrix to determine its screen space position.

For example: Xs = (width/2.0) * Xp + width/2.0 + Xv

where Xs is the screen space position, Xp is the projected X position in the canonical view volume, and Xv is the offset of the viewport (in our case usually 0).

Now all lines that cross the right-hand side of the screen are being clipped to Xp = 1.0. In the end, such a vertex gets a screen location of 500. Where should that map? Should I be clipping it to something smaller than 500 first? Or do these just land off the screen? (In your example, where does a 4 land, pixel-wise?)

I don't have this problem on the left-hand side; a 0 maps to pixel 0.

A coordinate of 500 in a viewport that ends at 500 is on the exact edge. Looking at my simplified drawing, you will see that 500 is the rightmost edge of the rightmost pixel. The rightmost pixel center is at 499.5. So, what pixels should be drawn?

That is a question the rasterization rules define. For lines, for example, the ideal rule is the diamond-exit rule, which (very much simplified for the purpose of explaining the principle) means you draw a pixel if the line exits the area covered by that pixel (or, in the 1D diagram in my drawing, passes the center of the pixel).

So a line that comes from the left and ends at 4, the exact edge between the fourth and fifth pixels, passes through the fourth pixel. The fourth pixel is therefore drawn. Does it pass through the fifth pixel? No, so the fifth pixel is not drawn. This makes sense if you look at the diagram: a line ending at 4 should draw the fourth pixel but not the fifth, which would be outside the viewport. The line ends at the very edge, and so the very last pixel is drawn.

Drawing a point at 4, on the other hand, is another question. Again, since there is no pixel center at 4, you must choose some pixel nearby. Since the point is on the exact border between two neighbouring pixels, choosing the nearest is problematic as well, since both are equally close. You just have to make some assumption; round down, for example. A point at 3.99 draws the fourth pixel. A point at 4.0 rounds down and also draws the fourth pixel. A point at 4.01 is closer to the fifth pixel, and so nothing is drawn (the fifth pixel is outside the viewport, but that doesn't matter, since the point itself, at 4.01, is outside the viewport bound of 4 anyway).
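A minimal sketch of that round-down rule in code (the helper name is made up, and points still need to be clipped against the viewport separately):

    // Map a point's continuous coordinate to the pixel whose center is
    // nearest, rounding DOWN when the point sits exactly on a border.
    // Pixel i covers [i, i+1) and has its center at i + 0.5.
    #include <cmath>

    int pointToPixel(float x) {
        return (int)std::ceil(x) - 1; // 3.99 -> 3, 4.0 -> 3, 4.01 -> 4
    }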

The formula you have for converting from clip space to viewport space is correct. The issue is how to treat the resulting coordinates. A coordinate on the edge really IS on the edge: there are no pixels on the edge, only pixel borders.

Thanks for all the help, Bob. Do you happen to know of any resources that detail OpenGL's line rasterization algorithms? Right now I'm just calculating the two vertex positions and using Bresenham's, but I'm sure OpenGL is doing something a bit different.

The official API specification contains all you need; clicky. Although it may be a bit difficult to follow and understand, it describes not only all the details of how to rasterize lines, but everything else you need to know about OpenGL.

For lines, you can use Bresenham's algorithm, but you then have to determine pixel coordinates for the start and end points as well. Saying the line ends at viewport coordinate 500 is not enough; you need to determine the exact pixel that end point corresponds to. That likely means the 500th pixel (the one with its center at viewport coordinate 499.5), which you then use as the end coordinate. Once you have the specific pixels that correspond to the start and end points, you can connect the two with Bresenham's.
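Putting it together, a minimal sketch (this uses the simple floor-and-clamp endpoint mapping from earlier in the thread rather than the spec's full diamond-exit rule, and all names are made up):

    // Map line endpoints from continuous viewport coordinates to pixel
    // indices, then connect the endpoint pixels with Bresenham's algorithm.
    #include <cmath>
    #include <cstdlib>

    const int WIDTH = 500, HEIGHT = 500;
    unsigned char framebuffer[HEIGHT][WIDTH];

    // Half-open mapping: pixel i covers [i, i+1); the exact right edge
    // (x == size) is folded into the last pixel.
    static int toPixel(float x, int size) {
        int i = (int)std::floor(x);
        if (i < 0) i = 0;
        if (i > size - 1) i = size - 1;
        return i;
    }

    void drawLine(float x0f, float y0f, float x1f, float y1f) {
        int x0 = toPixel(x0f, WIDTH), y0 = toPixel(y0f, HEIGHT);
        int x1 = toPixel(x1f, WIDTH), y1 = toPixel(y1f, HEIGHT);

        // Standard all-octant integer Bresenham between the endpoint pixels.
        int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
        int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
        int err = dx + dy;
        for (;;) {
            framebuffer[y0][x0] = 255; // plot
            if (x0 == x1 && y0 == y1) break;
            int e2 = 2 * err;
            if (e2 >= dy) { err += dy; x0 += sx; }
            if (e2 <= dx) { err += dx; y0 += sy; }
        }
    }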

