CirdanValen

OpenGL 1 to 1 size precision in ortho


I'm in the process of learning the new way of doing things in OpenGL using shaders and my own matrices. My question is, which matrix do I need to transform, and how, to achieve 1 to 1 coordinates and sizes? For example, since my game is 2D, sprites are generally going to be a specific size such as 32px by 32px. In my program I need to be able to define the quad's size to be 32x32 pixels so the texture doesn't get skewed. Is there a better way to handle this besides tracking the screen resolution through the whole program and working with normalized coordinates? I know this was possible with OpenGL's fixed pipeline, just not sure how to achieve the same thing with the programmable pipeline.

Just create an ortho matrix as your view matrix.

The docs specify exactly how to construct such a matrix: [url="http://www.opengl.org/sdk/docs/man/xhtml/glOrtho.xml"]http://www.opengl.org/sdk/docs/man/xhtml/glOrtho.xml[/url]

Yea, I already have an ortho matrix setup. My problem is that when I create a polygon, the coordinates are normalized. So when I define the polygon, 1.0 is the far right of the screen, 0.0 is the center and -1.0 is the far left. I can translate the view matrix so 0.0 is the top corner, but the scale is still normalized. I want to be able to define the polygon in terms of pixel size.

[quote name='CirdanValen' timestamp='1341504131' post='4956014']
My problem is that when I create a polygon, the coordinates are normalized. So when I define the polygon, 1.0 is the far right of the screen, 0.0 is the center and -1.0 is the far left.
[/quote]
This isn't quite true. When you define a polygon, it's in whatever coordinates you want it to be. All that matters is that when drawing, you transform those coordinates to screen space ([-1,1] on x/y) in your vertex shader. If you're using glOrtho, this is accomplished by calling it with left = 0, right = viewport_width, top = 0, bottom = viewport_height. You can also accomplish this in your vertex shader by passing in 2/viewport_dims as a uniform vector, multiplying your view-space vertices' X and Y coordinates by it, and subtracting 1.0 from the resulting vector's X and Y coordinates.

In short:
1. scale/rotate your sprite quad
2. translate your sprite quad (in pixels)
3. translate your sprite quad by -1.0*Camera_position (also in pixels)
> steps 1 to 3 are traditionally concatenated into your modelView transform

4. scale your sprite quad by 2.0/(screenX,screenY) and subtract (1.0,1.0) to put it into "screen space"
> step 4 is your "Projection" transform, which is what glOrtho produces

The old fixed-function pipeline did this for glVertex calls, more or less; it just got the screen size from the viewport state.

To make life easier, you might want to generate only a single 1.0x1.0 quad and multiply it by (spriteScale.xy*spriteDimensions.xy) in the "scale sprite" step of your vertex shader. This means that each sprite only has to send across two small per-sprite values instead of switching between vertex buffers, and most of your code can still be written in pixel dimensions.

[quote name='CirdanValen' timestamp='1341504131' post='4956014']
Yea, I already have an ortho matrix setup. My problem is that when I create a polygon, the coordinates are normalized. So when I define the polygon, 1.0 is the far right of the screen, 0.0 is the center and -1.0 is the far left. I can translate the view matrix so 0.0 is the top corner, but the scale is still normalized. I want to be able to define the polygon in terms of pixel size.
[/quote]

If you're doing this all in a shader, then your vertex shader should look something like this for an ortho view:

[code]
uniform mat4 OrthoMatrix;

in vec4 i_Vertex; // incoming vertex position

void main(void) {
    gl_Position = OrthoMatrix * i_Vertex; // transform the vertex by the ortho matrix
}
[/code]

glOrtho does not work with shader versions above 1.20 (in GLSL 1.20 and earlier you can still read whatever matrix mode you call glOrtho on inside the shader, via the built-ins gl_ModelViewMatrix and gl_ProjectionMatrix); it only sets fixed-function pipeline state, so you still need to multiply the vertices by the matrix in the shader yourself.


If you feel you are doing all this and are still getting incorrect results, post some code for us to see.

If your ortho matrix matches your resolution, then just make a quad that spans -.5 to .5. Drawing a 32x32 sprite would then call glScalef, making the corners -.5*32 = -16 and .5*32 = 16.

Your sprite is now 32 pixels big.

Thanks for the help guys. The problem was that I wasn't calculating the view matrix correctly. I was double-checking my code against [url=http://en.wikipedia.org/wiki/Orthographic_projection_(geometry)]the formula[/url] and found I had the scaling and the translation calculations switched.
