

# 1 to 1 size precision in ortho

Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

6 replies to this topic

### #1 CirdanValen  Members - Reputation: 148


Posted 05 July 2012 - 12:26 AM

I'm in the process of learning the new way of doing things in OpenGL using shaders and my own matrices. My question is, which matrix do I need to transform, and how, to achieve 1 to 1 coordinates and sizes? For example, since my game is 2D, sprites are generally going to be a specific size such as 32px by 32px. In my program I need to be able to define the quad's size to be 32x32 pixels so the texture doesn't get skewed. Is there a better way to handle this besides tracking the screen resolution through the whole program and working with normalized coordinates? I know this was possible with OpenGL's fixed pipeline, just not sure how to achieve the same thing with the programmable pipeline.

### #2 slicer4ever  Crossbones+ - Reputation: 1425


Posted 05 July 2012 - 12:39 AM

Just create an ortho matrix as your view matrix.

The docs specify exactly how to construct such a matrix: http://www.opengl.org/sdk/docs/man/xhtml/glOrtho.xml
Check out https://www.facebook.com/LiquidGames for some great games made by me on the Playstation Mobile market.

### #3 CirdanValen  Members - Reputation: 148


Posted 05 July 2012 - 10:02 AM

Yea, I already have an ortho matrix set up. My problem is that when I create a polygon, the coordinates are normalized. So when I define the polygon, 1.0 is the far right of the screen, 0.0 is the center and -1.0 is the far left. I can translate the view matrix so 0.0 is the top corner, but the scale is still normalized. I want to be able to define the polygon in terms of pixel size.

### #4 Alukien  Members - Reputation: 123


Posted 05 July 2012 - 11:55 AM

> My problem is that when I create a polygon, the coordinates are normalized. So when I define the polygon, 1.0 is the far right of the screen, 0.0 is the center and -1.0 is the far left.

This isn't quite true. When you define a polygon, it's in whatever coordinates you want it to be. All that matters is that when drawing it, you transform those coordinates to screen space ([-1,1] on x/y) in your vertex shader. If you're using glOrtho, this is accomplished by passing left, right, top, bottom as 0, viewport_width, 0, viewport_height. You can also accomplish this in your vertex shader by passing in 2/viewport_dims as a uniform vector, multiplying your view-space vertices' X and Y coordinates by it, and subtracting 1.0 from the resulting vector's X and Y coordinates.

In short:

> steps 1 to 3 are traditionally concatenated into your modelView transform

4. Scale your sprite quad by 2.0/(screenX,screenY) and subtract (1.0,1.0) to put it into "screen space".

> This is your "Projection" transform, which is what glOrtho builds.

The old fixed-function pipeline did this for glVertex* calls, more or less; it was just able to get the screen size from the viewport state.

To make life easier, you might want to generate only a single 1.0x1.0 quad, and multiply it by (spriteScale.xy*spriteDimensions.xy) in the "scale sprite" step of your vertex shader. This means that each sprite only has to send across two integer values instead of switching around vertex buffers, and most of your code can still be written in pixel dimensions.

Edited by Alukien, 05 July 2012 - 11:56 AM.

### #5 slicer4ever  Crossbones+ - Reputation: 1425


Posted 05 July 2012 - 02:52 PM

> Yea, I already have an ortho matrix setup. My problem is that when I create a polygon, the coordinates are normalized. So when I define the polygon, 1.0 is the far right of the screen, 0.0 is the center and -1.0 is the far left. I can translate the view matrix so 0.0 is the top corner, but the scale is still normalized. I want to be able to define the polygon in terms of pixel size.

```
uniform mat4 OrthoMatrix;

in vec4 i_Vertex; // input vertex

void main(void){
    gl_Position = OrthoMatrix * i_Vertex; // transform the vertex by your ortho matrix
}
```

glOrtho does not work with shader specifications above 1.20 (in GLSL 1.20 and below, whatever matrix mode you call glOrtho in is available in the shader via the gl_ModelViewMatrix or gl_ProjectionMatrix built-ins); it's for the fixed-function pipeline only, so you still need to multiply the vertices by your matrix in the shader.

If you feel you are doing all of this and are still getting incorrect results, post some code for us to see.

### #6 dpadam450  Members - Reputation: 534


Posted 05 July 2012 - 04:57 PM

If your ortho matrix matches your resolution, then just make a quad that spans -0.5 to 0.5. Drawing a 32x32 sprite would then call glScalef, making it -0.5*32 = -16 and 0.5*32 = 16.

Your sprite is now 32 pixels big.

### #7 CirdanValen  Members - Reputation: 148


Posted 06 July 2012 - 11:42 AM

Thanks for the help, guys. The problem was that I wasn't calculating the view matrix correctly. I was double-checking my code against the formula, and I had the scaling and the translation calculations switched.

