# SDL + OpenGL = coordinates confusion


## Recommended Posts

Hi, I have a little problem with coordinates in OpenGL mixed with SDL. Usually when I program in OpenGL I just use some, I don't know, "virtual" coordinates, meaning the point (0,0,0) is in the middle of the screen and I use vectors like [1, 2, 3]. In other words, coordinates in pure OpenGL aren't pixels; they are just relative. But when I was looking at SDL+OpenGL code I once saw a snippet like this:
glBegin( GL_QUADS );
glColor4f( 1.0, 1.0, 1.0, 1.0 );
glVertex3f( 0, 0, 0 );
glVertex3f( SQUARE_WIDTH, 0, 0 );
glVertex3f( SQUARE_WIDTH, SQUARE_HEIGHT, 0 );
glVertex3f( 0, SQUARE_HEIGHT, 0 );
glEnd();

This code comes from LazyFoo's lesson no. 36, where SQUARE_WIDTH == SQUARE_HEIGHT == 20. So this is the typical SDL style of coordinates, i.e. pixel style: point (0,0,0) is in the upper left corner. OK, let's say this is fine with me :) But then there is NeHe's lesson no. 5, for instance, with code like this:
glBegin(GL_TRIANGLES);
glColor3f(1.0f,0.0f,0.0f);
glVertex3f( 0.0f, 1.0f, 0.0f);
glColor3f(0.0f,1.0f,0.0f);
glVertex3f(-1.0f,-1.0f, 1.0f);
glColor3f(0.0f,0.0f,1.0f);
glVertex3f( 1.0f,-1.0f, 1.0f);
//and so on...
glEnd();

There is an SDL conversion of this lesson by Ti Leggett (http://nehe.gamedev.net/data/lessons/linuxsdl/lesson05.tar.gz), and as far as I can see it uses the normal OpenGL style of coordinates instead of the previous SDL style from LazyFoo. I know both methods of representing a vertex position are valid, but how will SDL know which one we are using? For example, when I write glVertex3f(0.0f, 1.0f, 0.0f), how will SDL know that it isn't the pixel (0,1,0) but some OpenGL point? Where and how do I tell SDL which style I chose? And what about mouse events? They are given in pixels, so how do I convert them to OpenGL coordinates? Maybe I should use only the SDL style? I am an SDL newbie and I'm just trying to master this library and connect it with OpenGL, because I think SDL is simply awesome :) Thanks for any answers :) PS: Sorry for my English, mates.

##### Share on other sites
Vertices are transformed into window co-ordinates using the current modelview and projection matrices. Googling the terms GL_MODELVIEW and GL_PROJECTION might give you some clues as to how to set them up correctly.

##### Share on other sites
I don't know if you understood my problem. My question is why, in one SDL program, the calls to glVertex3f use coordinates where point (0,0,0) is in the upper left corner, while another time glVertex3f() has coordinates like a regular OpenGL program, meaning the middle of the screen is (0,0,0). I am confused :/ Maybe you just answered me; I was reading about GL_MODELVIEW, GL_PROJECTION, the pipeline, and other stuff, but I still don't know. I've never seen anything like this before. It was always pure OpenGL coordinates... maybe I'm just crazy.

PS: Oh, I think I get it... it has something to do with glOrtho in LazyFoo. He could use "pixel coordinates" because he sets up glOrtho like this:

glMatrixMode( GL_PROJECTION );
glLoadIdentity();
glOrtho( 0, SCREEN_WIDTH, SCREEN_HEIGHT, 0, -1, 1 );

I think this is it. He just sets the whole screen as the viewing volume, so pixel-ish coordinates were fine (thanks to the orthographic view). This whole problem was only in my head, so yep, I'm crazy :/

##### Share on other sites
SDL features a straightforward 2D graphics API; the coordinates directly translate to pixels on your screen. On a 1280x1024 screen, a pixel at 640x512 is in the middle.

This has no direct bearing on anything OpenGL does. When you tell SDL to set up an OpenGL window, SDL does nothing graphically except create the window.

SO. On to OpenGL.

You feed it coordinates which, as a vector, are multiplied by a matrix that "projects" them onto the screen. The result is a 2D screen coordinate.

Look at this here. Source and executable.

The standard "3D model", and what I use in that little program, is a projection matrix. It makes it look like it goes into the screen, and the input cordinates are the standard, meaningless "units" you're familiar with.

Another matrix you can use instead is an orthographic projection matrix. You load this up as your projection matrix, and there is no getting-smaller-with-distance effect.

With the right settings, when you input regular pixel coordinates, regular pixel coordinates pop back out.

EDIT: Just noticed you figured it out. Nevermind.

##### Share on other sites
(0,0) SHOULD be the upper left corner of the screen when using 2D. If you want to use the OpenGL code you posted above, you first have to change where the local (0,0) is, by using glTranslatef( SCREEN_WIDTH / 2, SCREEN_HEIGHT / 2, 0 );

This will make those new coordinates 0, 0, 0 for the time being, which you can then use to do things like glVertex2f( -1, 1 ); etc.

After drawing the object on the screen (i.e., after glEnd()), you then pop back to the identity matrix ((0, 0, 0) being the top left corner) by calling glLoadIdentity();

But remember, everything here is now deprecated and not good to use. You SHOULD be using shaders, but while learning, this stuff is fine.

Here is a code example:
void draw_square()
{
    glTranslatef( SCREEN_WIDTH / 2, SCREEN_HEIGHT / 2, 0 );
    glColor3f( 1.0, 1.0, 1.0 );
    glBegin( GL_POLYGON );
        glVertex2f( -1.0, -1.0 );
        glVertex2f(  1.0, -1.0 );
        glVertex2f(  1.0,  1.0 );
        glVertex2f( -1.0,  1.0 );
    glEnd();
    glLoadIdentity();
}

(EDIT: For some reason I thought you said 2D, but everything I said still stands for 3D, except, as you mentioned, (0, 0, 0) should be the middle of the screen (for a static camera).)

##### Share on other sites
Quote:
Original post by JohnnyDread: But remember, everything here is now deprecated and not good to use. You SHOULD be using shaders, but when learning, this stuff is fine.

This topic could be closed now because my problem is solved, but one last question: why? Why is it deprecated, and what exactly? The code you posted above? How do shaders deal with coordinates?

##### Share on other sites
Quote:
Original post by doles
Quote:
Original post by JohnnyDread: But remember, everything here is now deprecated and not good to use. You SHOULD be using shaders, but when learning, this stuff is fine.

This topic could be closed now because my problem is solved, but one last question: why? Why is it deprecated, and what exactly? The code you posted above? How do shaders deal with coordinates?

All fixed-function pipeline calls are deprecated. This includes functions such as glVertex3f, glTranslatef, glLoadIdentity, etc. They are deprecated because immediate mode has very poor performance and will slow your application down once you start rendering more vertices. Refer to the OpenGL 3.2 quick reference card for the list of deprecated functions: http://www.khronos.org/files/opengl-quick-reference-card.pdf
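As for how shaders deal with coordinates: in the programmable pipeline you build the matrices yourself on the CPU and do the multiply in a vertex shader. A minimal GLSL 3.30 sketch of that idea; the names `a_position` and `u_mvp` are illustrative, not anything SDL or OpenGL mandates:

```glsl
#version 330 core

// Illustrative names; you choose them when you write the shader.
layout(location = 0) in vec3 a_position;  // vertex position from your buffer
uniform mat4 u_mvp;  // projection * modelview, built on the CPU

void main() {
    // Exactly what the fixed pipeline used to do for you:
    gl_Position = u_mvp * vec4(a_position, 1.0);
}
```

So the coordinate-system question stays the same: whether (0,0) means the screen center or the top-left corner is entirely decided by the matrix you upload in `u_mvp`.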

##### Share on other sites
Quote:
 0, 0 SHOULD be the upper left corner of the screen when using 2D.
I may be misunderstanding you here, but in OpenGL you can set up the 2D coordinate system however you want: +y up or down; origin in the upper left, lower left, center, etc. There's no reason the origin has to be in the upper left.

Or were you referring to SDL's software rendering functions?

##### Share on other sites
I said SHOULD, not IS. It can be whatever you want, but the standard setup in 2D is (0,0) in the top left corner, with +x right and +y down.

##### Share on other sites
Quote:
 I said SHOULD, not IS. It can be whatever you want, but the standard setup in 2D is 0,0 in the top left corner, right is x+ and down is y+.
I agree that it's a common convention (historically, at least), but I guess I don't understand the 'should' part. Why should placing the origin in the top left be preferred over any other configuration? When using a graphics API such as OpenGL or Direct3D, there's no reason (that I can think of at least) to favor that particular configuration. Also, for some types of games, other configurations might make more sense. For example, for a side scroller, it might make sense for y = 0 to represent the 'ground' plane, and for increasing y values to represent increasing height.

So I guess my question is, why should an upper-left origin be favored when using an API such as OpenGL or Direct3D?
