

This topic is now archived and is closed to further replies.


OpenGL: Using OpenGL for 2D work


Recommended Posts

I'm interested in using OpenGL to work on a 2D game (for the contest, so feel free not to answer to cut the competition). What do I need to do this? From what I can tell, just use glOrtho() to set up a parallel projection, and from there just use the 2D versions of glVertex and the like. Is there any additional setup I would need to do? And also, will I need to specify actual screen locations, or do I get to keep the relative coordinates GL usually works with? Or does that depend entirely on what I pass to glOrtho()? TIA, Jonathan

I think I figured this out, so I'm going to go ahead and post my solution right here for anyone else who might be curious about this. So stop reading now if you don't care.

Alright, you want to use OpenGL for your blitting and stuff? Just to point out the advantages of this: you get to use your video card's 3D acceleration to pull off everything you want to do in 2D. This includes alpha blending and fun stuff like that. This is a good thing.

So here's what you do. To work exactly in pixels, like you would if you were using DirectDraw to do your stuff, you want to set up your projection matrix to reflect that. How do you do that, you ask? It's easy. Say you're working in 640x480. You then set up your projection matrix using a call to glOrtho, like so:
glOrtho(0, 640, 0, 480, -100, 100);

Now we have our projection matrix set up to do a parallel projection, which means no skewing of how things look from perspective and stuff like that. And since we set the matrix to have the same dimensions as the resolution we're in, we can just specify coordinates by the pixel. Whoooohoooo!
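A minimal sketch of the full setup (my framing, not from the original post — note that glOrtho only does what we want if it's applied to the projection matrix, and this assumes a GL context is already current):

```c
/* Sketch: pixel-aligned parallel projection for a 640x480 window.
   Assumes a valid OpenGL context is already current. */
glMatrixMode(GL_PROJECTION);   /* glOrtho must go into the projection matrix */
glLoadIdentity();              /* start from a clean slate */
glOrtho(0.0, 640.0, 0.0, 480.0, -100.0, 100.0);
glMatrixMode(GL_MODELVIEW);    /* switch back to the matrix you draw with */
glLoadIdentity();
```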

Actually, I lied. OpenGL treats the lower-left corner of the window as (0, 0), because it uses normal Cartesian axes. I haven't actually gotten around to testing this yet, but I think this means you'll have to change your y-coordinate a little to get things to display where you want. Specifically, you need to change it to (480 - y). That'll effectively flip the y-axis around so that it's more what us graphics programmers are used to.

Of course, that's entirely up to you. You don't have to do it that way. You can use the bottom-up axis, I don't care.
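If you do want the top-left origin, the flip above is just arithmetic; a tiny sketch (the helper name is mine, not from the post):

```c
/* Convert a top-left-origin y-coordinate to OpenGL's bottom-left
   convention: y' = screen_height - y. Pure arithmetic, no GL calls. */
static int flip_y(int y, int screen_height)
{
    return screen_height - y;
}
```

So in 640x480, a point you think of as sitting at the top of the screen (y = 0) gets passed to glVertex as y = 480.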

Umm... that's about it, I think. I'm sure one could set up one's projection matrix so that you can use relative coordinates instead of absolute ones, possibly allowing for ease of supporting multiple resolutions in a 2D game. Filtering done by the graphics card could then scale your bitmaps for you. Hmm... the possibilities astound the mind.
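One way that might look (my guess at the idea, untested): make the projection run 0..1 on both axes, so the drawing code never mentions the real resolution at all:

```c
/* Sketch: resolution-independent projection; x and y both run 0..1
   no matter what the window size is. Quads drawn in these relative
   coordinates get scaled (and filtered) by the card automatically. */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, 1.0, 0.0, 1.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
```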

Oh, and as for those last two numbers in the glOrtho call: those set up your near and far z-clipping planes, respectively. Since you won't actually be passing z-coordinates to the card (will you?), these probably don't matter. But those numbers work nicely.

That's it for me. Have a good night, and eat your carrots. You know, if you look at a monitor long enough you could use the extra beta-carotene.


P.S. - I figured this out from reading through NeHe's latest font tutorial. It switches to absolute coordinates for text blitting.

This is certainly a rather slick way of doing advanced 2D operations like alpha, hardware-assisted rotation and scaling, etc., assuming that users will have a 3D-accelerated card.

In addition to Jonathan's info, anyone interested might want to check out the OpenGL driver I wrote for Genesis3D. You can find it at
Source is downloadable.

The reason this is relevant is that the Genesis3D driver interface includes functions for doing 2D graphic blits. Since the generic copy-pixel routines in OpenGL perform rather badly with most drivers, 2D blits in the driver are implemented as texture-mapped polygons. Given that the code isn't overly long (it's not embedded deep within spaghetti engine code), the projection matrix setup, texture object binding, and polygon draw code are pretty easy to understand, IMO.
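For the curious, a blit of that sort boils down to something like the sketch below (illustrative names, not the actual Genesis3D driver code; assumes the pixel-aligned projection from earlier in the thread and a texture already uploaded with glTexImage2D):

```c
/* Sketch: a 2D blit as a texture-mapped quad instead of glDrawPixels. */
void blit(GLuint texture, int x, int y, int w, int h)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, texture);
    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2i(x,     y);
        glTexCoord2f(1.0f, 0.0f); glVertex2i(x + w, y);
        glTexCoord2f(1.0f, 1.0f); glVertex2i(x + w, y + h);
        glTexCoord2f(0.0f, 1.0f); glVertex2i(x,     y + h);
    glEnd();
}
```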

Remember that if you're making a 2D program using OpenGL, you won't need a depth buffer. So don't enable GL_DEPTH_TEST, and set the depth-buffer field of the PIXELFORMATDESCRIPTOR to 0.
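For reference, that field is cDepthBits; a sketch of a Win32 pixel format along those lines (double-buffered 32-bit RGBA assumed, the rest is my choice of defaults):

```c
/* Sketch (Win32-specific): a PIXELFORMATDESCRIPTOR for 2D work,
   with the depth buffer disabled as suggested above. */
PIXELFORMATDESCRIPTOR pfd = {
    sizeof(PIXELFORMATDESCRIPTOR),
    1,                                  /* version number */
    PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,
    PFD_TYPE_RGBA,
    32,                                 /* color bits */
    0, 0, 0, 0, 0, 0, 0, 0,             /* per-channel bits/shifts (unused) */
    0, 0, 0, 0, 0,                      /* accumulation buffer (unused) */
    0,                                  /* cDepthBits = 0: no depth buffer */
    0,                                  /* no stencil buffer */
    0,                                  /* no aux buffers */
    PFD_MAIN_PLANE,
    0, 0, 0, 0                          /* reserved / layer masks */
};
```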


To answer one of Jonathan's original questions: there is a 2D version of glVertex, which is glVertex2?, where ? is whatever type you want. It is equivalent to a glVertex3? call with a z component of zero.
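In other words, these two calls place the same vertex:

```c
glVertex2f(10.0f, 20.0f);         /* z defaults to 0 (and w to 1) */
glVertex3f(10.0f, 20.0f, 0.0f);   /* explicit z = 0: identical result */
```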


