This topic is now archived and is closed to further replies.

OpenGL Using OpenGL for 2D game.


Recommended Posts

Hey folks, I'm an SDL user and I also posted this question to the mailing list, but I figured this would be a good place to ask as well.

I wanted to know the best approach for my current game project, which I'm considering using OpenGL for. My game is an RPG that shifts between two modes: an isometric perspective like Diablo, and a nice 3D dungeon-crawl mode (using GL), much like the old Eye of the Beholder games. I'm considering doing both the 2D (isometric) portion and the 3D portion in OpenGL, but I'm wondering if I'll see good speed results this way. Will the 2D part be faster with regular blitting through SDL (a great 2D graphics library, for those who don't know), or with OpenGL?

One of the main reasons I want to use OpenGL for this portion is the simplicity of implementing lighting, scaling, rotation, etc. Also, I want to do neat 3D spell effects, the whole gamut of nice 3D eye candy on a 2D game board. What would be the best method for going about this in OpenGL? Would it be to render the 2D tile map onto an OpenGL surface every frame, as you would in regular 2D? Or am I better off sticking to 2D methods altogether?

I have never attempted something like this before, so excuse my ignorance. Any suggestions, or pros and cons you could give me, would be great, as I am still in the initial design stages. I don't wanna commit to something unless I know it will work. Thanks a lot!

Guest Anonymous Poster
GL for 2D stuff works great, or at least for the stuff I've tried.

What you might want to play with is an ortho projection, with the camera on the z axis (and y as up). This way you can work using just x and y coordinates, like a 2D screen.

Is this clear, or should I try this again?

Armand

Like the AP said, you can render in 2D by changing the current projection matrix to an orthographic projection instead of the perspective projection. Here are two useful functions for switching between 2D and 3D rendering:


// Enables ortho mode for 2D rendering
void Enable2D()
{
    GLint viewPort[4];
    glGetIntegerv(GL_VIEWPORT, viewPort);

    // Save the current projection matrix, then replace it with an
    // orthographic projection that maps one unit to one pixel
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(0, viewPort[2], 0, viewPort[3], -1, 1);

    // Save and reset the modelview matrix as well
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();
}

//////////////////////////////////////////////////////


// Disables ortho mode and returns back to the
// perspective projection mode for 3D rendering
void Disable2D()
{
    // Restore the projection and modelview matrices saved by Enable2D()
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();

    glMatrixMode(GL_MODELVIEW);
    glPopMatrix();
}


So whenever you want to render in 2D, simply call Enable2D(), and when you're finished with your 2D rendering, return to 3D rendering with Disable2D(). Note that in 2D mode you supply 2D coordinates such as glVertex2f(x, y) instead of glVertex3f(x, y, z).

[edited by - chacha on October 9, 2003 12:31:08 AM]

Just a minor comment on this method. If you plan to do 2D in OpenGL, it's true that ortho is the way to go. But if you want to harness some of the additional power of OpenGL for effects which MAY not be limited to a 2D plane (say, a 3D explosion seen from a bird's-eye view), then you may need to tweak the ortho settings.

chacha's code example specifies the ortho z-near and z-far parameters as -1, 1, which is also what happens if you use gluOrtho2D(...). But you may not want to limit your z-range to those values. You can still have depth for the purposes of any special features by increasing the range for z. Don't worry, though, because the ortho projection will maintain the top-down-like view irrespective of the z-range.

Also, you CAN still use glVertex3f if you want, but it's pointless unless you really need the z coordinates. And if you do need it, you'll probably need to change the z-range of the projection anyway.

If you want to use glOrtho(), go ahead, and use glVertex3f() if you want. The z axis is still there and you can move objects along z if you want. The problem is that objects will not be perspective corrected; objects appear the same size regardless of distance on z. glOrtho is the way to go with 2D games because the coordinates map one-to-one with the screen resolution. If you do use glOrtho() with glVertex3f() and plan on moving objects on z, make sure you expand the near and far planes to something like -2000 to 2000 so you can see your objects, or else they will be cut in half or not show up at all.

quote:
Original post by CraZeE
chacha's code example specifies the ortho z-near and z-far parameters as -1, 1, which is also what happens if you use gluOrtho2D(...). But you may not want to limit your z-range to those values. You can still have depth for the purposes of any special features by increasing the range for z. Don't worry, though, because the ortho projection will maintain the top-down-like view irrespective of the z-range.

Also, you CAN still use glVertex3f if you want, but it's pointless unless you really need the z coordinates. And if you do need it, you'll probably need to change the z-range of the projection anyway.

Well said, I forgot to mention the alternatives.

[edited by - chacha on October 10, 2003 3:18:52 AM]

quote:
Original post by Raduprv
Why not make it isometric 3D instead?


I did, actually.

I already built the engine using full 3D polygons, but I decided that tiles are a lot prettier.

Take a look at the tiles I want to use:

http://www.starmars.com/tiles.png

and here is what the engine currently looks like (full 3D):

http://www.starmars.com/worldScreen.png

Granted, with some better models etc., I'm sure the 3D one would look nice and spiffy, but that's not the look I want! I want nice pre-rendered sprites and tiles in this game view.

Thank you all for your suggestions, by the way. I am definitely going to use GL, and glOrtho.

One more question: when rendering the tiles, how should it be done? Should I render them to individual quads and then let OpenGL handle them as polygons, or is there a better, faster method?

Remember, I want to take advantage of lighting, rotation, etc.

Thanks a lot!

EDIT: What's the code for hyperlinks on this board? :D

[edited by - psyjax on October 10, 2003 11:05:47 AM]

You'll be feeding GL your sprites as textured quads, yes. Of course, you still have access to all the other primitives, so you're welcome to use triangles etc.; quads just generally make the most sense. The only project that springs to mind that uses 2D OpenGL extensively is Chromium; maybe you'd like to have a glance at the way the author of Chromium handled GL.

As for links, see the FAQ.


[edited by - jcspray on October 10, 2003 5:59:33 PM]

quote:
Original post by psyjax
quote:
Original post by Raduprv
Why not make it isometric 3D instead?


I did, actually.


Good.
Look at some screenshots here (from my game), just to give you some insight into what you can do with OpenGL ortho mode:

http://www.eternal-lands.com/index.php?main=screenshots.php

Height Map Editor | Eternal Lands | Fast User Directory

Wow, that looks really nice! I agree it looks very good, but it's not exactly what I want for my project. I want a nice iso view, kind of like Civilization III, with 3D effects on top.

Most of you seem to suggest mapping the tiles onto quads, then blitting them to screen. That would be easy for square tiles, but what about isometric raster tiles, like the ones I show in my screenshot above?

This one is giving me a run for my money :D

Thanks to everyone for your comments so far.

I highly suggest you go with real 3D, but isometric, like I do. So map the tiles onto quads.
The reason behind this is that it will be much easier for you to keep the consistency of the world, rather than rendering everything via a different method.
Second, giving the user the ability to rotate the camera can't hurt.

Height Map Editor | Eternal Lands | Fast User Directory

