Archived

This topic is now archived and is closed to further replies.

XzenoX

OpenGL SDL or OpenGL

Recommended Posts

XzenoX    122
We are full-time students (4 of us) planning to build a game; we will be 2 programmers and 2 artists. We want graphics similar to Final Fantasy Tactics (the combat part, where your characters move freely on x, y, z) or to Dragon Warrior 7 (meaning the redone version on PSX, since I haven't seen the original), which uses similar graphics in cities/caves/etc. They both seem to have x, y, z axes, yet the sprites are 2d. How did they do that, and how could we do it? SDL seems a good choice for portability, but how do you manage to do 3d graphics using the SDL library? I'm not starting a debate on which one is best overall, just which one is best in these cases. Tnx

Anesthesia    122
Good questions...

First of all, SDL is neither a 3d nor a 2d API. The whole point of SDL is to provide access to the screen in a portable manner. It lacks 2d rotation, scaling, and similar algorithms; you'll have to figure those out yourself.

Additionally, you can make OpenGL calls that render to SDL bitmaps, meaning SDL works with OpenGL. OpenGL is a graphics API. It doesn't render images to the screen on its own. It needs a helper API with access to the screen so it can do its work. GLUT, GLAUX, WinAPI and SDL are all such helper APIs. OpenGL will calculate the mechanics of rotation, scaling, translation, etc.
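A hedged sketch of that arrangement, using the SDL 1.2-era API this thread is about (the window size and clear color are arbitrary illustration values): SDL opens the window and owns the GL context, and from then on ordinary OpenGL calls do the drawing.

```c
/* Sketch: SDL supplies the window and GL context; OpenGL does the
 * drawing.  Uses the SDL 1.2-era API (SDL_SetVideoMode etc.). */
#include <SDL/SDL.h>
#include <GL/gl.h>

int main(int argc, char *argv[])
{
    if (SDL_Init(SDL_INIT_VIDEO) < 0)
        return 1;

    /* Ask SDL for a double-buffered OpenGL window */
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
    if (SDL_SetVideoMode(640, 480, 0, SDL_OPENGL) == NULL) {
        SDL_Quit();
        return 1;
    }

    /* From here on, plain OpenGL calls work as usual */
    glClearColor(0.2f, 0.2f, 0.2f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    SDL_GL_SwapBuffers();   /* SDL, not OpenGL, swaps the buffers */

    SDL_Delay(2000);        /* keep the window up briefly */
    SDL_Quit();
    return 0;
}
```

Note how the screen-related pieces (window creation, buffer swap, shutdown) all go through SDL, while everything visual in between is pure OpenGL.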

Hmm, I have a vague recollection of Final Fantasy Tactics. As far as I remember it was an isometric game, like Diablo. In other words it is meant to look 3d, but it isn't.

You accomplish this by taking square tiles, rotating them 45 degrees, and also squashing them to look better. The tiles are regular 2d bitmaps, albeit rotated. The characters are 2d images drawn to look like they're 3d; just bitmaps drawn at isometric angles. Look up isometric game development for more information. This is how it is tiled, with rotated tiles fitting into each other like a puzzle:

<><><><><><><><
><><><><><><><>
<><><><><><><><
><><><><><><><>
<><><><><><><><
><><><><><><><>

Every even row is offset by tile_width/2. This representation is crude, because the tiles have space in between them. Normally they should fit each other like a glove. The <> is the basic shape of a tile...

Have fun!

Edited by - Anesthesia on January 3, 2002 5:20:48 PM

Supernova    122
You can use OpenGL with SDL very easily; there's a slight difference in a few calls, like buffer flipping (you call SDL_GL_SwapBuffers() instead of SDL_Flip()). Other than that, you can use OpenGL just as you would without SDL.


quote:
Original post by Anesthesia

Additionally, you can make OpenGL calls that render to SDL bitmaps, meaning SDL works with OpenGL. OpenGL is a graphics API. It doesn't render images to the screen on its own. It needs a helper API with access to the screen so it can do its work. GLUT, GLAUX, WinAPI and SDL are all such helper APIs. OpenGL will calculate the mechanics of rotation, scaling, translation, etc.



That's not exactly true. OpenGL does render on its own; it just needs a window to render to (much like DirectX has to anchor itself to a window), that's all. SDL merely provides a window handle and a rendering context to OpenGL. I may be a bit off because I haven't really used OpenGL directly with Windows much; I just went through SDL for the most part, but I should have the right idea. Basically what I'm trying to say is that OpenGL doesn't have to go through another API to draw stuff. That would defeat the whole purpose of OpenGL, which is meant to speed up drawing by accessing the hardware directly (like DirectX does).



Edited by - Supernova on January 3, 2002 5:27:44 PM

Anesthesia    122
Obviously, you misunderstood my comment. Allow me to clarify. OpenGL DOES render indeed; it even has its own drivers to help accomplish this feat, AND it renders directly to the hardware as well, BUT it does NOT do so on its own. OpenGL lacks OS-specific features in the name of portability. It needs specific information from the OS on how it should operate, and once it has that information it goes with it.

What I was trying to point out is that there are no OpenGL calls to do things like create bitmaps, take care of double-buffering, etc. It's a somewhat complicated issue, but I only intended to say that because of portability reasons, OpenGL does not work on its own.

SDL, BTW, doesn't operate independently either. It is a portability layer and uses APIs from various OSes to get at the hardware.

Here is the fallacy of your logic:
"OpenGL does render on its own, it just needs a window to render to (much like DirectX has to anchor itself to a window), that's all."

To paraphrase:
It renders on its own, but it needs something in order to render. Prime contradiction. Obviously, if it is in need, then it is not independent ;-)

As far as I'm concerned, the act of rendering has the end-result of something visible on screen.

Q: If a tree fell in OpenGL and no one saw it for a lack of SDL, did it really fall?
A: No, because the code didn't compile

So don't needlessly complicate things. OpenGL cannot produce any *visible* results without a helper; SDL, GLAUX, GLUT, etc. are such helpers. I hope that is a more logical statement to you. Code will not even work without a rendering context, which OpenGL itself cannot provide.





Edited by - Anesthesia on January 3, 2002 6:25:59 PM

Supernova    122
Yeah, what I really meant to comment on was "OpenGL will calculate the mechanics of rotation, scaling, translation, etc." because it sounds like you're implying that's the only thing it does. And yeah, I know SDL uses DirectX under Windows. So I guess it was a bit of a misunderstanding.

KingPin    122
quote:

Hmm, I have a vague recollection of Final Fantasy Tactics. As far as I remember it was an isometric game, like Diablo. In other words it is meant to look 3d, but it isn't. [...]



Yeah, isometric engines have to be the second best invention, next to pizza of course!


"1-2GB of virtual memory, that's way more than I'll ever need!" - Bill Gates
"The Adventure: Quite possibly the game of the century!" - The Gaming Community

XzenoX    122
TNX!
That helped a lot. I didn't know of those isometric things; I'll be looking forward to it ^_^, tnx.

I'll read on OpenGL and SDL too, since these 2 seem to work great together.

TheRealMAN11    142
I have one more thing to add here. If you are making a 2d game you could use OpenGL or SDL for the graphics, but if you don't need any rotation or scaling features, then SDL is the better choice. I started a 2d game intended to run on an older system using OpenGL for graphics, and it ran very slowly, so I switched to SDL, which nearly doubled the frame rate. So I would recommend using SDL for graphics (and sound and input, for that matter).


It is foolish for a wise man to be silent, but wise for a fool.

Matthew
WebMaster
www.Matt-Land.com

All your Xbox base are belong to Nintendo.

Anesthesia    122
I suppose you're right, SuperNova, about it looking like I was saying that's all it does. Thank you for helping to clarify that.

TheRealMAN11 has a point. Considering that you want to do an isometric game, you don't absolutely need OpenGL, and it will hamper your performance on machines without 3d acceleration. Some people think it's cool to add 3d effects to their 2d games (like Blizzard with Diablo II and Glide, another now-useless API), and others think it's nicer simply to blit the image onto a polygon, so that all of the rotation, scaling, etc. can be taken care of with OpenGL calls and be 3d accelerated as well.
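The "blit the image onto a polygon" approach amounts to drawing a textured quad. A hedged sketch in immediate-mode OpenGL of that era; `sprite_tex` is a placeholder for a texture assumed to have been uploaded earlier with glTexImage2D():

```c
#include <GL/gl.h>

/* Draw a 2d sprite as a textured quad so that OpenGL handles rotation,
 * scaling and (on capable hardware) acceleration.  Assumes a 2d
 * orthographic projection is already set up, and that sprite_tex is a
 * valid texture object created beforehand. */
void draw_sprite(GLuint sprite_tex, float x, float y, float w, float h)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, sprite_tex);
    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(x,     y);
        glTexCoord2f(1.0f, 0.0f); glVertex2f(x + w, y);
        glTexCoord2f(1.0f, 1.0f); glVertex2f(x + w, y + h);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(x,     y + h);
    glEnd();
    glDisable(GL_TEXTURE_2D);
}
```

With this approach, rotating or scaling a sprite becomes a glRotatef()/glScalef() call before the quad is drawn, rather than a software algorithm, which is exactly the trade-off being discussed here.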

That's why it helps to determine what the target system is, and moreover whether or not you want to require people to use a 3d-accelerated API to play a 2d game.



