

This topic is now archived and is closed to further replies.


OpenGL and SDL. Is there an easy way?


Recommended Posts

A couple of weeks ago I decided to spend my winter break teaching myself 3D/game programming. I started out by looking for a good website that contained all the information I would need, then looked into which 3D, windowing, and input APIs to use. From what I found, there are two options: DirectX (with Direct3D) and SDL (with OpenGL). So I went looking for tutorials on how to get all this working. Being a big fan of cross-platform compatibility, I decided to go with SDL/OpenGL. That is where my problems began.

What I was hoping for was to be able to just sit down and start coding. What I ended up doing instead was spending the next three days trying to figure out how to get a bare-bones program to compile using anything other than MSVC++. Everywhere I go there are tutorials for how to do this and that in MSVC++. Meanwhile, here I am, with my connection to the internet, a copy of CodeWarrior, and a distinct dislike for MSVC. Teaching myself the art of compiling was not what I had in mind for my break.

My question is this (finally): is there an easy way to write and compile SDL/OpenGL applications without all the hassle of beefed-up IDEs and optimizing compilers? What I'm imagining is writing my program in a text editor, telling a command-line compiler where the required headers, libraries, and DLLs can be found, and then compiling and running my program. So far, after going through various tutorials, forums, and websites, all I have found is a collection of tutorials describing how to get SDL/OpenGL working in MSVC++, and a couple of tutorials that tell you to download a package that has everything set up for you rather than learn to do it yourself. The reason I want the basics is that I want to understand what is going on when I compile my programs; I don't want to just "trust" that everything is going as planned and that MS or a guy with a website set everything up correctly.
MORE RANTING: Why can't it be like ANSI C, where you write your program and compile it? No fuss. No worries. If you have libraries you want to use, just let the compiler know where to find them and all is well. Sorry, this turned into quite the rant. I am very frustrated with all the MSVC++ material I'm finding. Even the people who are against using MS and DirectX/Direct3D use MSVC++ to write their programs, which is even more frustrating. -Micah the Frustrated

Guest Anonymous Poster
The best solution to your problem would be to quit using Windows... it sucks (hardcore) and no self-respecting developer will use it in this day and age.

If you /really/ can't switch to Linux, BSD, etc. (and that would only be if you're coding in a public library or school or something), then check out MinGW (http://www.mingw.org)... it's a port of GCC for windoze... then you can compile at the command line like you wanted to. Also, if you really are going to use windoze, there are a couple of editors that are great for programming (ported from Linux): Emacs and Vim (search Google, I don't feel like looking up another URL, hehe).


Yeah, definitely stop using Windows, but that's not the advice he wants, right?
I'm using Windows right now myself, and I'm using SDL/OpenGL.

There is still loads of stuff I need to learn about OpenGL and SDL, but I got the bare bones working pretty quickly.
Try looking under NeHe Productions; he has SDL base code too.
It didn't work right away under Dev-C++ (which uses MinGW), but after some tweaking it did, and it's a lot easier than DirectX for sure.

Otherwise, try looking at the examples on www.libsdl.org; there is a section about OpenGL on the site, with some of the NeHe tuts as well.

You have to do some weird stuff to get it to work in MSVC. As the first poster said: it's easier with GCC, preferably in a Unix shell; but, understandably, not everyone has that option. I'm not going to go so far as to repeat the "no self-respecting developer" part, since many do and will continue to in the future. Anyway, here's what I do:
  • Setup a Win32 Application project.
  • Go to Project->Settings->Link.

    1. Check "Ignore all default libraries".
    2. Paste this into the Library/object modules text box: msvcrt.lib libc.lib libcp.lib kernel32.lib sdlmain.lib sdl.lib opengl32.lib glu32.lib

  • Remember that you must use int main(int argc, char *argv[]) as your entry point (don't leave out the parameters). You may also need to prefix it with extern "C".
    If you have any problems, feel free to ask.

    [Resist Windows XP's Invasive Product Activation Technology!]


    Is that for CodeWarrior?
    With Dev-C++ it will never work if you don't use the standard Windows entry point (WinMain).
    Anyway, the reason I use SDL is to avoid the complications of Windows programming.

    Maybe I'll try getting it all working under Linux. I'm not a big fan of Windows by any extent, and I guess it's time I honed my Linux skills anyway.

    /me dreams of being able to create a text file, write a program, compile it, and run it without having to spend hours beforehand setting it up...


    Search for KDevelop for KDE (very simple to set up and compile).


    Just use the recently posted MSVC++ SDL tutorial on GameDev.net and port it to work with CodeWarrior. All you have to do is point your project to link with the correct libs and then you're good to go. And under Linux it is even easier, IMHO (especially with Debian and apt).

    It is foolish for a wise man to be silent, but wise for a fool.


    All your Xbox base are belong to Nintendo.

    (MichahM -- you may want to skip past my rant below and get to the bottom, where I hope I've helped to answer your real question)

    No self respecting developer uses Windows?

    Are you off your rocker? Please list some game development houses (since this is a game development related forum) that don't use Windows as their primary (usually it's their ONLY) development platform.

    I don't always like Microsoft as a company, because they do some underhanded things, but Visual C++ is by far the best C++ IDE I've used.

    For the record, I actually grew up using/programming for the C64, then Amigas and UNIX systems (owned a SPARC-2 running SunOS 4.x for a long time), I also do a lot of contractual server-based backend programming for Solaris, AIX, FreeBSD and Linux currently, with gcc, emacs, vi, KDevelop, etc, so I'm completely aware of what is available for those systems.

    Using Linux for political purposes (boycotting Microsoft) or for a free, high-quality server-side OS is fine and dandy, but IMO anyone who faults Microsoft for the quality of their OSes or desktop software is still living in 1998. Since Windows 2000 (and now XP) Microsoft makes solid OSes and great development tools, they also provide excellent development reference documentation for free.

    To get back on-topic:

    1) You can use MSVC++ from the command line. Though the IDE hides this from the developer, MSVC++ has a full set of command-line tools that do the actual compiling and linking, just like UNIX-based systems. It also has a full make-like program (nmake) for those who like to do that stuff hands-on. So, you CAN use MSVC++ from the command line... The only reasons not to use it would be price (not wanting to pay for it) or a general dislike of Microsoft... In that case, there are alternate (free) compilers like the GCC ports and Borland's free command-line version of C++ Builder:


    If you need more information on using SDL with these alternate free compilers, check the SDL FAQ:


    2) You mentioned you use CodeWarrior. As far as I know, SDL ships with CodeWarrior project files, but they are in Mac format. I have no idea whether they are compatible with the Windows version of CodeWarrior, or what changes you might need to make them compatible.

    Edited by - gmcbay on December 30, 2001 4:26:26 PM

    Click the last link in my profile signature, and take tutorial number one. Ernest Pazera wrote a good tutorial on setting up SDL with MSVC++ under Windows.

    Next, go here. Sections 2.7 and 2.8, I believe, cover using OpenGL and SDL together.

    Good luck. If you have any SDL problems post in the Cone3D forum.

    Simple DirectMedia Layer:
    Main Site - (www.libsdl.org)
    Cone3D Tutorials- (cone3D.gamedev.net)
    GameDev.net's Tutorials - (Here)

    Edited by - Drizzt DoUrden on December 30, 2001 4:30:49 PM


