This topic is now archived and is closed to further replies.

OpenGL OpenGL32 - what is it, where do I get it, and is it worth it?

Recommended Posts

I realize this question might be slightly newbie-ish for this forum. If I offend anyone, I apologize. I am a college grad who has used SGI's OpenGL on IRIX machines, as well as the wimpy GLUT for beginners on Linux. Of course, I used these for classes, and game development is an entirely different beast. From my brief use of Direct3D, I understand that Direct3D is much lower level, and OpenGL is (sort of) more like a Direct3D retained-mode implementation, except MUCH better and faster, so it can be used in games. What exactly is OpenGL32? Does it have a downloadable SDK? Is it a good idea to make a preliminary game engine in OpenGL32 as opposed to Direct3D? The reason I ask is that I'm not really aware of any games since Quake 3 that run in OpenGL; nowadays everything seems to be Direct3D. Thanks for taking the time to read this, guys.

****************************************
A Linux user... by decision, by destiny, and by doom.

Games aren't made in OpenGL? Anyway: why not use it? It's just as fast (even a little faster than Direct3D on NVIDIA-based cards, since NVIDIA's OpenGL drivers are extremely well written), cross-platform, easy to use, et cetera. You shouldn't need to download anything to use OpenGL; it comes with just about every compiler. "OpenGL32" is simply the name of the library on 32-bit Windows: opengl32.dll, which you link against with opengl32.lib. Microsoft's headers and libraries only expose OpenGL 1.1, but you can access features from 1.3 and later through extensions (they're discussing 2.0 and may simply ignore Microsoft and release their own libraries and DLLs, if you read the notes). NVIDIA has an SDK for it, but it isn't needed. The last 7 rewrites of my engine have used OpenGL, and I haven't found a reason to stop yet.
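To make the extension mechanism concrete, here is a minimal sketch of the first step: checking whether the driver advertises an extension in the space-separated string returned by `glGetString(GL_EXTENSIONS)`. The helper name `has_extension` is my own invention, not part of any SDK; it is plain C so it works with any compiler of the era.

```c
#include <string.h>

/* Check whether `name` appears as a complete token in the space-separated
 * extension string returned by glGetString(GL_EXTENSIONS).
 * A plain strstr() is not enough: a query for "GL_EXT_texture" would also
 * match inside the longer "GL_EXT_texture3D", so we verify token boundaries. */
int has_extension(const char *ext_list, const char *name)
{
    size_t len = strlen(name);
    const char *p = ext_list;

    while ((p = strstr(p, name)) != NULL) {
        int starts_ok = (p == ext_list) || (p[-1] == ' ');
        int ends_ok   = (p[len] == '\0') || (p[len] == ' ');
        if (starts_ok && ends_ok)
            return 1;   /* found as a whole token */
        p += len;       /* substring hit only; keep scanning */
    }
    return 0;
}
```

Once an extension is confirmed, the usual second step on Windows is to fetch its entry points with wglGetProcAddress (glXGetProcAddress on Linux) and call them through function pointers.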

[Resist Windows XP's Invasive Product Activation Technology!]

OpenGL and D3D have about the same level of abstraction from the hardware, though with OpenGL you also have the option of hardware-specific extensions, which let you extract the full power and speed of a card (something that D3D lacks).

Quake 3: Arena; Serious Sam; Baldur's Gate 2; Tribes 2; Alice; Descent 3; Homeworld; Soldier of Fortune

I don't doubt there are lots of games, especially those based on id Software's action engines (e.g. the Quakes, Half-Life, etc.), that use OpenGL. Come to think of it, OpenGL is most popular in action game engines. I wonder what Warcraft III, Independence War 2, Black & White, and other more recent non-action games use... I'd have to go look it up.

I could be totally wrong about this, but I have heard that Direct3D is slightly better at handling lots of polygons and textures (depending on how the video card drivers implement it), so even though it is slightly slower than OpenGL, it is better for detailed games with less emphasis on frame rate. Any ideas about that?

Other than that, I'm pretty much anti-Microsoft. I'd sooner use a product that is compatible with everything (and I actually have a little OpenGL experience to boot). Still, making a game engine is a big time investment, and I don't want to start from scratch any more than necessary!

OpenGL has a nice book and a good website you can learn from.

A god named Jeff Molofee created this site:

And the book is "OpenGL Game Programming".

DirectX has a lot more books that are pretty good, and that's pretty depressing, because I think OpenGL is a lot better (but I also didn't use DX8, I used 7, so things may be different...).

If you really want to learn DirectX, go to and check out their DirectX books; they are the best out there.

Whatever polycounts/textures you can push is all dependent on the card; there's no difference between the APIs in this respect.
The problem with OpenGL is that there used to be shoddy PC drivers for it from a lot of companies, but since the advent of the Quake series (Q2/Q3A) things have improved heaps, because a lot of cards get judged in reviews by how fast they can run Q3A. I expect this is going to continue with the release of Doom 3 and Quake 4 (both OpenGL only).

Personally, if I were you and wanted to make a game, I'd look at using an already existing engine (e.g. the Tribes 2 one) or making a mod for an existing game (e.g. Half-Life or Q3A). These all support OpenGL and D3D (except Q3A); it'll save you a lot of time and let you realise your goal much sooner.

Thanks for those tips. You're definitely right about the lack of OpenGL books. It's sort of unfortunate that OpenGL seems to be viewed just as an educational tool for students of 3D graphics; as far as game development is concerned, no one wants to even suggest that it's possible to create [gasp] GAMES with OpenGL. On the other hand, all those Direct3D books have "games games games!" in their titles, and many of them do an admirable job of showing you how to set up a good game loop as well as throw together Direct3D code.

As a non-developer, I picked up a copy of "The Zen of Direct3D Game Programming" by Peter Walsh. I was not disappointed. Unlike any of those puny college textbooks, or even good OpenGL resources like the OpenGL SuperBible, it is very specific in its goal: to teach you to make games.

So far, in my brief jaunt with OpenGL gaming, I've checked out some lessons at:


Thanks for the idea. I had thought of picking up and modifying a pre-made engine. Unfortunately, that requires making a big money deal with id Software, Epic, or some other gaming behemoth. Also, the kind of game I'm interested in making really is not an action game. Suffice it to say, the best kind of engine would be one capable of powering Black & White, Sacrifice, and a true-3D SimCity at the same time. Even the most versatile form of the Quake 2 or 3 engine would probably not be right for this game.

So basically, I''ll need to go all out for this! Time to make an engine!

If you are looking for an existing engine to mess around with, visit . They are licensing the engine used to power Tribes 2 for $100, with restrictions on how you can use it, but I think this could be very good for pure learning purposes.

Edited by - Pabu on September 25, 2001 3:54:48 AM
