OpenGL and DirectX??


Recommended Posts

I'd like to know which of the two APIs is better, easier to learn, and better documented, and your personal opinion about them. I know that OpenGL can be used with any language, while DirectX is neither portable like OpenGL nor usable from any language. But OpenGL only covers the 3D graphics, and things like DirectSound are missing. Is it right that you can use DirectSound together with OpenGL? And is it then still as portable as it is without, i.e. without DirectSound? And again: which of the two APIs is better, easier to learn, better documented...?

Guest Anonymous Poster
I started developing games about two years ago using DirectX 6.0, simply because most of the professional software houses use the MS API. That is a matter of fact; it is useless to deny the evidence! It was a nightmare, and I was about to quit game programming.

Afterwards I shifted to OpenGL. In about three months' time I was able to grasp the basics of 3D graphics programming. OK, my previous experience was probably not completely useless, but there is no doubt that the OpenGL architecture is much easier (I do not mean better) than DirectX.

The lack of sound and input support, as well as of a standard 3D file format like the .x file, is definitely a drawback, but you can buy a book such as "OpenGL Game Programming", which integrates DirectSound, DirectInput and the MD2 file format into an OpenGL-based game engine.

I am concerned about one point only: will the graphics card manufacturers still support OpenGL in the future? (What about the GeForce 3 and the Xbox, for example?)

I would appreciate comments from other readers.


I can't make it any clearer that this is my personal opinion, and I am not trying to start a flame war here. So for those of you who want to accuse me of that, know that you would be wrong.

I find DirectX messy, and a lot like Windows code. It seems to be written in sentence structure rather than programming structure, and it doesn't stand out as much as OpenGL. Also, the DirectX code itself looks like an error, so it's hard to find an actual error inside of it (my opinion). I like OpenGL because it is to the point, clean, and usually easy to spot in code. Not to mention, I don't like Microsoft, so using DirectX would be against my moral code and principles. That is my opinion, but I suggest you go take a simple tutorial on both (the basics) and see which one suits you better.

PS: Just because I say it is messy doesn't mean it really is; it is just that when I look at it, it seems messy. Once again, you should check both out.


"I've sparred with creatures from the nine hells themselves... I barely plan on breaking a sweat here, today."~Drizzt Do'Urden

Edited by - Drizzt DoUrden on September 4, 2001 2:57:23 PM

>>I find DirectX messy, and a lot like Windows code<<

I remember years ago, when I started on OpenGL, my first thought was: why can't the Win16/Win32 APIs be designed like this? (I was doing a lot of Windows programming at the time.)

OpenGL is better; it doesn't need all this downloading of SDKs that are several hundred MB.

Just one tiny little header called glext.h needs to be downloaded for each new update.

Good luck M$, hehe.
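
For example, picking up a new entry point on Windows takes only a couple of calls once you have the updated glext.h. This is just a rough, hypothetical sketch of mine (GL_ARB_multitexture's glActiveTextureARB is only an illustrative choice, and a real program would also check the GL_EXTENSIONS string first):

// Hypothetical sketch: loading an extension entry point declared in glext.h (Windows).
#include <windows.h>
#include <GL/gl.h>
#include <GL/glext.h>   // the one tiny header with the extension prototypes

static PFNGLACTIVETEXTUREARBPROC pglActiveTextureARB = NULL;

bool LoadMultitexture()
{
    // The driver hands back NULL if it does not know the function.
    pglActiveTextureARB =
        (PFNGLACTIVETEXTUREARBPROC)wglGetProcAddress("glActiveTextureARB");
    return pglActiveTextureARB != NULL;
}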


The Game Industry
OpenGL/OpenAL/OpenNL

Question: I see OpenNL/HawkNL and OpenAL listed here. But what open library can do input like DirectInput?

Now to my input. I have used both DirectX (D3D and DDraw) and OpenGL (a little), and I personally prefer OpenGL. It is probably because I do not like Microsoft and I personally think OpenGL is a tiny bit easier to use, and also because I could get it to work with Dev-C++, while I could not (after much effort) get DirectX (7 and 8) to work with it.

Now the facts. OpenGL is cross-platform; DirectX is not. Meaning, if you want to write for Linux and Windows, use OGL, but if you are just going for Windows, you can use either OGL or DX.
OpenGL gets the newest features first. Through extensions you can use the newest features of video cards before you can with DX, but your code may not work unless people have that card (correct me if I'm wrong). With DX, pretty much all your code will work.
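
To illustrate that last point, here is a rough, hypothetical sketch of mine (anisotropic filtering is just an example extension): you check the extension string at runtime and keep a fallback path for cards that don't expose it.

// Hypothetical sketch: guard a card-specific extension behind a runtime check.
#include <GL/gl.h>
#include <GL/glext.h>
#include <string.h>

static bool hasAniso = false;

void DetectCaps()
{
    const char* ext = (const char*)glGetString(GL_EXTENSIONS);
    hasAniso = (ext != NULL && strstr(ext, "GL_EXT_texture_filter_anisotropic") != NULL);
}

void SetupTextureFiltering()
{
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    if (hasAniso)
    {
        GLfloat maxAniso = 1.0f;
        glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &maxAniso);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, maxAniso);
    }
    // Without the extension we simply keep the standard trilinear path.
}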

Well, I could go on, but along with these other people I will advise you to try both out, even though we are making it sound like OGL is ten times better. Still, try both (unless you are pressed for time).
Oh, one more thing: OGL can be used with DirectInput, DirectPlay, DirectSound, and the like (only on Windows, of course).
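
For example, here is a rough, hypothetical sketch of mine of the usual DirectInput 8 keyboard setup next to an OpenGL renderer (an untested outline; you also need to link against dinput8.lib and dxguid.lib):

// Hypothetical sketch: DirectInput 8 keyboard polling alongside an OpenGL renderer (Windows only).
#define DIRECTINPUT_VERSION 0x0800
#include <windows.h>
#include <dinput.h>

LPDIRECTINPUT8       g_di       = NULL;
LPDIRECTINPUTDEVICE8 g_keyboard = NULL;

bool InitInput(HINSTANCE hInst, HWND hWnd)
{
    if (FAILED(DirectInput8Create(hInst, DIRECTINPUT_VERSION,
                                  IID_IDirectInput8, (void**)&g_di, NULL)))
        return false;
    if (FAILED(g_di->CreateDevice(GUID_SysKeyboard, &g_keyboard, NULL)))
        return false;
    g_keyboard->SetDataFormat(&c_dfDIKeyboard);
    g_keyboard->SetCooperativeLevel(hWnd, DISCL_FOREGROUND | DISCL_NONEXCLUSIVE);
    g_keyboard->Acquire();
    return true;
}

void PollInput()
{
    BYTE keys[256];
    if (SUCCEEDED(g_keyboard->GetDeviceState(sizeof(keys), keys)))
    {
        if (keys[DIK_ESCAPE] & 0x80)   // high bit set means the key is down
            PostQuitMessage(0);
    }
}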

Hope I helped.

Matthew
WebMaster
www.Matt-Land.com

I've personally found OpenGL to be a much better API to work with. It has its flaws, no input support and such, but a person can use the good aspects of DirectX for those things. Direct3D gave me a headache when coding; I always came up with errors and had to rewrite a lot of code to get it to do what I wanted. I switched to OpenGL a few months ago and have written a lot more code that actually works, in a shorter period of time.

As for Anonymous's concern: as far as I know, every major video card manufacturer still plans on supporting OpenGL. Nvidia has even released an OpenGL SDK specifically for the GeForce 3 cards. And with version 1.3 of OpenGL coming out, the future looks good.

As for the X-Box? I would give that a big thumbs down. Microsoft has pretty much based the whole system around programming with DirectX. At least that is what I've heard, and someone may be able to enlighten me more. Personally, why would I want to buy what is basically another PC modified just to play games, when I have an even better one right in front of me now?

Talk to everyone later.


I couldn't say it better myself.

When I used VB, I used DirectX, simply because it was the first thing I came across and because it has all the features needed for games. Later I saw some OpenGL, but it was really a pain with VB.

When I moved to C++ for more speed and control, I started with DirectX too, and bought a book. Although I had a lot of experience with it from VB (it's almost the same, the code is just a bit bigger now), I still found it hard. Then I found NeHe. I immediately started using OpenGL when I heard impressive games were made with it (Quake III, Half-Life), and I found it's much easier to use than DirectX. (Only the setup code in the first lesson was hard; the rest of the code is really self-explanatory.)
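
(For anyone wondering what that setup code amounts to, here is a stripped-down, hypothetical sketch of mine of the Windows side of it; NeHe's actual first lesson does more, such as fullscreen switching and error checking, so treat this as an outline only.)

// Hypothetical sketch: bare-bones OpenGL context creation on Windows.
#include <windows.h>
#include <GL/gl.h>

HGLRC CreateGLContext(HDC hdc)
{
    PIXELFORMATDESCRIPTOR pfd = { sizeof(PIXELFORMATDESCRIPTOR) };
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 24;

    int format = ChoosePixelFormat(hdc, &pfd);   // ask Windows for a matching pixel format
    SetPixelFormat(hdc, format, &pfd);

    HGLRC rc = wglCreateContext(hdc);            // create and activate the rendering context
    wglMakeCurrent(hdc, rc);
    return rc;
}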

I'm creating an engine with it now, and it's going pretty well. The engine is better than all my VB / DirectX engines and everything I created with C++ / DirectX. I'm going to buy the OpenGL SuperBible (2nd edition), OpenGL Game Programming and a C++ book soon, to become an OpenGL master, because this API is really powerful and has a future (and it's much easier to use than DirectX).

Oh, and for input and sound, you can integrate OpenGL with DirectX easily; the input code is not so hard (I don't know about sound, really).

Good luck with it. I really advise you to use OpenGL.

Edited by - Everquest on September 5, 2001 1:46:39 AM


  • Similar Content

    • By xhcao
      Is a sync needed to read texture content after accessing the texture image in a compute shader?
      My simple code is as below:
      glUseProgram(program.get());
      glBindImageTexture(0, texture[0], 0, GL_FALSE, 3, GL_READ_ONLY, GL_R32UI);
      glBindImageTexture(1, texture[1], 0, GL_FALSE, 4, GL_WRITE_ONLY, GL_R32UI);
      glDispatchCompute(1, 1, 1);
      // Is a sync needed here?
      glUseProgram(0);
      glBindFramebuffer(GL_READ_FRAMEBUFFER, framebuffer);
      glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                     GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, texture[1], 0);
      glReadPixels(0, 0, kWidth, kHeight, GL_RED_INTEGER, GL_UNSIGNED_INT, outputValues);
       
      The compute shader is very simple: it does an imageLoad from texture[0] and an imageStore to texture[1]. Is a sync needed after glDispatchCompute?
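      For reference, a minimal sketch of where an explicit barrier would typically go, assuming one is indeed needed here (image stores are not automatically coherent, and the barrier bit should describe how the data is read afterwards; GL_FRAMEBUFFER_BARRIER_BIT is only an illustrative choice for the glReadPixels-through-framebuffer path):
      glDispatchCompute(1, 1, 1);
      // Make the imageStore writes visible to subsequent framebuffer reads (glReadPixels).
      glMemoryBarrier(GL_FRAMEBUFFER_BARRIER_BIT);
      glUseProgram(0);
      // ... then bind the framebuffer, attach texture[1], and glReadPixels as before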
    • By Jonathan2006
      My question: is it possible to transform multiple angular velocities so that they can be reinserted as one? My research is below:
      // This works
      quat quaternion1 = GEQuaternionFromAngleRadians(angleRadiansVector1);
      quat quaternion2 = GEMultiplyQuaternions(quaternion1, GEQuaternionFromAngleRadians(angleRadiansVector2));
      quat quaternion3 = GEMultiplyQuaternions(quaternion2, GEQuaternionFromAngleRadians(angleRadiansVector3));
      glMultMatrixf(GEMat4FromQuaternion(quaternion3).array);

      // The first two work fine but not the third. Why?
      quat quaternion1 = GEQuaternionFromAngleRadians(angleRadiansVector1);
      vec3 vector1 = GETransformQuaternionAndVector(quaternion1, angularVelocity1);
      quat quaternion2 = GEQuaternionFromAngleRadians(angleRadiansVector2);
      vec3 vector2 = GETransformQuaternionAndVector(quaternion2, angularVelocity2);
      // This doesn't work
      //quat quaternion3 = GEQuaternionFromAngleRadians(angleRadiansVector3);
      //vec3 vector3 = GETransformQuaternionAndVector(quaternion3, angularVelocity3);
      vec3 angleVelocity = GEAddVectors(vector1, vector2);
      // Does not work: vec3 angleVelocity = GEAddVectors(vector1, GEAddVectors(vector2, vector3));
      static vec3 angleRadiansVector;
      vec3 angularAcceleration = GESetVector(0.0, 0.0, 0.0);

      // Sending it through one angular velocity later in my motion engine
      angleVelocity = GEAddVectors(angleVelocity, GEMultiplyVectorAndScalar(angularAcceleration, timeStep));
      angleRadiansVector = GEAddVectors(angleRadiansVector, GEMultiplyVectorAndScalar(angleVelocity, timeStep));
      glMultMatrixf(GEMat4FromEulerAngle(angleRadiansVector).array);

      Also, how do I combine multiple angularAcceleration variables? Is there an easier way to transform the angular values?
    • By dpadam450
      I have the code below in both my vertex and fragment shaders; however, when I request glGetUniformLocation("Lights[0].diffuse") or "Lights[0].attenuation", it returns -1. It will only give me a valid uniform location if I actually use the diffuse/attenuation variables in the VERTEX shader. Because I use position in the vertex shader, that one always returns a valid uniform location. I've read that I can share uniforms across both vertex and fragment shaders, but I'm confused about what this is even compiling to if that is the case.
       
      #define NUM_LIGHTS 2
      struct Light
      {
          vec3 position;
          vec3 diffuse;
          float attenuation;
      };
      uniform Light Lights[NUM_LIGHTS];
       
       
    • By pr033r
      Hello,
      I have a Bachelor's project on the topic "Implement a 3D Boids algorithm in OpenGL". All the OpenGL parts work fine for me, all the rendering etc. But when I started implementing the boids algorithm, it got worse and worse. I read an article (http://natureofcode.com/book/chapter-6-autonomous-agents/) and took inspiration from other code (here: https://github.com/jyanar/Boids/tree/master/src), but it still doesn't work like in the tutorials and videos. For example, the main problem: when I apply Cohesion (one of the three main rules of boids), it produces some kind of "cycling knot". Second, when one flock touches another, the coordinates change in a scary way or the boids respawn at the origin (x: 0, y: 0, z: 0). Just some strange things.
      I followed many tutorials and tried changing everything, but it isn't as smooth and lag-free as in the other videos. I really need your help.
      My code (the Optimalizing branch): https://github.com/pr033r/BachelorProject/tree/Optimalizing
      Exe file (if you want to look) and models folder (for those who will download the sources):
      http://leteckaposta.cz/367190436
      Thanks for any help...

    • By Andrija
      I am currently trying to implement shadow mapping in my project, but although I can render my depth map to the screen and it looks okay, when I sample it with shadowCoords there is no shadow.
      Here is my light-space matrix calculation:
      mat4x4 lightViewMatrix;
      vec3 sun_pos = {SUN_OFFSET * the_sun->direction[0],
                      SUN_OFFSET * the_sun->direction[1],
                      SUN_OFFSET * the_sun->direction[2]};
      mat4x4_look_at(lightViewMatrix, sun_pos, player->pos, up);
      mat4x4_mul(lightSpaceMatrix, lightProjMatrix, lightViewMatrix);
      I will tweak the values for the size and frustum of the shadow map, but for now I just want to draw shadows around the player position.
      the_sun->direction is a normalized vector, so I multiply it by a constant to get the position.
      player->pos is the camera position in world space.
      The light projection matrix is calculated like this:
      mat4x4_ortho(lightProjMatrix, -SHADOW_FAR, SHADOW_FAR, -SHADOW_FAR, SHADOW_FAR, NEAR, SHADOW_FAR);
      Shadow vertex shader:
      uniform mat4 light_space_matrix;

      void main()
      {
          gl_Position = light_space_matrix * transfMatrix * vec4(position, 1.0f);
      }
      Shadow fragment shader:
      out float fragDepth;

      void main()
      {
          fragDepth = gl_FragCoord.z;
      }
      I am using deferred rendering, so I have all my world positions in the g_positions buffer.
      My shadow calculation in the deferred fragment shader:
      float get_shadow_fac(vec4 light_space_pos)
      {
          vec3 shadow_coords = light_space_pos.xyz / light_space_pos.w;
          shadow_coords = shadow_coords * 0.5 + 0.5;
          float closest_depth = texture(shadow_map, shadow_coords.xy).r;
          float current_depth = shadow_coords.z;
          float shadow_fac = 1.0;
          if (closest_depth < current_depth)
              shadow_fac = 0.5;
          return shadow_fac;
      }
      get_shadow_fac(light_space_matrix * vec4(position, 1.0));
      where position is the value I got from sampling the g_position buffer.
      Here is my depth texture (I know it will produce low-quality shadows, but I just want to get it working for now):
      Sorry about the compression; the black smudges are trees ... https://i.stack.imgur.com/T43aK.jpg
      EDIT: Depth texture attachment:
      glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, fbo->width, fbo->height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
      glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, fbo->depthTexture, 0);