

Direct3D as opposed to OpenGL


Okay, I don't want to start an argument over which one is better, but I am starting to learn 3D graphics programming and I'm wondering which one I should really go with. Thus, I have a couple of very general questions:

1) What are the main differences?
2) Which one is supported by more graphics cards?
3) Which is easier to use for a simple 3D game? (sort of an oxymoron)
4) Which is easier to use for a more advanced 3D game?
5) Which allows for the most freedom and functionality?
6) If you were to talk to the programmers on all the newest games, which API would they say they are using?

I realize that some (all) of these questions might be hard to answer, but any help would be appreciated. Please don't turn this into a battle between OpenGL and Direct3D, because I'm sure each one is best in its own way. I originally had the title "Direct3D vs. OpenGL", but that would definitely provoke a fight.

Before I begin, I just want to remind everybody that these are my opinions, so no flames or anything, please.

1. There are a lot of differences. Both are great APIs, but ease of use is generally considered to go to OpenGL. Also, {censored}, and OpenGL is cross-platform, while Direct3D only works on Windows.

2. I'm not sure about that, but I think it's about even. All major cards support OpenGL and DirectX, so that's not really an issue.

3 & 4. {censored}

5. I can't really answer this, as I haven't tried the latest Direct3D or anything close...

6. Depends on which game you ask. Unreal Tournament supports them both (I know it supports OGL, and I'm pretty sure about DX), Quake3 and all of id's games support OpenGL, and Ion Storm's games support OpenGL. This may seem like I'm picking favorites, so please add to this list.

So, I suggest you try them both, and then decide which YOU like more--not what somebody tells you that you like.

(To you, Frag_Daddy_: I only censored your response because some of what you said would, if not controlled, start a flame war. - WitchLord)

Edited by - WitchLord on June 22, 2000 7:20:34 AM

Read the older messages in the forum. This question has been asked a million times before and you are just ASKING for a flame war, once again.


Give me one more medicated peaceful moment..
~ (V)^|) |<é!t|-| ~

I will not even bother to answer anything that has been said here or will be said. But I will say this:

Even professional game developers cannot agree on which API is best, so why do you think we will be able to? id Software uses OpenGL, and because of that everyone who uses their engines also uses OpenGL (Ion Storm, for example). Epic MegaGames is mainly focusing on D3D, even though they also support OpenGL to cover those few cards that have better OpenGL support. I believe the LithTech engine, developed by a Monolith offshoot, uses Direct3D. Dynamix uses OpenGL for Tribes 2.

My suggestion to you is to try them both and decide for yourself which one you like best. If you choose to try out Direct3D, make sure you do it with the help of D3DX, as it will give you a completely different experience than going without. You can read my tutorials for some beginner Direct3D with D3DX samples. (Of course, I also have to give a link to NeHe's OpenGL tutorials.)

---

For everyone who wishes to respond to this thread, bear in mind that I'm watching it carefully so we can for once avoid a flame war and try to have an informative discussion about the two APIs. I know that this is probably not going to happen, so if I smell just a whiff of smoke I will immediately close the thread.

I can think of one rule that may be able to keep the flames down: Keep your personal opinions to yourself, and if you see any personal opinions ignore them.

I will remove personal opinions as soon as I see them, starting with Frag_Daddy_'s.

- WitchLord

I'll try to answer your questions in the right order.
I agree with MadKeithV's post.

1. Differences in the design: Direct3D is based on COM, and its interfaces change with each release, requiring you to change a lot of code to use the new version.
OpenGL is a state machine and requires fewer calls to do the same thing as in Direct3D (see the short sketch after these answers).

2. I think every card has drivers for both.

3. I prefer OpenGL, and I think it'll be easier, especially for initialisation. Try both for a simple game; it's the best way to know which one you prefer.
[Of course, for cross-platform development you'll use OpenGL.]

4. I think both are equal.

5. I tend to say OpenGL is more 'free', but Direct3D has more features.
[Note that since OpenGL has been used by professionals for years, it has many features and supports many 'new' features that are in fact quite old in the professional market, like T&L.]
There is nothing you can do with Direct3D 7 that you can't do with OpenGL 1.2, and the inverse is also true.
[Direct3D features that OpenGL does not have can be seen as macros, so you can write them yourself on top of OpenGL's hardware acceleration.]

6. I 'know' they are using Direct3D and/or OpenGL.
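
To illustrate the state-machine point in answer 1, here is a minimal sketch (my own generic example, not tied to any particular engine) of immediate-mode OpenGL: you flip a piece of state on with a plain function call and it stays in effect until you change it, with no COM interfaces to create, query, or release.

// Minimal sketch of OpenGL's state-machine style (GL 1.x immediate mode).
// State set with glEnable/glBindTexture stays in effect until changed;
// there are no COM objects to QueryInterface or Release.
#include <GL/gl.h>

void drawTexturedQuad(GLuint texture)
{
    glEnable(GL_TEXTURE_2D);                 /* flip a state switch once      */
    glBindTexture(GL_TEXTURE_2D, texture);   /* it stays bound until changed  */

    glBegin(GL_QUADS);                       /* then just feed vertices       */
        glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f, 0.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f, 0.0f);
    glEnd();
}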

Hope it helps and nothing will be censored.


So many people died for freedom that I don't like people restraining mine or anyone else's freedom.
BUT I know it's sometimes required for safety.

-* So many things to do, so little time to spend *-

Just a thought: why doesn't someone put this in the FAQ, if it isn't already there? I've seen about a billion of these posts. And about a flame war, who cares? Let the little girls fight; just don't read the shit. I'll start a flame war right now: {censored}

Seriously, we should put something on this topic in the FAQ, if it isn't there (I'm too lazy to look through it right now).

-BacksideSnap-

(Although I know you weren't serious about your comment, I censored it because it would spawn unnecessary comments from others. - WitchLord)

Edited by - WitchLord on June 22, 2000 2:08:08 PM

I have thought about it; I know how to do it, I just don't know how to formulate it. But I will put it in there someday. I have a couple of other things I want to put there as well, but I haven't had the time.

(Oops, I lied earlier; I did answer something posted here.)

- WitchLord

Edited by - WitchLord on June 22, 2000 6:31:51 PM

A comment to add...

Direct3D does support some new functionality that has just recently been introduced on consumer-level cards (GeForce), namely hardware acceleration of vertex blending (as Microsoft calls it) and per-pixel shading. I must admit I haven't used OpenGL in a long time, so could anybody else inform me as to whether these are supported in that API?

quote:
Original post by Shinkage

A comment to add...
Direct3D does support some new functionality that has just recently been introduced on consumer-level cards (GeForce), namely hardware acceleration of vertex blending (as Microsoft calls it) and per-pixel shading. I must admit I haven't used OpenGL in a long time, so could anybody else inform me as to whether these are supported in that API?



Yes, they're supported. They were actually supported even before DX, because of OpenGL's extension mechanism: any hardware manufacturer can expose OpenGL extensions for their cards without approval from any other company. So nVidia exposed these features in their first drivers, while they had to wait for Micro$oft to let them implement them in DX8.
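
As a rough sketch of how the extension mechanism looks from the application side on Windows (the multitexture extension is just a convenient example here, and a GL context is assumed to already be current): check the extension string, then fetch the entry point at run time.

// Rough sketch: query an OpenGL extension and load its entry point (Windows, GL 1.x).
// Assumes an OpenGL rendering context is already current.
#include <windows.h>
#include <GL/gl.h>
#include <string.h>

typedef void (APIENTRY *PFNGLACTIVETEXTUREARBPROC)(GLenum texture);
PFNGLACTIVETEXTUREARBPROC glActiveTextureARB = NULL;

int initMultitexture(void)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    if (ext == NULL || strstr(ext, "GL_ARB_multitexture") == NULL)
        return 0;   /* driver does not expose the extension */

    glActiveTextureARB =
        (PFNGLACTIVETEXTUREARBPROC)wglGetProcAddress("glActiveTextureARB");
    return glActiveTextureARB != NULL;
}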

Nicodemus.

----
"When everything goes well, something will go wrong." - Murphy

I feel that one important thing is usually left out in discussions such as this: source code dealing with advanced 3D techniques, like that accompanying academic papers, will most likely be in OpenGL, often favouring GLUT. Whilst this is not a reason to use OpenGL in your own projects, it is a reason why it is advantageous to know how to use it.



-- Kazan - Fire Mountain Games --

If you have some really complex and new 3D techniques, you can't use them with OpenGL or D3D; you should write your own software engine. Then, if you want to give it hardware acceleration as an option, choose one or the other. However, to do input and sound you will need to learn some DirectX, and D3D blends in with those components, whereas OGL doesn't. D3DX is quite a good utility library and makes D3D much easier than it used to be.

I'll agree with the last part of that last post. D3D used to be HELL to use. Actually, Brian Hook titled his D3D tutorial "The Hell of Direct 3D". That was DX 3.0 though, and things have changed quite a bit. Still, upon hearing anything that remotely sounds like "execute buffer", I run and hide.

-BacksideSnap-

WitchLord,
Here's my two cents about questions like this one.
Since you are the moderator, the voice of reason, and have your own web site with ever-informative articles, I say merge them.

What I mean is this: whenever questions like this one (the kind that gets asked about once a week) start, do the following.

1) Create a few ultra-comprehensive articles about the subject.
This is a one-time effort. Others might offer suggestions/changes to the documents via this forum.

2) Post a link to the document within a response.
Others can also post a link to the document.

3) Close the post.

After the document is created (in this case OpenGL vs. D3D), the process will be very simple.
From then on, you won't have to spend much time at all on these continual postings.

Just my $.02

Very good suggestion; it was actually for this very reason that I started writing my tutorials. However, I don't feel I'm experienced enough with OpenGL to do a fair comparison between OGL and D3D, so I hesitate to write such an article.

- WitchLord

Hey WitchLord, don't be too hard on these people. Why do you censor their posts? I personally don't want to take part again in such a fu(king flame war, but I think you have two options:
1. Close the thread!
2. Let it go how it goes, without censorship!

I personally am for 2, 'cause it's always funny to read these posts, and nobody is harmed by it if he doesn't participate in the thread! AND sometimes different opinions HAVE to be discussed (although they have been discussed hundreds of thousands of times before on this topic...)

Greets, XBTC!

1) What are the main differences?
OpenGL is rather crap in software mode, while DirectX has very good software emulation. OpenGL can produce nicer results in 3D, but DirectX has better 2D support (DirectDraw).

2) Which one is supported by more graphics cards?
DirectX. Only the newer graphics cards like the GeForce 1 and 2 have full OpenGL support.

3) Which is easier to use for a simple 3D game? (sort of an oxymoron)
2D: DirectX (DirectDraw)
3D: OpenGL (less code required)

4) Which is easier to use for a more advanced 3D game?
2D: DirectDraw
3D: Either - DX, and D3D in particular, has caught up with some of OGL's advanced features, such as stenciling, but lots of code is required. With OpenGL it's just straightforward function calls.

5) Which allows for the most freedom and functionality?
I have to say OpenGL on this one, as it's so customisable and easy to get to grips with. Also, GLUT (the GL Utility Toolkit) allows you to create OpenGL apps without worrying about loads of Windows code (WndProc, WinMain, etc.) - it does all that bit for you (see the small GLUT sketch after these answers).

6) If you were to talk to the programmers on all the newest games, which API would they say they are using?
I would say OpenGL, mainly because I only take notice of what goes on in the OGL world. There are probably hundreds of developers using DX, but that doesn't bother me.
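
As a small illustration of the GLUT point in answer 5 (a generic sketch, not from any particular tutorial), this is roughly the whole program needed to get a window with a triangle on screen; GLUT owns the window creation and the message loop:

// Minimal GLUT sketch: GLUT handles the window and the message loop,
// so there is no WinMain/WndProc boilerplate.
#include <GL/glut.h>

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_TRIANGLES);               /* draw one triangle */
        glVertex2f(-0.5f, -0.5f);
        glVertex2f( 0.5f, -0.5f);
        glVertex2f( 0.0f,  0.5f);
    glEnd();
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutInitWindowSize(640, 480);
    glutCreateWindow("GLUT sketch");
    glutDisplayFunc(display);
    glutMainLoop();                      /* never returns */
    return 0;
}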

If you're a beginner, then do a little experimenting with both. If you want to get down and achieve some stunning results instantly, then use OGL. If you want lots of spaghetti code and stunning results, use DirectX.

You also have to think about the type of game you're doing. If it needs graphics like Quake III, then use OpenGL, as some of that stuff still isn't in DX. If you want graphics like Quake II, use DX or OpenGL.

That's my one cent (not two; most of my comments aren't worth that much).

MENTAL

The graphics in EverQuest are rendered using D3D7...
I've actually never played Quake I, II, or III (I can, however, beat DOOM 2 with my eyes closed), so I don't know how they compare... The eye candy in EverQuest is quite nice, though there isn't anything spectacular... but that probably has very little to do with the APIs used.

From what I gather thus far, OpenGL is a much more mature API than DirectX, though I do not know DirectX well. I haven't ever used Glide or Genesis.

I was rather disappointed at first, thinking that I had to write my own matrix multiplication & stack & such - I just discovered D3DX, though.
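
For comparison, OpenGL's equivalent matrix support is built into the API itself; a minimal sketch of the fixed-function matrix stack (my own example, not anything from EverQuest or D3DX):

// Minimal sketch of OpenGL's built-in matrix stack (fixed-function GL 1.x),
// the same general kind of matrix multiply/stack support D3DX adds on the D3D side.
#include <GL/gl.h>

void drawTwoObjects(void)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glPushMatrix();                          /* save the current transform */
        glTranslatef(-2.0f, 0.0f, -10.0f);
        glRotatef(45.0f, 0.0f, 1.0f, 0.0f);
        /* ...draw the first object here... */
    glPopMatrix();                           /* restore it */

    glPushMatrix();
        glTranslatef(2.0f, 0.0f, -10.0f);
        /* ...draw the second object here... */
    glPopMatrix();
}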


Edited by - Magmai Kai Holmlor on July 3, 2000 3:38:42 AM

Edited by - Magmai Kai Holmlor on July 3, 2000 3:44:28 AM

WitchLord, don't be afraid of making a mistake in the writing of such an article.

It will be a living document.
Anything that is wrong will eventually be corrected by someone on this board.
Even if your first attempt is only 80 percent truth, that is much better than the 60 percent misinformation that these posts have a tendency to create.

A person who is new to DirectX or OpenGL will have a source of info that is unbiased (very important). It will help them make a choice (even if the choice is to use both) based upon mostly factual information.
So to summarise, you can give a first version of a document that is mostly factual and unbiased, and others can offer suggestions on ways to make the document better. Time well spent now saves much time later (kind of like a good design before the programming effort!).

Magmai Kai Holmlor:

"The graphics in EverQuest are rendered using D3D7..."

I don't believe this to be true. Direct3D rendering is SUPPORTED, but the original design was for OpenGL. Try installing OGL drivers for your video card and changing the device in EQ to OpenGL. It looks a bit smoother and nicer (they implemented a few more lighting/texturing features for GL than they did for DX).

Note, however, that both are supported fairly interchangeably. This is (sadly, some may say) the current trend in 3D graphics: two separate architectures, Direct3D and OpenGL, both with mostly the same features at a high level, being implemented and used interchangeably. It doubles the programmer's workload, to a small degree, but hey, it works.

I have dabbled in both DirectX and OpenGL and found them both to be a little intimidating at first, but pretty much equally viable; the vector sum of both systems is about the same. However, while OpenGL is a bit easier than DirectX to use in the general 3D category (at least to start out), it should be noted that some people (at least myself, so far) might have a little trouble getting OGL, specifically Windows OpenGL (wgl, or "Wiggle"), implemented as cleanly in a Windows environment.
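
To give a feel for the "Wiggle" setup being referred to, here is a rough sketch of the Windows-specific steps (error handling omitted, and the window is assumed to already exist): pick a pixel format for the window's device context, create a rendering context, and make it current.

// Rough sketch of wgl ("Wiggle") setup, assuming hwnd is an existing window.
// Error checking is omitted for brevity.
#include <windows.h>
#include <GL/gl.h>

HGLRC setupOpenGL(HWND hwnd)
{
    HDC hdc = GetDC(hwnd);

    PIXELFORMATDESCRIPTOR pfd;
    ZeroMemory(&pfd, sizeof(pfd));
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 24;
    pfd.cDepthBits = 16;

    int format = ChoosePixelFormat(hdc, &pfd);   /* find a matching format */
    SetPixelFormat(hdc, format, &pfd);

    HGLRC hrc = wglCreateContext(hdc);           /* create the GL context  */
    wglMakeCurrent(hdc, hrc);                    /* and make it current    */
    return hrc;
}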

In the end, it is the programmer's choice. If you plan on getting a job in the industry, my only advice would be this: learn both =)

(How's that for conciliatory? hehe )


-- Nathan Hoobler

I have seen some requests for a document that compares OpenGL to Direct3D, but I have decided that I will not write such a document. I would rather spend my time programming or writing tutorials. However, if someone else feels like doing it, I will gladly help by giving my "expert" opinions on Direct3D. I will also add a link to the document in the forum FAQ.



- WitchLord
