GMX

OpenGL vs. DirectX

Recommended Posts

I've heard different things when wondering whether to use DirectX or OpenGL for graphics programming. Some people have said that DirectX is more powerful/better but harder to use. Is this true?

This topic again?

http://www.google.com/search?q=opengl+vs+directx

and

http://tinyurl.com/3x6m7e (local search)

Search is a wonderful thing.

Summary? It is up to you; they can both do more or less the same things, and they do them in similar ways for the most part.

This question springs up a lot and tends to lead to pointless arguments between different camps.

Firstly, I'd better point out that the more accurate comparison is between OpenGL and Direct3D; DirectX covers a lot more than just graphics. That isn't necessarily a strike against OpenGL, though, as with libraries such as SDL you can get a similar grade of functionality for most games.

I'm not an expert, but I've read a few of these debates. Basically I think the argument comes down to this:
  • If you are aiming for cross-platform games, use OpenGL.
  • If you prefer C-style procedural programming, then OpenGL is more your flavour. If you prefer a more object-oriented approach, you might prefer Direct3D.
  • OpenGL may be slightly easier to get started in and draw things to the screen, but once you get deep into it, the two are roughly the same.
  • If you are undecided about any of these and are developing for Windows, it probably doesn't matter which one you choose. Once you are familiar with either OpenGL or Direct3D, it's pretty easy to learn the other one.
  • Ideally, once you get into a large project, you'd write the code so it isn't tied to a particular graphics API. That way you can easily swap between OpenGL and Direct3D, or give your players the choice between the two (a rough sketch of such an abstraction follows this list).
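
For what it's worth, here is a minimal sketch of what that kind of abstraction could look like in C++. The interface and class names are purely illustrative, not taken from any particular engine:

// Hypothetical renderer interface: game code talks only to IRenderer,
// so the OpenGL and Direct3D implementations can be swapped at startup.
#include <string>

struct Mesh;     // engine-side mesh description (illustrative)
struct Texture;  // engine-side texture handle (illustrative)

class IRenderer {
public:
    virtual ~IRenderer() {}
    virtual bool init(int width, int height, bool fullscreen) = 0;
    virtual void beginFrame() = 0;
    virtual void drawMesh(const Mesh& mesh, const Texture& texture) = 0;
    virtual void endFrame() = 0;
};

// One implementation per API; the rest of the game never sees these directly.
class GLRenderer  : public IRenderer { /* wraps OpenGL calls */ };
class D3DRenderer : public IRenderer { /* wraps Direct3D calls */ };

// Chosen once at startup, e.g. from a config file or command-line switch.
IRenderer* createRenderer(const std::string& api);

Game code written against IRenderer doesn't care which backend is active, which is roughly the service an engine like OGRE provides for you already.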

I've worked with both, and OpenGL is a LOT easier to understand and start with. Direct3D is cool when you get it to work, but you have a lot to do before you get it working the way you want.

But, Direct3D is a lot more powerful than OpenGL. I rarely see professional games in OpenGL, though OpenGL can be used in Linux as well.

I'm a very C-style programmer, and only use OOP features when necessary. I also use Dev-C++, which is Linux based, instead of VC++, and OpenGL is practically the only thing that works on it. This makes me exclusively OpenGL.

Now, I once tried Direct3D with VC++ and was impressed by what it can do, but ultimately I will never go back. It has too much overhead for me, and that constricts me too much. I'm just a 'weekend game programmer', so I'm not looking for flashy cool things. :)

Anyways, I'd say go with OpenGL, but if you want a real professional game, you must learn Direct3D someday. ;)

Hmmm... you signed up to post this, the most commonly asked question of all time (I think).

Troll?

Well, in any case, they're both pretty solid. Here's the first result from Google; that site has been known to have excellent articles. Personally, I think it's best to support both. There are so many cards and so many configurations out there that who knows what kind of drivers/system setup your end user will have. If you use an engine like OGRE, it will already include nearly identical support for both DirectX and OpenGL, so your user can choose which to use and you can focus on other things.

Quote:
Original post by sykosnakesix
But, Direct3D is a lot more powerful than OpenGL. I rarely see professional games in OpenGL, though OpenGL can be used in Linux as well.

[...] if you want a real professional game, you must learn Direct3D someday. ;)


Odd, I thought Doom 3 was a professional game.

Also, I still haven't seen a single feature in D3D that isn't available in OpenGL. On any platform except Vista, OpenGL is the only API that provides full access to the latest hardware. (And that will continue to be the case unless Microsoft changes their mind about backporting DX10 to older Windows versions, or updates D3D9 yet again.)

Of course, you might be referring to something else when you talk about power.

I think the previous messages give some good advice. For whatever this is worth, I learned DirectX when I took a contract to create the game engine (and the guts of the first game) for a commercial game development company. So I guess it is reasonable to say I understood DirectX pretty well by the time the game was complete and released to market.

A few months ago I had to create the guts of a new 3D engine from scratch, which had to be based upon OpenGL for portability reasons. Having gone through a painful learning curve with brand X, I was not thrilled to repeat the learning curve with brand Y. Anyway, seeing your post made me think my experience might be helpful to your question. But remember, we all have different tastes and reasons for our preferences, so possibly your experience could be the opposite of mine.

The first week or so was definitely spent getting used to a new approach to 3D, which is to say, the way they work *is* different - or about as different as they can be, given that they end up being almost identical, functionally.

However, everything after that first week was a major relief! It was like having a thorn removed after you had sort of got used to the pain. For me, the difference is that significant. While I know some developers out there might have exactly the opposite experience, it is actually difficult for me to believe it deep down. Yet I know tastes *are* often that different.

Specifically, when working with the OpenGL API, I find my thinking process is about 3D, graphics, the architecture of my application, things like that. When I work with the DirectX API, I find my thinking process is often about finding or remembering or looking up arbitrary details - the "how do I get done what I want" part. It feels like one extra, unnecessary level of information to remember and cope with, which almost seems to vanish with OpenGL.

That was my experience. Depending on your goals and your own preferences (likes and dislikes), your experience could be different, even opposite, I suppose (even if that is difficult to imagine from inside my skull).

Oh, one important difference (mentioned by others) is not a matter of taste. If you think you'll ever be creating 3D applications for other platforms, you might want to choose OpenGL now, because then you won't need to subject yourself to a learning curve repeatedly like I did. Actually, it was three times for me (because the first 3D engine I wrote was 100% my own code with no "helper" API beneath - just linear memory mapped onto the screen).

Good luck.

If you are definitely going to do Windows, Linux and OS X versions, then OpenGL is probably the easier route. Note the 'definitely' in that sentence; a vague dream of maybe doing it some day down the line shouldn't be a consideration, imo.

That aside, honestly, give both a try and see which you prefer.
About the only real advantage D3D has right now over OpenGL for the new user is the SDK and related docs; they are pretty good and easy to get hold of.

One more consideration to throw into the mix: OpenGL is changing, and soon.

I'm 99% sure that June will see the release of the new spec for OpenGL, 'Longs Peak'. This is a major change, and while the old OpenGL 2.x style of programming will of course still work, you might want to consider that you'll have to learn a reasonably different set of API functions afterwards to get the best from it.

I'm also given to believe there are some differences between D3D9 and D3D10 as well, which means the same more or less applies there; however, don't take my word on this as a definitive answer, as I'm a D3D9 newbie myself [grin]

Quote:

Original post by sykosnakesix
But, Direct3D is a lot more powerful than OpenGL.


This is wrong.
Both OGL and D3D talk to the same hardware via the same drivers; effectively, what one can do the other can do (there are small differences).

The reason you see fewer OpenGL-based commercial games is down to other factors, mostly rooted in what was going on in the OGL world a couple of years back: the slow pace of OpenGL's evolution and the unresponsiveness of the ARB led to what many termed 'extension hell', where vendor-specific paths had to be taken to get access to the latest and greatest features. This has been somewhat fixed in the last couple of years.

As for games which use it: well, anything based on the Quake or Doom engine series is of course OpenGL based, and I was surprised to find Homeworld 2 was (I knew Homeworld was). However, you'd be right in saying that many new commercial titles don't use it, on Windows at least (WoW, for example, uses it on the Mac).

I've not worked extensively with D3D on Win32 platforms, but I have on both Xbox and Xbox 360. My impression of it is that it has a C++ interface, but it's not exactly accurate to call it object-oriented. It's really almost as procedural as OpenGL in practice.

The capability comparisons are related more to your platform and driver combo than to the APIs themselves.

OpenGL's biggest strength and weakness is the "Open" part. It means that hardware manufacturers can add support for new features via extensions without the intervention of another company, but it has also historically meant that things can go for long periods of time without being standardized... leaving you in a situation where you have to implement the same feature in two or three different ways (using manufacturer extensions) because nothing has been standardized yet.

I don't know if this is a huge problem today, but it was a pretty big annoyance the last time I did OpenGL development (shortly before OpenGL 2.0 was standardized).
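
To illustrate what that looked like in practice, here is a rough sketch of picking a fragment-shading code path at startup by checking the extension string. It assumes a GL 1.x/2.x-era context where glGetString(GL_EXTENSIONS) returns one space-separated list; has_extension() is a helper written for this example, not a GL call, and you would still need wglGetProcAddress (or your platform's equivalent) to fetch the entry points for whichever path you pick.

#include <GL/gl.h>
#include <cstring>

// Returns true if 'name' appears as a whole token in the extension string.
bool has_extension(const char* name)
{
    const char* all = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    if (!all) return false;
    const std::size_t len = std::strlen(name);
    for (const char* p = std::strstr(all, name); p; p = std::strstr(p + 1, name)) {
        if ((p == all || p[-1] == ' ') && (p[len] == ' ' || p[len] == '\0'))
            return true;   // matched a whole extension name, not a prefix
    }
    return false;
}

void choose_fragment_path()
{
    if (has_extension("GL_ARB_fragment_program")) {
        // common ARB path
    } else if (has_extension("GL_NV_register_combiners")) {
        // NVIDIA-specific path
    } else if (has_extension("GL_ATI_fragment_shader")) {
        // ATI-specific path
    } else {
        // fixed-function fallback
    }
}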

So yeah, it really doesn't come down to one API being intrinsically better than the other. It comes down to which you prefer and which runs on the platforms you are targeting. (For all its portability, OpenGL isn't supported on a few significant commercial platforms.)

But this is always framed as a "versus" debate. So I'm going to make the audacious suggestion of learning and using both APIs.

Quote:
Original post by exwonder
OpenGL's biggest strength and weakness is the "Open" part. It means that hardware manufacturers can add support for new features via extensions without the intervention of another company, but it has also historically meant that things can go for long periods of time without being standardized...

[...] I don't know if this is a huge problem today, but it was a pretty big annoyance the last time I did OpenGL development (shortly before OpenGL 2.0 was standardized).


It is a bit better today, but I would still claim that OpenGL's main strengths are extensions and portability, and its greatest weakness is extensions.

Currently we can play with the G80 using NVIDIA's extensions (we could do that a few months before D3D10 was even released), but ATI/AMD doesn't seem to have any info regarding extensions for the R600 yet. So we can't know how big a problem it will be for this generation of cards.

The uncertainty is the biggest problem right now, imo.
If I write a game in D3D10, I know it will run well on ATI's next card without any modifications. If I write it in OpenGL using the new NVIDIA extensions, I will possibly have to release a patch to get the same features working on ATI hardware.

Of course, if I write it in OpenGL I get those features working on Win2k/XP as well as on non-Microsoft systems, but the number of users with G80s running those systems is easily counted (mostly OpenGL developers who like to play with shiny new features, as most gamers believe you need Vista to even use a G80).

I thought I should say something about this.

If you would like to relearn the same API from scratch every two years, then go for DirectX.

I learned DX7 on my own, through hard work, without any prior knowledge of graphics programming. Then DX8 was released, and after many days of banging my brain I found that DirectDraw wasn't there, so I started learning the sprite engine in DX8. Then I suddenly heard there was a DX9 with DirectDraw, dropped everything, and ran for it. By the time my busy life gave me enough time to download that unwieldy 300+ MB SDK, I found that DX10 had been released, so I banged my head again and stopped chasing MS. I found NeHe and learned good things about OpenGL. Since then no release of DX has excited me, and I will never think of using DX again. Who knows, by the time I learn to manage DX10 there will be a DX15 on the market.

This question has been asked too many times already. Really, there should be a sticky for this [smile].

Quote:
DirectX is more powerful/better but harder to use, is this true?

The answer is no. Both graphics APIs are about equally difficult to learn. I suggest you learn one (either one); the other can be picked up very easily afterwards. Under the hood, both APIs have equivalent functionality; they are more similar than you think. I learnt OpenGL before Direct3D, and when it came to it, Direct3D was pretty easy to pick up. I don't understand what the fuss is about. Get your 3D/2D concepts clear by learning one. That, I think, is the most important thing for learning either of the two APIs.

I agree with exwonder; there is no versus debate. Learn one, then pick up the other.

[EDIT] Yup... there is a whole article on this. Almost forgot about that.

[Edited by - _neutrin0_ on April 10, 2007 2:49:46 AM]

I find OpenGL to be a repulsive, badly designed mess. Others disagree. In the end, it doesn't matter. Any vaguely competent graphics programmer will end up being fluent in both. I think OGL is superficially easier to start with, but D3D becomes much easier once you decide to do anything significant (texturing, for example).
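
As a concrete (and admittedly unfair, since it leans on the optional D3DX helper library) illustration of that "anything significant" point, compare loading a texture from a file in each API. The image decoder on the GL side, LoadImageRGBA, is a hypothetical helper invented for this sketch; core OpenGL has no file loading at all, which is part of the point.

#include <windows.h>
#include <d3dx9.h>     // link with d3dx9.lib
#include <GL/gl.h>     // link with opengl32.lib
#include <GL/glu.h>    // link with glu32.lib

// Hypothetical image loader for the GL path (not part of OpenGL).
unsigned char* LoadImageRGBA(const char* path, int* width, int* height);

// Direct3D 9 with D3DX: one call decodes the file, creates the texture
// and generates mipmaps.
IDirect3DTexture9* LoadTextureD3D(IDirect3DDevice9* device, const char* path)
{
    IDirect3DTexture9* tex = NULL;
    D3DXCreateTextureFromFileA(device, path, &tex);
    return tex;
}

// OpenGL (1.x/2.x era): decode the image yourself, then create and fill the
// texture object by hand. Assumes a current GL context.
GLuint LoadTextureGL(const char* path)
{
    int w = 0, h = 0;
    unsigned char* pixels = LoadImageRGBA(path, &w, &h);

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGBA, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return tex;
}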

Quote:
I also use Dev-C++, which is Linux based, instead of VC++
Dev-C++ isn't "Linux based". It uses GCC as its compiler, which has nothing to do with Linux. It's also a piece of junk that should be deleted and replaced immediately.

Quote:
Original post by sykosnakesix
I rarely see professional games in OpenGL, though OpenGL can be used in Linux as well.


All PlayStation games are made with OpenGL (probably Nintendo consoles too).

Quote:

It's also a piece of junk that should be deleted and replaced immediately.


What's wrong with Dev-C++? I've tried the MS IDEs and they're horrible (stdfix.h?).

Quote:
Original post by Promit
I find OpenGL to be a repulsive, badly designed mess. Others disagree. In the end, it doesn't matter. Any vaguely competent graphics programmer will end up being fluent in both. I think OGL is superficially easier to start with, but D3D becomes much easier once you decide to do anything significant (texturing, for example).


Agreed. When OGL 2.0 was around the corner, I was hoping that the situation would radically improve. It did not. For example, FBOs still don't work with anti-aliasing (EXT_framebuffer_multisample is *still* not publicly available). Now, when I complain, people tell me that OGL 3.0 will fix those issues... sorry, but I'm tired of waiting. Unfortunately I have too much code to switch to D3D9/D3D10, so for the moment I'm sticking with OGL.

Y.

Quote:
Original post by Death100
Quote:
Original post by sykosnakesix
I rarely see professional games in OpenGL, though OpenGL can be used in Linux as well.


All PlayStation games are made with OpenGL (probably Nintendo consoles too)
Only PS3 games, and that's OpenGL ES, not OpenGL proper. No other console uses OGL, though some of the APIs resemble it.

Quote:

What's wrong with Dev-C++? I've tried the MS IDEs and they're horrible (stdfix.h?).
Please. You don't even know enough about VC to remember the name of the thing you're trying to complain about.

I can't compare OpenGL with DirectX since I never really used DirectX. But does it really make a difference?

Nowadays most of the special effects and optimization depend heavily on the video card hardware (shaders, VBOs, and so on). Five years ago you needed a lot of knowledge of your API to make nice special effects (water, reflections, and so on), but now you can do most of the work inside the shader code. Apart from choosing GLSL or HLSL (its DirectX equivalent), this won't have much to do with the API you're using.

OpenGL/DirectX just passes the data along and provides some buffers/textures. Maybe that's not 100% true, but I certainly feel most of the focus now lies on shaders and the video card hardware, rather than on the more advanced functions inside OpenGL or DirectX. Of course, you still need to know how to create a VBO or FBO in both APIs when doing the more advanced stuff, but that doesn't have much to do with performance, only with the coding style you prefer.
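
To make that concrete, here is a rough sketch of creating and drawing a VBO with fixed-function OpenGL. It assumes a current GL context and that the GL 1.5 entry points have been loaded (for example via GLEW, since opengl32.dll on Windows only exports up to GL 1.1); the API really is just "here's a block of data, draw it".

#include <GL/glew.h>

// Upload three vertices into a buffer object once, at load time.
GLuint MakeTriangleVBO()
{
    const GLfloat verts[] = {
        -0.5f, -0.5f, 0.0f,
         0.5f, -0.5f, 0.0f,
         0.0f,  0.5f, 0.0f,
    };
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
    return vbo;
}

// Draw it each frame; the vertex data now comes from the bound VBO.
void DrawTriangle(GLuint vbo)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, (const void*)0);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}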

greetings,
Rick

For those of you that know OpenGL very well, is there a way to share OpenGL resources like textures, vertex buffers, etc. between multiple processes like I can with D3D9Ex?

I don't want to have to create and initialize the same resource across multiple processes and would rather do it once and just pass a handle around.

Can I do this sort of thing when using OpenGL?

Quote:
Original post by don
For those of you that know OpenGL very well, is there a way to share OpenGL resources like textures, vertex buffers, etc. between multiple processes like I can with D3D9Ex?

Yes; I haven't used it myself, but I know it's there.
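
For what it's worth, the sharing I'm aware of is between contexts rather than between processes: on Windows, wglShareLists lets two rendering contexts in the same process share textures, display lists and buffer objects. A rough sketch is below; whether OpenGL has anything directly comparable to D3D9Ex's cross-process shared handles, I can't say.

#include <windows.h>
#include <GL/gl.h>

// Sketch: create two contexts on the same device context and share objects
// between them (e.g. a render thread and a background loading thread).
// This is per-process sharing, not sharing across processes.
void CreateSharedContexts(HDC hdc)
{
    HGLRC renderCtx = wglCreateContext(hdc);
    HGLRC loaderCtx = wglCreateContext(hdc);

    // Call before creating objects in the second context.
    if (!wglShareLists(renderCtx, loaderCtx)) {
        // sharing failed; the contexts keep separate object namespaces
    }
    // Textures created in loaderCtx are now visible from renderCtx.
}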

Honestly, I'm new at most of this. I've read many of NeHe's tutorials, so I can do some stuff in OpenGL if I set my mind to it, but I haven't actually finished anything. I just don't want to spend the time learning one of them only to find out that the other is better, etc. So what I understand is that they're both basically the same in difficulty and functionality in the long run. But as for Dev-C++ it is the compiler I am using as well. Does someone have a suggestion for a better one?

Quote:
Original post by GMX
But as for Dev-C++ it is the compiler I am using as well. Does someone have a suggestion for a better one?

Visual Studio 2005 Standard Edition, or if you can't or won't get that, then the Express Edition. I've been using the Standard Edition I got for free last year, and I definitely enjoy it. Perhaps it takes a bit of time to get used to, but I've been using various versions of Visual Studio for a long time, so VS2005 was simply similar but better.

It sort of reminds me of a similar thing with OGL/D3D, where with OGL it takes very little code and comprehension to get something on the screen, but it takes a bit more code and comprehension to do the same with D3D. Similarly, some IDEs might make it easier to quickly build a very basic program from a single source file, but VS2005 is very nice when working with any project of a more realistic size.

One thing to be aware of: video card manufacturers tend to focus on their Direct3D drivers before their OpenGL ones. Not a problem with ATI or NVIDIA, but I've had the dubious honour of trying to develop a modern OpenGL renderer on Intel accelerators, of which there are a hell of a lot out there. Their OpenGL implementation is bad.

They only support OpenGL 1.4, so no GLSL, despite having pixel shader 2.0 hardware. Non-power-of-two textures are not supported (yet they are in the Direct3D drivers). Render-to-texture is buggy and uses the old pbuffer system. Using auto-mipmapping caused driver crashes. And this is from their latest drivers as of two months ago.

Unless you're planning on cross-platform support, just stick with Direct3D. As much as I love OpenGL, its poor support and its terrible render-to-texture system (only just fixed, and not before holding back the API for years; John Carmack once said it almost made him switch to D3D) simply make it hard for me to recommend for a big project.
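
For reference, the "only just fixed" render-to-texture path being referred to here is framebuffer objects (EXT_framebuffer_object). A minimal sketch of attaching a texture as a colour render target, assuming the extension is present and its entry points have already been loaded (for example via GLEW):

#include <GL/glew.h>

// Creates an FBO with a single RGBA8 colour texture attachment (sketch).
// Assumes a current GL context that exposes EXT_framebuffer_object.
GLuint CreateRenderTarget(int width, int height, GLuint* outTexture)
{
    // Colour texture we will render into.
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // Framebuffer object with the texture as its colour attachment.
    GLuint fbo = 0;
    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, tex, 0);

    if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT) {
        // incomplete on this driver: fall back to pbuffers or glCopyTexSubImage2D
    }
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);   // back to the window framebuffer

    *outTexture = tex;
    return fbo;
}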
