mickliddy

OpenGL SlimDX, MDX or OGL?

I had a look around the forums, so forgive me if I've missed something about this already. I've been programming in C# for the past year or so, and I've started looking at trying my hand at building a game. However, I'm not exactly sure which way to go. I've heard a lot of people say to use C++ instead of C#, but I already know a decent amount of C# and am comfortable with it. So, should I stick with C# or try C++? And should I use OpenGL, DirectX/MDX, or SlimDX?

Take my words with a grain of salt: I'm a C++ DirectX programmer (before using DX I was on the OGL side), so I'm not into MDX, SlimDX, etc.

As for OpenGL vs. DirectX, you can find many threads on that here. In the end it's up to you.

AFAIK MDX is dead, so I wouldn't suggest using it. SlimDX could be a better option, but since you already know C#, I'd suggest you have a look at XNA.

I wonder why you say he should take a look at XNA. XNA isn't exactly an industry-standard way of working; it's a premade sort of engine in which a lot isn't possible, because it's aimed primarily at development for the 360 and only secondarily at the PC.

So I don't agree that XNA is the better choice over SlimDX if you already know C#; I'd rather take the opposite view. If you have little knowledge of C#, XNA is great for getting lots done in no time, but SlimDX gives you far more freedom because it has nothing premade, except that you can make all the DX calls just as you would in C++.

But that's just my word.

I'd suggest sticking with C# and learning XNA too.

C# is perfectly fine for game development, although it's limited to Microsoft platforms. C++ is still the main language used in the industry, but it's best to learn one thing at a time: learn XNA first, then look at C++ if you want.

And if you're a C# person, you'll most likely be more comfortable with XNA/DirectX than with OpenGL.

I'm a .NET developer myself, but I've been programming in C++ for years as well, so I can give you some advice. Your productivity will increase a lot if you use C# instead of C++. Some reasons:
1) First, and most important of all, the .NET Framework. In C++ you need a threading library, a sound library, a graphics library, etc. Plus, boost sucks; it just tries to emulate things C++ wasn't meant for. In .NET, however, everything is there for free, just name it: a garbage collector, a network library on top of sockets (plus the sockets themselves if you need them), continual improvement, ridiculously easy multithreading and synchronization... If you want to build your own graphical tools, WinForms and WPF are there waiting for you. And everything is well integrated.
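To put a concrete face on the "ridiculously easy multithreading and synchronization" claim, here's a minimal C# sketch; the worker setup and the SumWithThreads name are my own illustration, not from any particular game library:

```csharp
using System;
using System.Threading;

class ThreadDemo
{
    // Add `perWorker` to a shared total from several threads at once;
    // the lock statement is all the synchronization we need.
    static long SumWithThreads(int workerCount, int perWorker)
    {
        long total = 0;
        object gate = new object();

        Thread[] workers = new Thread[workerCount];
        for (int i = 0; i < workers.Length; i++)
        {
            workers[i] = new Thread(() =>
            {
                lock (gate) { total += perWorker; }
            });
            workers[i].Start();
        }
        foreach (Thread t in workers) t.Join();
        return total;
    }

    static void Main()
    {
        Console.WriteLine(SumWithThreads(4, 25)); // prints 100
    }
}
```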
2) Productivity. C++ is not meant to be a RAD language; C#, however, is designed for that, and garbage collection is the least of it. In C#, you can remove all elements of a collection matching some predicate in just one line of code (IEnumerable's extension methods plus a lambda expression). You'll never worry about simulating sprintf behaviour in C#; it's just a matter of adding the params keyword before the last argument. These are very simple examples, but in C++ you don't even have a decent exception mechanism; using exceptions there is even considered highly ineffective. For example, there is no such thing as finally, and in C++ your destructors are not always called. So your program crashes, and there's a pretty good chance it leaves lots of resources unfreed. Cool, huh?
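For instance, both one-liners look like this in practice (Describe is a made-up helper name, and List&lt;T&gt;.RemoveAll stands in for the predicate-based removal):

```csharp
using System;
using System.Collections.Generic;

class OneLiners
{
    // sprintf-like variable argument list: just mark the last
    // parameter with the params keyword.
    static string Describe(string label, params int[] values)
    {
        return label + ": " + string.Join(", ",
            Array.ConvertAll(values, v => v.ToString()));
    }

    static void Main()
    {
        // Remove all elements matching a predicate in one line.
        var numbers = new List<int> { 1, 2, 3, 4, 5, 6 };
        numbers.RemoveAll(n => n % 2 == 0); // drops 2, 4, 6

        Console.WriteLine(Describe("odds", numbers.ToArray()));
        // prints "odds: 1, 3, 5"
    }
}
```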
3) Events and component-driven programming. In C++ you can emulate this behaviour with boost (which sucks, IMO), Qt's signals and slots, or Gtkmm's signals and slots (the last two being extremely intuitive and nice to use; boost claims to have improved on gtkmm, but in reality it turned something very nice into a complete disaster). However, the resulting code is ugly, and it's not easy to get started: the examples online are extremely simple, so if you need something complicated you're left on your own, and you'll lose a lot of time researching something that's a matter of course in C#. Look at System.Collections.ObjectModel.ObservableCollection and you'll see what I mean. You can do it in C++, but you'll lose a lot of time before you even start coding something as simple as that. In boost's signals you don't have full control of an event's add and remove overrides (just like a property's get and set, if you've never customized an event). Something essential for component-driven programming (objects in games tend to be components with events, properties and all) is data binding, which you have to emulate on your own in C++. Not that it's difficult, but every time you have to emulate a feature there's a chance you make a mistake somewhere, and most importantly, you lose time.
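For comparison, here's how little ceremony the ObservableCollection route needs in C# (the enemy-list scenario is just a made-up example):

```csharp
using System;
using System.Collections.ObjectModel;

class EventDemo
{
    static void Main()
    {
        // ObservableCollection raises an event whenever it changes;
        // subscribing is one += line, no signal/slot plumbing needed.
        var enemies = new ObservableCollection<string>();
        enemies.CollectionChanged += (sender, e) =>
            Console.WriteLine("Change: " + e.Action);

        enemies.Add("goblin");   // prints "Change: Add"
        enemies.RemoveAt(0);     // prints "Change: Remove"
    }
}
```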
4) Visual Studio. It's free for both C++ and C#, but code completion for C++ isn't good at all. It's the best there is (Eclipse CDT is very close, though), yet it still isn't as good as for C#. Microsoft is investing a lot of money in .NET, and I'm left with the feeling that they just neglect C++. There are .NET-only features in Visual Studio which I find invaluable. For example, before you start coding you have some sort of design for your game; well, you can visually (via drag & drop) create a class diagram, and the tool generates the skeleton code for you. Every method is "implemented" with throw new NotImplementedException(), so you can't forget to implement them :D. There are very nice shortcuts: automatically adding a using statement, making a property out of a field, renaming via refactoring, etc. Code snippets are very time-saving as well: type for, then double-press Tab, and it generates the skeleton for you. The same goes for all code structures, and there's a shortcut for automatic interface and abstract class implementation (skeletons).

In conclusion, productivity in C++ is just not good enough once you're used to C# and .NET. I'd use C# or Python over C++ for my projects any time. If you want to use C# and OpenGL, you can try OpenTK; it seems very SDL-like (and is therefore very nice). For DirectX, as you were told: XNA is very beginner-friendly.

Just to add: SlimDX is nowadays good to use for development. It is (at least in my experience) stable and isn't going to change hugely, but to be sure, some of its developers should tell you these things.

If you're a beginner, XNA is great. It depends on your needs as well: are you going to develop an engine, or are you going to make a simple scene with some effects, without bothering with how it all actually works inside out?

Quote:
Original post by Evil Steve
C# is perfectly fine for game development, although it's limited to Microsoft platforms.


Come on Steve, you're smarter than that. C# isn't limited to Microsoft; I've seen C# games running on Mac OS. Portability depends on what APIs you use, not the language: you'd have to use Mono instead of .NET and Tao instead of DirectX.

SlimDX - Windows
XNA - Windows, Xbox 360, Zune
OpenGL - Windows, Mac, Linux

Not trying to say SlimDX is bad; I actually prefer it to the others. It just isn't as portable. If you don't care about that, then give it a try.

Quote:
Original post by Scet
Quote:
Original post by Evil Steve
C# is perfectly fine for game development, although it's limited to Microsoft platforms.


Come on Steve, you're smarter than that. C# isn't limited to Microsoft; I've seen C# games running on Mac OS. Portability depends on what APIs you use, not the language: you'd have to use Mono instead of .NET and Tao instead of DirectX.
You're right, I was having some sort of brain fart [embarrass]

I should have said: it's not available on as many platforms as C++ is.

Quote:
Original post by FeverGames
I wonder why you say he should take a look at XNA. XNA isn't exactly an industry-standard way of working; it's a premade sort of engine in which a lot isn't possible, because it's aimed primarily at development for the 360 and only secondarily at the PC.


Whoa whoa whoa whoa, let's hit the brakes and back up a bit here.

XNA is not an engine or anything close to it. It has a few optional components like the Game class and the Content Pipeline that could be parts of a much larger engine, but if so they would be a small part of it.

What is it that you think is "not possible" with it? On the PC side of things, the Graphics component is a wrapper of D3D9. The only things you can't do are the vendor-specific hacks, which off the top of my head include Nvidia's hardware shadow maps, ATI's R2VB, and any of the old methods for binding the depth buffer as a texture. Not exactly deal-breakers, IMO (especially when you consider that, in exchange, you gain compatibility with a console).

As for your assertion that it's "based on development for the 360", I'm a little curious as to what leads you to say that. If you ask me or any of the other guys who develop for the 360 using XNA, I doubt you'll find any that wouldn't agree that the PC is a much better choice for running a managed framework.

I say this because if you start implementing things like low-level audio APIs or widely used physics engines, you're not going to be able to deploy on the 360 anymore, which is kind of a big part of the whole XNA thing. It also doesn't let you easily swap to things like DX10 or DX11 in the future, if ever.

OK, so XNA might not be an engine, but it does do lots of things for you which you might not want.

But this might be getting a bit too off-topic. I'm just not a big fan of XNA :) It's a great way to start with 3D and get results fast on both the Xbox and PC. Just don't cross the lines Microsoft gives you (audio-specific, physics-specific, scripting languages other than ones written in .NET) :)

Awesome, thanks guys :) Explained a lot there, Dilyan, ty for that :) I was always told that C++ was the better language, but your explanation has a lot more backing it up than the 'just cause' I was told when I asked why :)
So that leaves XNA/SlimDX/OpenGL. Is XNA a library (like MDX), or more like MSVS? If I decide to use XNA/SlimDX, what's compatibility like with OGL? Can I create my own functions that utilise both without too much difficulty, or should they stay more or less apart?

Quote:
Original post by mickliddy
Awesome, thanks guys :) Explained a lot there, Dilyan, ty for that :) I was always told that C++ was the better language, but your explanation has a lot more backing it up than the 'just cause' I was told when I asked why :)
So that leaves XNA/SlimDX/OpenGL. Is XNA a library (like MDX), or more like MSVS? If I decide to use XNA/SlimDX, what's compatibility like with OGL? Can I create my own functions that utilise both without too much difficulty, or should they stay more or less apart?


XNA is a library like Managed DirectX. The only caveat is that you need to distribute the XNA runtime with the things you make, whereas the old MDX came pre-packaged with DirectX.

I'm not quite sure what you're asking about compatibility, though. Are you asking if the different APIs can work together? If so, I'd say more or less no: you wouldn't want to combine them within the same project. You could use different ones for different tools, obviously, but I don't see why you'd work with multiple libraries.

If you're just asking for a comparison of compatibility between SlimDX/XNA and OpenGL, then OpenGL is by far the most cross-platform API. SlimDX and XNA are pretty much limited to Windows machines (XNA stuff can also be deployed on the Xbox 360/Zune), while OpenGL enjoys deployment on Mac, Linux, and Windows alike.

I'm asking more: if I build using SlimDX, can I also use OGL if I decide I'd like more platforms to be supported? I know very little about SDX, but say I create a graphics device with SDX and then get it to 'do stuff' (yes I know, this is very descriptive... I did say I knew very little), such as drawing some triangles and then rendering. Can I do something like...

string ugc = GetUsersGraphicsChoice(); // "SDX", "OGL", or null
switch (ugc)
{
    case "SDX":
        // SDX code to set up the device goes here.
        break;
    case "OGL":
        // OGL code that does the equivalent goes here.
        break;
    default:
        // Hasn't chosen: do something else.
        break;
}

while (gameIsRunning)
{
    if (ugc == "SDX")
    {
        // Draw what needs to be drawn here with SDX.
    }
    else
    {
        // Draw what needs to be drawn here with OGL.
    }
}

You can do that, yes. However, if you do decide you want to support more APIs, I'd highly recommend you encapsulate the separate renderers into separate plugin modules or something, and create some sort of unified rendering scheme for drawing. Having two totally different sets of rendering code in one area would look confusing as hell.
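A unified scheme along those lines could look like this minimal sketch; every type name here (IRenderer, SlimDxRenderer, OpenGlRenderer) is a hypothetical placeholder, not a real SlimDX or OpenGL type:

```csharp
using System;

// The game loop only sees IRenderer; each API lives in its
// own implementation and could be loaded as a plugin module.
interface IRenderer
{
    void DrawFrame();
}

class SlimDxRenderer : IRenderer
{
    public void DrawFrame() { Console.WriteLine("drawing with SlimDX"); }
}

class OpenGlRenderer : IRenderer
{
    public void DrawFrame() { Console.WriteLine("drawing with OpenGL"); }
}

class Program
{
    static void Main(string[] args)
    {
        // Pick the renderer once, up front; the loop stays API-agnostic.
        IRenderer renderer = (args.Length > 0 && args[0] == "ogl")
            ? (IRenderer)new OpenGlRenderer()
            : new SlimDxRenderer();

        for (int frame = 0; frame < 3; frame++)
            renderer.DrawFrame();
    }
}
```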

Awesome, thanks everyone for the replies :)
I've decided to go with C#, and to use SDX with the possibility of adding OGL at a later date. Maybe if I'm lucky I'll have a few people who like the end product xD

Quote:
Original post by FeverGames
I say this because if you start implementing things like low-level audio APIs or widely used physics engines, you're not going to be able to deploy on the 360 anymore, which is kind of a big part of the whole XNA thing. It also doesn't let you easily swap to things like DX10 or DX11 in the future, if ever.

OK, so XNA might not be an engine, but it does do lots of things for you which you might not want.

But this might be getting a bit too off-topic. I'm just not a big fan of XNA :) It's a great way to start with 3D and get results fast on both the Xbox and PC. Just don't cross the lines Microsoft gives you (audio-specific, physics-specific, scripting languages other than ones written in .NET) :)


Well, we're obviously talking about the PC here (since we're comparing XNA with SlimDX, OpenGL, etc.), so I'm not sure how the restrictions of the 360 runtime environment apply. On the PC you can use whichever components of XNA you want; you're not at all limited in terms of physics, audio APIs, or any of the other things you mentioned.

Well, I just got SDX and was having a mess around with it when I noticed that it seems to lack an equivalent of DirectX.AudioVideoPlayback... is there any particular way I should go about playing back AVI files (without having to pay an arm and a leg for Bink), or is this just very well hidden inside the SDK?

Quote:
Original post by mickliddy
Well, I just got SDX and was having a mess around with it when I noticed that it seems to lack an equivalent of DirectX.AudioVideoPlayback... is there any particular way I should go about playing back AVI files (without having to pay an arm and a leg for Bink), or is this just very well hidden inside the SDK?


As of now, the preferred method is to use DirectShow.Net, a separate project that wraps DirectShow; DirectX.AudioVideoPlayback was based on that, IIRC. We may add some video support in the future, but for now it's not on the menu.

I think if you want to 'wrap' two different APIs together, you want to go for the most low-level APIs, like DX and OpenGL, since XNA comes with quite a few premade classes. If you want to be able to run a game on a platform that doesn't support .NET, you can't reference XNA at all (so you can't use its premade classes). Of course, you could also simply not use the premade classes, but then using XNA at all sounds weird to me; that's more of a feeling than a performance or other consideration. Of course, it's all doable ;)

