reloadead

OpenGL
The future of graphics APIs


Hey people,

 

I am in the (pre)process of making a 3D engine/framework to fool around in. This is mainly a learning thing for me at the moment, but I do want to keep an eye on the future. Most likely this thread will be more of a "rubber ducking" thing rather than a "Help me choose" thread, but I value well-argued opinions.

 

Now one of the bigger choices I am facing is whether to use OpenGL or DirectX, but I am not entirely sure which one would be the best pick with the future in mind together with learning new stuff.

 

The thing is, I have made a couple of 3D frameworks for smaller assignments using ("modern") OpenGL, so I already know a good deal about it, which also means there is less left for me to learn there compared to DirectX, which I was only able to scratch the surface of in the past with D3D9.

 

One of the things that I am wondering about however is the future of both of these APIs.

 

From sources I am not sure I am allowed to mention, I heard that NVIDIA is experimenting with ditching graphics APIs altogether and instead letting us program the graphics card directly. I am not sure how accurate this is; it may also be something that was misinterpreted.

 

I also noticed that some companies (like Valve) have started switching to OpenGL for the sake of multiplatform compatibility, and I don't really see DirectX going multiplatform anytime soon (if ever).

 

Also, with the exception of the Xbox consoles, I think most consoles use OpenGL or something more similar to OpenGL.

 

What do you think the future has in store for us? I did google a bit, but I can't really find any good articles about the direction of graphics APIs in the (near) future.

 

What would you choose to do if you were chasing a career in graphics programming? Broaden your knowledge, or go deeper into what you already know?


From sources I am not sure I am allowed to mention, I heard that NVIDIA is experimenting with ditching graphics APIs altogether and instead letting us program the graphics card directly. I am not sure how accurate this is; it may also be something that was misinterpreted.


This I find questionable. There is so much going on behind the scenes that the only way for this to be possible would be for nVidia to write their own API for their hardware. I wouldn't put it past them, and they have already done something similar with CUDA, but I doubt it.

What would you choose to do if you were chasing a career in graphics programming? Broaden your knowledge, or go deeper into what you already know?


If I could go back and do this all over again, I would cut back my time spent learning OpenGL and add time learning DirectX. Both APIs have their merits, but I only know OpenGL and that puts me at a distinct disadvantage. If I were to want to make a career out of this, I would want as broad a skill set as possible.


The first is a plus for OpenGL, the second for DirectX, since the state machine approach of OpenGL is apparently broken and makes some things not as nice as in DirectX. I'm not sure if that is going in a better direction (it likely is), but for that reason you might want to at least try DirectX to see how things are implemented there.

 

I have heard counter-arguments, though, that OpenGL in its current state is more robust. Can you clarify?

 


No, unless by "similar to OpenGL" you mean that they have a plain-C interface, or unless you'd also say that D3D is similar to OpenGL (which it kind of is).
The exception is the PS3, which provides its own native graphics API (which everyone should be using), and also provides a crappy wrapper around it called PSGL, which makes it look similar to GL.

 

Well, I have actually only programmed on the PSP and PS3, and in general it felt pretty much the same as OpenGL; that's actually what I meant :)

 


Interpreting this statement another way though, your source could be implying that with the arrival of OpenCL/CUDA/DirectCompute, the GPU hardware is becoming more and more open to how it is used, rather than forcing us to follow the traditional pipeline specified by GL/D3D. That sentiment is definitely true -- GPUs have certainly become "GPGPUs", or "compute devices", that are extremely flexible.

 

I think that is most likely the case. I'm not from the era of driver-free programming, but I imagine there will always be a layer in between, or else a commercial entity will most likely create one.
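To give a concrete flavour of that "compute device" flexibility, here is a minimal, purely illustrative C++/D3D11 sketch of dispatching arbitrary work outside the traditional vertex/pixel pipeline. The compute shader blob and UAV are assumed to already exist, and all names are placeholders, not from any real engine.

```cpp
// Illustrative only: running generic ("GPGPU") work through D3D11 compute.
#include <d3d11.h>

void RunParticleUpdate(ID3D11Device* device, ID3D11DeviceContext* ctx,
                       const void* csBytecode, SIZE_T csSize,
                       ID3D11UnorderedAccessView* particleUAV, UINT particleCount)
{
    ID3D11ComputeShader* cs = nullptr;
    device->CreateComputeShader(csBytecode, csSize, nullptr, &cs);

    ctx->CSSetShader(cs, nullptr, 0);
    ctx->CSSetUnorderedAccessViews(0, 1, &particleUAV, nullptr);

    // One thread group per 256 particles, matching [numthreads(256,1,1)] in the shader.
    ctx->Dispatch((particleCount + 255) / 256, 1, 1);

    cs->Release();
}
```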


Both major APIs are converging towards the hardware, and the hardware from competing vendors is converging towards similar logical programming models, if not physical architecture. Beyond that, I think the trend will be to push programmability down into more stages of the pipeline -- truly programmable tessellation seems a shoo-in at some point, and lots of smart people have said programmable texture sampling would be nice (although that gets rather hard to implement in hardware). Currently, the most modern GPUs are heavily optimized for compute but still clearly graphics-first; I think in the future GPUs will flip this around and compute will become the first-class citizen, but they'll keep the necessary fixed-function components (texture sampling, ROPs, etc.) to maintain leading graphics performance.

 

The only really divergent thing we know is going to happen is that nVidia is going to start putting ARM CPU cores on their GPUs. That'll probably have a lot of interesting applications that people have yet to think of.


Up to a year ago I would have unreservedly recommended D3D - it's a cleaner API, the drivers are more robust, it has better tools and support, and it shows more consistent behaviour on different hardware/driver combos.  Nowadays I'm not so sure.  I would have hoped that MS had learned their lesson from Vista - locking D3D versions to Windows versions is not a good idea - but it seems that they haven't.  None of that takes away from the fact that D3D9 and 11 are still the best-in-class of their generations, and even with the new Xbox being 11.2, it's a safe bet that the majority of titles will still target vanilla 11 in the PC space for at least the next few years.

 

OpenGL's portability is not as big a deal as it's often made out to be.  Even if you don't give a damn about portability, you can still hit ~95% of the PC target market (assuming that the latest Steam survey is representative).  Unless you're going for a very specific, very specialized target audience where you know for a fact that the figures are different - don't even bother worrying about it.

 

The really big deal with OpenGL is that it does allow new hardware features to be accessed without also requiring an OS upgrade.  You can also drop in handling for them without having to also rewrite the rest of your renderer.  That of course needs to be balanced against the driver situation (a safe bet is to just copy whatever id Software titles do as you can be certain that at least is well supported) and the mess of vendor-specific extensions (which can be seriously detrimental to hardware portability).
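As an illustration of that extension-driven model, detecting a new hardware feature at runtime in a core-profile context only takes a few lines. This is a generic sketch; the extension string checked at the end is just an example, not a recommendation, and any GL loader that exposes glGetStringi will do.

```cpp
// Sketch: runtime extension detection (GL 3.0+ core-profile style).
#include <cstring>
#include <GL/glew.h>   // or any other loader that provides glGetIntegerv/glGetStringi

bool HasGLExtension(const char* name)
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i)
    {
        const char* ext = reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i));
        if (ext && std::strcmp(ext, name) == 0)
            return true;
    }
    return false;
}

// Usage: enable a newer code path only when the driver exposes it,
// with no OS or API-version upgrade required, e.g.:
// bool useBindless = HasGLExtension("GL_ARB_bindless_texture");
```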

 

Longer term it is always most beneficial to know both, of course.


Firstly, this is not a rant, just my point of view :)

 

 

From sources I am not sure I am allowed to mention, I heard that NVIDIA is experimenting with ditching graphics APIs altogether and instead letting us program the graphics card directly. I am not sure how accurate this is; it may also be something that was misinterpreted.

This was the state of affairs before Glide/GL/DX/etc arrived on the scene. It was a nightmare for developers.

 

Secondly, I see it the other way around: it was unified, nice, and easy to develop for before the various APIs arrived. You wrote your software in C, and everything, from game logic to sound mixing to rasterizing triangles and blending pixels, was just one unified code base.

You could write a triangle rasterizer in combination with a voxel (heightmap) tracer, like Comanche or Outcast. All you cared about was one unified code base.

 

Then those APIs came up, which started a real nightmare from a programmer's point of view (no matter whether a software or hardware API, i.e. a command buffer). It was the only way to get access to the speed of rasterization hardware; those first chips, like S3's, were not even faster than CPUs, but they ran in parallel, so you could do other work and in total it was faster. With Glide, GL, and DX you then completely lost your freedom to the devil.

Back then you worried about how many triangles, vertices, or sprites you could render; now it has turned into how many API calls you can make. Imagine how backwards that actually is. With low-level access on consoles running ten-year-old hardware, you can manage ten times the draw calls of the newest PC hardware, purely due to API limitations. And those are not getting better, but worse, every time. DX was already about twice as slow as OpenGL on Windows XP (that's what NVIDIA claimed in some presentation). Vista + DX10 was supposed to speed things up by introducing state objects, but DX9 and DX10 games showed that DX9 actually ran fast, for the simple reason that DX9 games sparsely updated just the few states they needed, while a state object always sets all of its states, even if 90% of them are equal to the previous one (drivers don't do state guarding; it's not their job). With DX11 compute you got another slowdown: vendors recommend switching no more than twice per frame between compute and 3D rendering, because it is a pipeline reconfiguration; they have to flush, halt, reconfigure, and restart the pipe. (It's less bad nowadays, but when DX11 launched that was the case.)
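To make the "sparse state update" point concrete, here is a minimal sketch of the kind of redundant-state filter a D3D9-era renderer would typically layer over the device, so that only states that actually change hit the driver. It is illustrative only, not any particular engine's code.

```cpp
// Sketch: app-side state guarding around IDirect3DDevice9::SetRenderState.
#include <d3d9.h>
#include <unordered_map>

class RenderStateCache
{
public:
    explicit RenderStateCache(IDirect3DDevice9* device) : m_device(device) {}

    void Set(D3DRENDERSTATETYPE state, DWORD value)
    {
        const DWORD key = static_cast<DWORD>(state);
        auto it = m_shadow.find(key);
        if (it != m_shadow.end() && it->second == value)
            return;                           // redundant change: skip the API call
        m_shadow[key] = value;
        m_device->SetRenderState(state, value);
    }

private:
    IDirect3DDevice9*                 m_device;
    std::unordered_map<DWORD, DWORD>  m_shadow; // last value pushed for each state
};

// Usage: cache.Set(D3DRS_ZENABLE, TRUE); cache.Set(D3DRS_ALPHABLENDENABLE, FALSE);
```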

 

We can now handle more lights than we can handle objects on PC (without instancing), and if you look at a game like Crysis from 2007 and compare it to current games, you will see roughly the same number of draw calls, purely due to API limitations.

 

 

GPU vendors try to sell APIs as the solution to big incompatibility problems, but that's really just marketing. Look at CPUs: you can run 30-year-old code on current CPUs, and you can recompile your C/C++ code for x86, ARM, MIPS, or PowerPC and mostly 'just run' it.

 

Programming GPUs 'without an API' doesn't mean you write your command buffer at the hardware level; that's not the point, that was the misguided start of APIs. Writing for GPUs would mean that you create your C++ code (or whatever compiled language you prefer), compile it to an intermediate language (e.g. LLVM bitcode), and execute it on some part of the GPU. That part would start 'jobs' that do what you intended to do.

Similar to the culling code DICE runs in compute, but for everything. You could transform all objects with simple C++ code, apply skinning, run water simulation, draw triangles or trace heightmap voxels; if you want, you could use path tracing or simply draw 2D sprites for particles, without any API calls from your desktop application to the GPU!

 

Nowadays even NVIDIA and ATI are starting to be unhappy with 'the API', which really means they want other APIs, but MS is just not updating as frequently as it used to, and the industry just does not care: most games would still run nicely on DX9, and the current consoles ARE DX9-level hardware.

 

So anyone who wants a truly future-proof API should write 99% of the engine in OpenCL/CUDA. (I recommend the Intel SDK; you can profile, debug, etc. in Visual Studio, just like normal C++.) You can push 100k draw calls at 60 Hz if you like, you can keep DCT-compressed textures on the GPU and decompress tiles of them on demand, you can bake lightmaps on demand (like Quake 1 did with surface caches), you can implement some sub-d, you can do occlusion culling while drawing, on the GPU, with zero latency, and you can filter Ptex textures without hacking around to get proper filtering at the borders.
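To give an idea of what that looks like in practice, here is a minimal, purely illustrative OpenCL host-side sketch that runs a simple "job" (transforming a batch of positions by a matrix) on the GPU without touching GL or D3D. Error handling is omitted and all names are made up for the example; it is a sketch of the approach, not production code.

```cpp
#include <CL/cl.h>
#include <vector>
#include <cstdio>

// OpenCL C kernel, kept as a string: transform float4 positions by a column-major 4x4 matrix.
static const char* kSource = R"CLC(
__kernel void transform_positions(__global float4* pos, __constant float4* m)
{
    size_t i = get_global_id(0);
    float4 p = pos[i];
    pos[i] = m[0]*p.x + m[1]*p.y + m[2]*p.z + m[3]*p.w;
}
)CLC";

int main()
{
    cl_platform_id platform; clGetPlatformIDs(1, &platform, nullptr);
    cl_device_id   device;   clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, nullptr);
    cl_int err = 0;
    cl_context       ctx   = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);

    cl_program program = clCreateProgramWithSource(ctx, 1, &kSource, nullptr, &err);
    clBuildProgram(program, 1, &device, nullptr, nullptr, nullptr);
    cl_kernel kernel = clCreateKernel(program, "transform_positions", &err);

    // Host data: 1024 positions at the origin, plus an identity matrix (column-major).
    std::vector<cl_float4> positions(1024);
    for (auto& p : positions) { p.s[0] = 0.f; p.s[1] = 0.f; p.s[2] = 0.f; p.s[3] = 1.f; }
    cl_float matrix[16] = { 1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1 };

    cl_mem posBuf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                   positions.size() * sizeof(cl_float4), positions.data(), &err);
    cl_mem matBuf = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                   sizeof(matrix), matrix, &err);

    clSetKernelArg(kernel, 0, sizeof(cl_mem), &posBuf);
    clSetKernelArg(kernel, 1, sizeof(cl_mem), &matBuf);

    size_t globalSize = positions.size();
    clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &globalSize, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(queue, posBuf, CL_TRUE, 0,
                        positions.size() * sizeof(cl_float4), positions.data(), 0, nullptr, nullptr);

    std::printf("first transformed position: %f %f %f %f\n",
                positions[0].s[0], positions[0].s[1], positions[0].s[2], positions[0].s[3]);
    return 0;
}
```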

 

 

 

And let's not even ask "isn't it slow?". Vertex shaders ran slower than fixed-function TnL, and proper pixel shaders (see the GeForce FX) were also ridiculously slow compared to 'hard-wired pixel pipelines'. You could fit 3540 Voodoo Graphics chips into the GTX 680's transistor budget, rasterizing 10+ billion triangles/s, 10x what the GTX 680 can do. Of course, that's naive and useless math, just like comparing a pure GPGPU pipeline with some hard-wired triangle rasterizers.

 

 


I'm really not happy with either API. :(

 

Direct3D 10 and 11 have been pretty solid (more so than OpenGL IMO), but I really don't trust where Microsoft has been heading, and it seems worse to me every year (e.g. not making the latest graphics API available on all their popular operating systems). It's always been a problem for me that Direct3D is tied to a single company and OS, and now it seems it may be increasingly tied to *specific* versions of that OS (artificial scarcity, anyone?).

 

The fact that OpenGL is, well, open and available on a wide variety of platforms is great. That's a HUGE advantage over Direct3D. Unfortunately, I think the design of OpenGL is shit, to be honest, and then you combine that with the poor quality of the various drivers. I would love to see a new version of OpenGL (OGL 5, maybe?) that isn't based on a new set of hardware features, but is instead a ground-up redesign of the API, with no focus on compatibility with previous versions and instead a focus on making things work well. Maybe they could start by copying Direct3D 11 and then improve from there? :D I can dream.

 

What are you to do? Those are really your only two options if you want accelerated 3D today, and that is a damn shame, I think. What I would really love to see is for companies like AMD and Nvidia to open up their hardware specs and drivers. Maybe then it would be easier for competitive drivers, or even competitive APIs, to emerge. Maybe there will soon be a massive shift in CPU architecture: instead of a handful of heavyweight cores you'll have hundreds of lightweight cores. It would basically be like a GPU, only more freely programmable (no more drivers!), and at that point you could implement OpenGL, or an alternate graphics API, entirely in software. Again, I can dream.


I would love to see a new version of OpenGL (OGL 5, maybe?) that isn't based on a new set of hardware features, but is instead a ground-up redesign of the API, with no focus on compatibility with previous versions and instead a focus on making things work well. Maybe they could start by copying Direct3D 11 and then improve from there? :D I can dream.


They tried that: it was known as Longs Peak and it got scrapped in favour of GL 3.0, to much uproar from many.
Both NV and AMD were firmly behind it, but it was killed about six months before GL 3.0's release for reasons that were never really explained.

(Many people blamed CAD companies at the time, but I heard from an IHV employee who was working on the LP spec that it wasn't them. My personal theory is that Apple and/or Blizzard put the boot in, as Apple probably had no desire to redo their API and Blizzard wanted cross-platform coverage with the latest features... but that's just my opinion.)


What do you think the future has in store for us?

OpenGL ES.

 

The install base (smart phones, tablets, embedded devices and WebGL) already dwarfs the install base of either DirectX or desktop GL. Plus, through judicious use of libraries, it runs anywhere DirectX does.

 

Does it offer all the latest and greatest features? Not yet, but it's catching up. And how many games actually require DX11, anyway?



And how many games actually require DX11, anyway?

 

A lot of the best-looking ones. :D


A lot of the best-looking ones. :D

They can use DX11 features, sure. But how many games actually require DX11?

 

The only one I am currently aware of is Crysis 3.


 

A lot of the best-looking ones. :D

They can use DX11 features, sure. But how many games actually require DX11?

 

The only one I am currently aware of is Crysis 3.

 

Crysis 3 doesn't actually require DX11-level functionality; it can run on DX10 hardware.

However, I suspect that once PS4/XB1 games start coming out you'll see a lot of multiplatform games with PC versions that require DX11-level functionality, since that's what's available on the consoles. That's exactly what happened with PS3/XB360 and SM3.0 hardware.

Anyway, I think the biggest improvement you could get from a GPU-oriented API would be to get rid of the "bind to shader registers" model. Modern GPUs don't work like that anymore, and are capable of being much more generic in terms of accessing memory. It would be cool to have access to this flexibility on PC hardware, and also to eliminate some of the ridiculous driver overhead.
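To illustrate the model being criticised, here is a small hypothetical D3D11 snippet: host code binds resources to fixed, numbered slots that must line up with the register(b0)/register(t0)/register(s0) declarations in the HLSL, and this slot juggling is repeated for every draw that uses different resources. The function and parameter names are placeholders.

```cpp
// Sketch of the slot/register binding model in D3D11.
#include <d3d11.h>

void BindMaterial(ID3D11DeviceContext* ctx,
                  ID3D11Buffer* perObjectCB,          // matches cbuffer  : register(b0)
                  ID3D11ShaderResourceView* albedo,   // matches Texture2D : register(t0)
                  ID3D11SamplerState* linearSampler)  // matches SamplerState : register(s0)
{
    ctx->VSSetConstantBuffers(0, 1, &perObjectCB);
    ctx->PSSetConstantBuffers(0, 1, &perObjectCB);
    ctx->PSSetShaderResources(0, 1, &albedo);
    ctx->PSSetSamplers(0, 1, &linearSampler);
    // The shader can only see what was bound to these slots, rather than
    // reading arbitrary GPU memory, which is part of the per-draw overhead
    // discussed above.
}
```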

Edited by MJP

I did write up a simple design for a graphics API on Beyond3D: http://forum.beyond3d.com/showthread.php?t=63565

Unfortunately, no one has publicly commented on it (it's nowhere near finished, though) :(

 

I really want something way simpler and way more empowering than what we have; anyone who knows how GPUs work can see that OpenGL/D3D are major obstacles to using them efficiently. Unfortunately, some of that is due to OS stability/security concerns :(

(Also, if you have worked on current-gen consoles, you probably know of smaller APIs.)


Well, one of the main reasons why APIs even exist in the first place is the huge variety of hardware.  A common API that abstracts away those differences is absolutely essential; otherwise you end up writing a different rendering back end for every piece of graphics hardware you want to support, and hoping that it doesn't explode when vendors bring out a new generation.  API overhead is a tradeoff you make in order to get that, and - of course - consoles don't have this problem because all Xbox 360s (for example) have the same hardware.  That's just not an apples-to-apples comparison.

 

Of course NVIDIA would like you to be coding directly to their hardware; that would give them a potential performance and quality advantage over their competitors.  It's their own benefit that they're really thinking of here.  AMD made similar noises a couple of years back too.

 

Regarding the potential for quality - all current APIs are already there, and have been for some time.  John Carmack noted this back in April 2000: http://floodyberry.com/carmack/johnc_plan_2000.html#d20000429

 

 

Mark Peercy of SGI has shown, quite surprisingly, that all Renderman surface shaders can be decomposed into multi-pass graphics operations if two extensions are provided over basic OpenGL: the existing pixel texture extension, which allows dependent texture lookups (matrox already supports a form of this, and most vendors will over the next year), and signed, floating point colors through the graphics pipeline. It also makes heavy use of the existing, but rarely optimized, copyTexSubImage2D functionality for temporaries.

 

The truth is that everything else is just gravy.  You get cleaner, more efficient, higher performing ways of doing things, but contrary to what marketing departments would like you to think, there's absolutely nothing you can do in D3D11 that you can't also do in D3D9 (or even OpenGL ARB assembly shaders).  You will be doing it differently, for sure, and it may not be viable for real-time with the older APIs, but the capability remains there.


The truth is that everything else is just gravy.  You get cleaner, more efficient, higher performing ways of doing things, but contrary to what marketing departments would like you to think, there's absolutely nothing you can do in D3D11 that you can't also do in D3D9 (or even OpenGL ARB assembly shaders).  You will be doing it differently, for sure, and it may not be viable for real-time with the older APIs, but the capability remains there.

That's a ridiculous statement. Of course you can "do everything with older APIs, just not in real time". In the same way, you could compute everything on the CPU, write the output into a texture, and then render it, or you could "compute" everything with pen and paper and then push buttons to display the output; if you don't make any mistakes you'll also get the same output.

That's the point of new APIs: to make things easier and faster, and faster means new algorithms become possible for real-time simulation.

You think you are thinking outside the box while you are reinventing the wheel.

Edited by Titan.

 


The truth is that everything else is just gravy.  You get cleaner, more efficient, higher performing ways of doing things, but contrary to what marketing departments would like you to think, there's absolutely nothing you can do in D3D11 that you can't also do in D3D9 (or even OpenGL ARB assembly shaders).  You will be doing it differently, for sure, and it may not be viable for real-time with the older APIs, but the capability remains there.

That's a ridiculous statement. Of course you can "do everything with older APIs, just not in real time". In the same way, you could compute everything on the CPU, write the output into a texture, and then render it, or you could "compute" everything with pen and paper and then push buttons to display the output; if you don't make any mistakes you'll also get the same output.

That's the point of new APIs: to make things easier and faster, and faster means new algorithms become possible for real-time simulation.

You think you are thinking outside the box while you are reinventing the wheel.

 

 

On the contrary, I think he makes a point that has some practical value, whereas you're taking what he says to some literal, logical extreme for pretty much no reason and not offering any useful information at all.

The truth is, most "D3D11" features can be implemented (or at least approximated) in a perfectly practical way in D3D9, etc. whereas computing everything on the CPU probably isn't practical at all.


You want to keep it practical? Well, which functionality could you really implement on DX9 without completely ruining performance? Tessellation with geometry shaders? And how would you do instancing?
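For reference on the instancing point, D3D9 does expose a form of hardware instancing on SM3-class hardware via stream-frequency dividers, which is the sort of "approximated in D3D9" case the earlier posts describe. A minimal sketch follows; buffer and parameter names are placeholders, error handling is omitted, and a vertex declaration spanning both streams is assumed.

```cpp
// Sketch: D3D9 hardware instancing with SetStreamSourceFreq.
#include <d3d9.h>

void DrawInstanced(IDirect3DDevice9* dev,
                   IDirect3DVertexBuffer9* meshVB, UINT meshStride,
                   IDirect3DVertexBuffer9* instanceVB, UINT instanceStride,
                   IDirect3DIndexBuffer9* ib,
                   UINT vertexCount, UINT triCount, UINT instanceCount)
{
    // Stream 0: the mesh geometry, drawn once per instance.
    dev->SetStreamSource(0, meshVB, 0, meshStride);
    dev->SetStreamSourceFreq(0, D3DSTREAMSOURCE_INDEXEDDATA | instanceCount);

    // Stream 1: per-instance data (e.g. a world matrix), advanced once per instance.
    dev->SetStreamSource(1, instanceVB, 0, instanceStride);
    dev->SetStreamSourceFreq(1, D3DSTREAMSOURCE_INSTANCEDATA | 1u);

    dev->SetIndices(ib);
    dev->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, vertexCount, 0, triCount);

    // Restore default stream frequencies so later draws are unaffected.
    dev->SetStreamSourceFreq(0, 1);
    dev->SetStreamSourceFreq(1, 1);
}
```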

And if this evolution didn't bring anything "meaningful", which one really did since the first programmable graphics pipeline?

 

By the way, I do think computing everything on the CPU would be practical (though not efficient); that's actually somewhat what this thread is about, the future of APIs: give more freedom and let us program the hardware directly... getting closer to how CPU programming works.


 


The truth is that everything else is just gravy.  You get cleaner, more efficient, higher performing ways of doing things, but contrary to what marketing departments would like you to think, there's absolutely nothing you can do in D3D11 that you can't also do in D3D9 (or even OpenGL ARB assembly shaders).  You will be doing it differently, for sure, and it may not be viable for real-time with the older APIs, but the capability remains there.

That's a ridiculous statement. Of course you can "do everything with older APIs, just not in real time". In the same way, you could compute everything on the CPU, write the output into a texture, and then render it, or you could "compute" everything with pen and paper and then push buttons to display the output; if you don't make any mistakes you'll also get the same output.

That's the point of new APIs: to make things easier and faster, and faster means new algorithms become possible for real-time simulation.

You think you are thinking outside the box while you are reinventing the wheel.

 

 

I don't believe it's a ridiculous statement but I do understand where you're coming from.

 

The statement was made with reference to claims often made that D3D11 can somehow offer better image quality, higher-quality effects, etc.  Those claims are the baloney; as soon as you have the ability to do arbitrary computations on the GPU, and sufficient precision for storage of temporaries as required, then you can effectively do anything.  Yes, new APIs offer better ways of doing it via new capabilities (actually they don't; new hardware offers the better way, the API just exposes that way to the programmer); yes, you can likewise do the very same in software, but saying that just reinforces the point, and I believe it's a valid point.  API evolution is no longer a major determining factor in image quality; the capability of an API (and the underlying hardware) to achieve that level of image quality at reasonable framerates is where the majority of evolution has been focused recently.

 

I think we're actually saying the same thing but coming from different directions here.


 


What do you think the future has in store for us?

OpenGL ES.

 

The install base (smart phones, tablets, embedded devices and WebGL) already dwarfs the install base of either DirectX or desktop GL. Plus, through judicious use of libraries, it runs anywhere DirectX does.

 

Does it offer all the latest and greatest features? Not yet, but it's catching up. And how many games actually require DX11, anyway?

 

Smartphones (and smartwatches are coming) seem too tiny, display-wise, for me to really see half of what's being displayed (compared to how we use a PC). Realistic lighting is great, if only I could actually see it (without plastering the screen to my eye)...

 

Tablets/WebGL: probably two more hardware generations away (power usage issues), but at least the display is visible and physically big enough.

 

Display smart glasses? By next decade they might make all the rest obsolete.


