Glass_Knife

OpenGL Does Microsoft purposely slow down OpenGL?


Glass_Knife    8636
While reading the "Game Audio Tutorial" book (http://amzn.com/0240817265), I came across a claim that OpenGL is faster than DirectX, but that Windows and DirectX purposely slow OpenGL down so that DirectX appears to be faster. I have never heard this before, and I was just curious if there are any resources that talk about this?

Currently I'm using some OpenGL code on Windows, and I've been wondering if DirectX would be better. This may change my mind. Plus, with Valve releasing Steam for Linux on OpenGL, I figured it could be true.

Thanks,

MarkS    3502

What samoth said. Besides, OpenGL is still widely used for scientific visualization and CAD/CAM work. If Microsoft gimped OpenGL, we'd be hearing about it from more sources than a book. Not to mention that someone would have figured out a workaround by now.

Matias Goldberg    9576

Yup, that article is too biased to be taken seriously.

Haha, looks like phantom is still mad at the board, but heck, I still am too.

It looks like GL 4.3 & ES 3.0 are finally heading in the right direction and MS went off the rails with the (again!) decision to make DX 11.1 Windows 8 only. Furthermore, there are finally good GL tools we can use (gDebugger is now free, PerfStudio is great, so is NSight).

 

This answer (http://programmers.stackexchange.com/a/88055) shows the history of OpenGL vs DirectX throughout the years and nicely explains why it was purely GL's fault that it got into the state it is in now (barely used in games).


Servant of the Lord

Here's your article, phantom; thanks for the counterpoints.

 

I merely posted the link to the blog post, but I did fail to qualify it. The OP was talking about a book where the author was making some claims about DirectX vs OpenGL and that OpenGL was deliberately slowed down. The author's incorrect information about Microsoft intentionally slowing down OpenGL is probably a result of the Vista FUD campaign and the media attention it received - the blog post I linked to explains that FUD campaign. I wasn't trying to say OpenGL is better than DirectX, nor do I think it is.

 

I shouldn't have used the blog post's title as the text of the hyperlink, as it makes it seem like I agree with the entirety of that post and entirely support the author's viewpoint (which I actually mostly do, but by coming to my own conclusions, not just borrowing his).

 

Thanks for clarifying (in your article) that the FUD campaign wasn't actually a FUD campaign and that Carmack's quotes were taken out of context - both of those points are news to me!

 

I stand somewhere in the middle ground myself, though I lean more towards OpenGL. Currently I only work with 2D graphics, but when I make the move to 3D, I intend to use OpenGL for two reasons:

  1. Because I am targeting multiple platforms: Mac, Windows, and Linux.
  2. Because some competition is good for everyone in the long haul.

...this is in spite of DirectX seeming to be (from an inexperienced outsider looking in) better designed, and not because I think all open-source software is superior in quality to proprietary software (and just because the standard is open, that doesn't mean the implementations are).

I'm prepared to bear the pain and annoyance of OpenGL inconsistencies across video cards, not because it is better overall, but because it is better for my goals.

Edited by Servant of the Lord

Glass_Knife    8636
Yes, it seems this topic took off. I wasn't trying to start a DirectX vs OpenGL cage match, but I don't mind all the articles. There are lots of different points of view.

I couldn't find anything about OpenGL being slowed down. The author was probably biased, and reacting to the FUD campaign discussed previously.

I've used (and cursed) both APIs, and, like any tool, they've got their uses.

_the_phantom_    11250
It looks like GL 4.3 & ES 3.0 are finally heading in the right direction and MS went off the rails with the (again!) decision to make DX 11.1 Windows 8 only.
OpenGL|ES is certainly much saner than OpenGL; in the mobile world it's a good thing indeed.

OpenGL I still consider 'broken' while the bind-to-edit model still exists - it's just too easy to introduce bugs and unexpected behaviour (take VAOs: bind a VAO, then bind another buffer and BAM! your original VAO is now changed unexpectedly). Don't get me wrong, OpenGL is improving and needs to, because without a strong API to counter it D3D will continue to slow down and coast a bit, but bind-to-edit is just so weak when compared to immutable objects and the explicit edit model of D3D.
(Which I consider annoying, as there are at least two features of GL (multi-draw-indirect and AMD's PRT extension) which I'd like to play with, but every time I think about using GL it makes me sad :( )
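As a minimal sketch of the bind-to-edit pitfall described above (hypothetical handles vao, ibo and scratchIbo, assuming a GL 3.x context):

glBindVertexArray(vao);                            // intend only to "use" the VAO
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, scratchIbo); // meant as a temporary bind...
// ...but the element array binding is part of VAO state, so 'vao' now
// references scratchIbo instead of its original index buffer 'ibo'.
glBindVertexArray(0);
// Contrast with D3D11, where pipeline state is set explicitly through context
// methods (e.g. context->IASetIndexBuffer(ibo, DXGI_FORMAT_R16_UINT, 0)) and
// resources themselves are immutable once created, rather than being edited
// through whatever happens to be bound.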

As for DX11.1 - some of it is coming back to Win7 as they need it for IE10 support; I can't recall which bits off the top of my head, however, nor can I recall whether the interesting bits are included. Edited by phantom

SimonForsman    7642
Quoting Matias Goldberg: "... MS went off the rails with the (again!) decision to make DX 11.1 Windows 8 only."

Wait, what? So will DX11 be the last DX version for Win7? That sounds pretty insane. I could understand not backporting DX10 to XP, since XP was an ancient and outdated OS at the time, but Win7 is still fairly new. OpenGL doesn't suck as badly today (modern OpenGL is quite pleasant to work with) as it did when they pulled the plug on XP, and Apple has gained ground, so it seems to me that this could be a fairly risky move.

Although it could just be that this .1 release mostly adds tablet/touch-related features and that we'll get a D3D12 version for Win7 anyway. Edited by SimonForsman

RobTheBloke    2553

Why you should use OpenGL and not DirectX - Interesting blog post on the subject.

 

"Intresting" and mostly biased, rubbish and wrong.

 

I made a couple of blog posts on here taking the article apart - basically the guy doesn't like DX, has this rose-tinted view of OpenGL, and feels there is a vast conspiracy to Keep OpenGL Down... which is rubbish.

 

Even the 'zomg! faster draw calls!' point he made is a non-event; on DX9 with 'small' draw calls it was a problem, but DX10 and DX11 have since removed it, and 'small' draw calls are so far from the norm it isn't worth caring about.

 

(And as someone who was using OpenGL from ~99 until 2008 I have a certain perspective; heck, some of the older members might recall me defending aspects of 'GL before the Longs Peak screw-up, which is when I said 'bye' to using GL and went to the saner DX10 and now DX11 land...)

 

The biggest plus point for OpenGL is that it is currently the only way to access the latest GPU features on Windows XP / Vista, and if you are *extremely careful* you can get the same code running on Linux / Mac OS. Semantics aside, there really isn't that much of a difference between D3D & GL4 imho. I'll give you the point about bind-to-edit, although thin wrappers (dressed up to look like D3D) seem to be the approach most people take these days. At least they've finally divorced the texture data from the sampler parameters! :)
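A minimal sketch of that last point, for anyone who hasn't seen it (GL 3.3+ sampler objects; someTexture is an assumed, already-created texture): filtering and wrap state can now live in a separate sampler object instead of inside the texture, much like D3D10/11's separate sampler state objects.

GLuint sampler;
glGenSamplers(1, &sampler);
glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glSamplerParameteri(sampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glSamplerParameteri(sampler, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, someTexture); // the texture supplies only the image data
glBindSampler(0, sampler);                 // the sampler overrides the texture's own parameters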


Servant of the Lord

It may be faster in the exact specific way that Valve's (aging) engine architecture could benefit from, or on those specific video cards they were testing, but that doesn't mean OpenGL is faster than DirectX in general. They even hinted that part of the speed gain was from Linux vs Windows and not OpenGL vs DirectX specifically, and the difference between 3.30 and 3.17 milliseconds per frame is only 0.13 milliseconds.

 

Given that they are both different ways of accessing the same video card, and neither does a lot of heavy processing itself, they should both be fairly close in speed.

Matias Goldberg    9576

Ah yes, bind-to-edit is a big flaw in OGL.

But not only that, what about the "try to see if it's supported" or "try it, check glGetError(); if it works, it's supported" philosophy? I talk about that in the comments on Timothy Lottes' blog; it's really annoying and error-prone.

 

It's a very old-fashioned approach that comes from the time when OGL was a library aimed at "guaranteed rendering on every machine", so there was no such thing as an unsupported feature, because the library would fall back to SW rendering.

But there was never a way to query which features would cause SW emulation to kick in, and today's usage is that OpenGL is just an API wrapping the GPU hardware (except for pure emulation implementations, like Mesa).
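A minimal sketch of the sort of probing being criticised here (hypothetical; real code would probe whatever feature it actually needs, and this assumes a current GL context):

while (glGetError() != GL_NO_ERROR) {}       // clear any stale errors first

GLuint probeTex;
glGenTextures(1, &probeTex);
glBindTexture(GL_TEXTURE_2D, probeTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F,   // try a float texture format
             256, 256, 0, GL_RGBA, GL_FLOAT, NULL);

bool looksSupported = (glGetError() == GL_NO_ERROR);
// Even GL_NO_ERROR doesn't guarantee a hardware path: the driver is allowed
// to fall back to software, and there is no query to tell you that it has.

glDeleteTextures(1, &probeTex);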

L. Spiro    25622
I have a contract with Addison-Wesley Professional regarding an OpenGL ES 2.0 book and should stand on equal ground with the original author of the book that misled you.
Here is my take on the whole situation.


Firstly, OpenGL wasn’t designed for real-time graphics. It was originally intended primarily for CAD and graphing, which is why its coordinate system puts [0,0] in the lower-left instead of in the upper-left as with every other rendering system on the planet.
But Microsoft® was also not very good at creating usable rendering pipelines back in the day.
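A minimal illustration of that lower-left origin in practice (hypothetical helper; assumes integer window coordinates): anything coming from a top-left-origin convention, such as Windows mouse coordinates or D3D viewport space, generally needs a Y flip before it matches GL.

int windowToGLY(int windowY, int viewportHeight)
{
    // Windows reports y growing downward from the top edge;
    // GL's window space grows upward from the bottom edge.
    return viewportHeight - 1 - windowY;
}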

Did Microsoft® ever try to slow down OpenGL? No. It wouldn’t even be possible since that is up to the vendor.
Did Microsoft® unfairly push DirectX? Yes. They never natively shipped any version of OpenGL other than 1.1, and they intentionally kept support for OpenGL to a minimum in order to gain support for DirectX.


Initially neither API was very good. OpenGL was designed for the wrong thing and DirectX was just practice.
At one time it was actually debatable as to whether or not Microsoft was slowing down OpenGL for its own good.

But Khronos kept pushing its state-driven design and Microsoft was forced to keep advancing DirectX.
But the end result is that OpenGL’s model ensures it will always be second-best to DirectX. It’s a state machine, and while both APIs have flaws, OpenGL is a framework built on top of technology built for other purposes. The Khronos group saw the potential for overlap and took off with it, eventually creating OpenGL ES, which is basically a game-oriented version of an API that was meant for graphing. They took the best parts of OpenGL and put them into a game-oriented API package, but that alone is not enough.

Microsoft® stopped trying to figure out what we developers need and finally decided to play ball starting with DirectX 10. It was then no longer a matter of, “Our API supports this and this and that,” but a matter of, “We allow you to have access to everything, so you can do whatever the fuck you want”.

Due to Microsoft® not knowing a thing about graphics and OpenGL being originally intended for graphing, both APIs sucked at the start.
As they grew in parallel, there was never a point when either was intentionally slowed, but definitely a point when one was less-supported.

But today there is no question that DirectX 11 is the clear winner. This is why even Sony® (competitor of Microsoft®) uses this API for PlayStation 4 (with just a few modifications).
When DirectX 9 became stagnant OpenGL’s design allowed it to keep advancing, and it started to become a major competitor with DirectX.
But DirectX 10 and DirectX 11 gave more access to the underlying hardware, and this allowed it to take off, and OpenGL was left playing catch-up. OpenGL 4.0 is basically Khronos’s version of DirectX 11. If you look carefully you will notice that for a while it was Microsoft® who was adding features to DirectX based on OpenGL features, but later (and to this day) it was the opposite.


There has been some heightened competition between the two APIs, but at no point was either intentionally slowed.
OpenGL builds off a design that was initially flawed, and from DirectX 10 onward it will always be the slower API. Its very design dictates that.


L. Spiro

mhagain    13430

There is a whole load of nonsense and barely-informed FUD (in both the computing acronym and Glasgow vernacular senses of the word) on both sides of this particular argument.  The truth is that Microsoft wanted OpenGL; they wanted it to be good, they wanted it to run fast, they wanted it to run well - because they wanted to break into the CAD workstation market (games development ain't everything).  The problem was that they also wanted D3D, but that wasn't because of any evil conspiracy; it was because MS are (or were at the time) a fairly fragmented company where the left hand doesn't (or didn't at the time) even know what the right hand was doing.

 

That Wolfire blog article does more harm than good to the OpenGL "cause" because it's quite obviously ill-informed and biased, not to mention blatantly inaccurate and shamelessly untrue in many cases (full OpenGL on PS3 and mobile platforms?  Yeah right...)

 

D3D didn't succeed because of any of the paranoid crap that is so frequently put forward; D3D succeeded because it became good enough (ironically, in the very best "Unix tradition" of "worse is better") and offered a single, consistent and hardware-independent way of doing things at the same time as OpenGL was going off to loo-lah land with GL_ARB_do_it_this_way, GL_ARB_do_it_that_way and GL_ARB_do_it_t'other_way for every piece of essential functionality.

Hodgman    51234

On topic - since D3D became able to compete, MS simply didn't actively support GL, which is quite a stretch from "slowed it down".

Off topic -

Quoting mhagain: "(full OpenGL on PS3 and mobile platforms?  Yeah right...)"
Yeah, the portability argument for GL always irks me.

 

Desktops have GL1.x, 2.x, 3.x, 4.x.

Mobiles have GLES1.x, GLES2.x

Playstation has PSGL (which is just an emulation layer over GCM, giving a similar API to GLES)

 

Each of these is a different API, and code written for one still needs to be ported to be used on another. Further, every GPU driver on Windows, and every version of an Apple OS, contains its own implementation of these APIs with slightly different behaviour, greatly complicating your QA procedures.

 

From a professional graphics programmer's viewpoint, if I were porting a game from Mac to Windows, there'd be a lot of merit in using GL on the Mac and D3D on Windows, just so the implementation of the API is consistent and not driver-dependent...

SimonForsman    7642
Quoting Servant of the Lord: "It may be faster in the exact specific way that Valve's (aging) engine architecture could benefit from ... they should both be fairly close in speed."
The big part of the DX vs OpenGL difference on Windows for Valve most likely boils down to it being D3D9, which has a higher draw call overhead than OpenGL and D3D10+, and that can easily add up to a hundred or so microseconds per frame (this seems to be Valve's conclusion as well).
It is pretty much irrelevant now since D3D9 is on its last legs anyway.

(in reply to some other post, not quoting since the forum screws up my posts anyway and it's a pain to fix every time)
I don't quite see where the info that Microsoft purposely slowed down OpenGL for Vista came from; I was under the impression that the OpenGL->D3D wrapper they added in Vista was only supposed to replace the insanely slow OpenGL software renderer they had in older Windows versions. (So if anything, they made OpenGL without proper drivers faster.) Edited by SimonForsman

But today there is no question that DirectX 11 is the clear winner. This is why even Sony® (competitor of Microsoft®) uses this API for PlayStation 4 (with just a few modifications).

 

Sony is using DirectX 11 for the PS4? Is that rumour or unreleased insider knowledge? If the latter, don't risk breaking any NDAs.

 

The only reports I can find on the matter are this rumor (from eight months ago), which was later corrected by another rumor to say the PS4 will be running OpenGL natively.

_the_phantom_    11250
(in reply to some other post, not quoting since the forum screws up my posts anyway and it's a pain to fix every time)
I don't quite see where the info that Microsoft purposely slowed down OpenGL for Vista came from; I was under the impression that the OpenGL->D3D wrapper they added in Vista was only supposed to replace the insanely slow OpenGL software renderer they had in older Windows versions. (So if anything, they made OpenGL without proper drivers faster.)

 

There were (and I'm working from memory here, I admit) a couple of aspects to it, one of which was real, regarding how OpenGL frame buffers would compose with the D3D-driven desktop and windows; however, that one did get sorted out once MS gave a little on it with some pressure from the IHVs.

 

The other is, as you say, regarding the apparent OpenGL->D3D layering, which many took to mean (without bothering to look into it, just looking at a slide) that OpenGL would sit on D3D; what it REALLY meant was that MS was going/planning to provide an OGL1.4 implementation based on D3D (I'm not sure they ever did in the end, at that).

(At the time this was going down I was using OpenGL; I heard the above, did a 'ffs...', and then, once I looked at the details, realised the panic was rubbish in this regard...)

 

With regards to MS 'slowing down' OpenGL: many, many years ago they were on the ARB (pre-2003 I think?) so they had the opportunity to do so with regards to the spec, but they didn't have to. Back then the ARB was an infighting mess, a running conflict between the interests of ATi, NVidia, Intel, SGI & 3DLabs, so getting anything done was a nightmare, which is why nothing got done; GL2.0 was the first casualty in that war and Longs Peak was the most recent, even after they all started to get along...

_the_phantom_    11250
But today there is no question that DirectX 11 is the clear winner. This is why even Sony® (competitor of Microsoft®) uses this API for PlayStation 4 (with just a few modifications).

 

Sony is using DirectX 11 for the PS4? Is that rumour or unreleased insider knowledge? If the latter, don't risk breaking any NDAs.

 

The only reports I can find on the matter are this rumor (from eight months ago), which was later corrected by another rumor to say the PS4 will be running OpenGL natively.

 

It isn't and it isn't.

 

I think I can say that without the NDA Ninjas breaking my door down anyway...

mhagain    13430

Back on topic, and regarding performance, one of the often overlooked differences between the two APIs is that OpenGL allows semi-arbitrary software fallbacks whereas D3D does not.  This is a fairly important distinction - with OpenGL a glDrawElements call (for example) is not allowed to fail, it's not specified to fail, and if the parameters supplied for the call (or for any of the setup required to make the call) exceed hardware capabilities (but while still being within the capabilities exposed by the driver) then it must be emulated in software.  Compare to D3D where you get what's available on the hardware and nothing else; that means that you may have a lot more work to do in order to ensure that you fit within those capabilities, but once you do that you know that your call will work and will not fall back.
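A minimal sketch of the two philosophies (hypothetical; device is assumed to be an existing ID3D11Device*): GL lets you query limits but guarantees that anything within what the driver exposes will work, even if that means software emulation, while D3D makes you ask up front and then gives you exactly what the hardware can do.

GLint maxTexSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTexSize);
// A glDrawElements call within the exposed limits is not allowed to fail;
// if the hardware can't handle it, the driver must emulate it in software.

UINT support = 0;
device->CheckFormatSupport(DXGI_FORMAT_R32G32B32A32_FLOAT, &support);
if (support & D3D11_FORMAT_SUPPORT_RENDER_TARGET)
{
    // Safe to create a float render target; it will run on the hardware
    // or not be offered at all - there is no silent fallback.
}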

 

There are perfectly valid arguments to be made for and against both design philosophies and I'm not going to make judgement for or against either here.

 

Regarding the original question - on another viewing it doesn't actually make any sense because it seems to come from the assumption that OpenGL is some form of pure software library.  Ummm, no, it's not.  OpenGL is a software interface to graphics hardware (page 1, line 1 of any OpenGL spec) and it's still the graphics hardware that is the ultimate arbiter of performance.  It's also the case that an OpenGL ICD is a full replacement for Microsoft's software implementation, so those OpenGL calls you make - they're being made to the graphics hardware vendor's implementation, not to anything provided by Microsoft.  If there are performance issues then take it up with your GL_VENDOR in the first instance.
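A minimal sketch of checking whose implementation you are actually talking to, per the GL_VENDOR remark above (assumes a current GL context and <cstdio>):

const char* vendor   = (const char*)glGetString(GL_VENDOR);
const char* renderer = (const char*)glGetString(GL_RENDERER);
const char* version  = (const char*)glGetString(GL_VERSION);
// "Microsoft Corporation" / "GDI Generic" means you're on MS's old 1.1
// software path (no ICD installed); otherwise these are the IHV's strings.
printf("%s | %s | GL %s\n", vendor, renderer, version);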

Hodgman    51234
Sony is using DirectX 11 for the PS4? Is that rumour or unreleased insider knowledge? If the latter, don't risk breaking any NDAs.
 
The only reports I can find on the matter are this rumor (from eight months ago), which was later corrected by another rumor to say the PS4 will be running OpenGL natively.
Generally speaking, console graphics APIs never exactly match desktop graphics APIs.
Because each console is built using a specific GPU from a specific vendor, the actual API used to control the GPU is usually written by that vendor (or at least in cooperation with them). This means the API may be based on a desktop API, and might end up being very similar to one, but it's going to be a lot simpler and allow much lower-level control, due to it only targeting a single hardware spec.
Often, functions that are implemented inside the driver on the PC, like the VRAM allocator, can (or must) be implemented in the game engine.

Another (hypothetical) way to look at it --
MS builds the D3D11 API, and nVidia then has to write drivers that implement this API for each of their specific GPUs.
nVidia then sells a particular GPU to a console maker, who also needs a graphics API. nVidia ports their driver code to that console, with that code becoming the console's graphics API, which is going to end up looking quite similar to either D3D11 or GL4.

Quoting Servant of the Lord: "It may be faster in the exact specific way that Valve's (aging) engine architecture could benefit from ... they should both be fairly close in speed."

 

Somebody in another forum tried to explain it; according to him, OpenGL is somewhat looser than Direct3D when it comes to resource management, which could allow the driver to make some more optimizations. I've never used Direct3D so I can't tell. Then again, it was Nvidia hardware Valve was using for testing, and Nvidia loves OpenGL. Make what you want out of it =P

 

The sad thing, though, is that apparently Valve was just using a Direct3D wrapper (à la Wine). Ouch if this is the case; that'd mean emulated Direct3D is faster than Direct3D itself...

 

Quoting mhagain: "The truth is that Microsoft wanted OpenGL ... The problem was that they also wanted D3D, but that wasn't because of any evil conspiracy; it was because MS are (or were at the time) a fairly fragmented company where the left hand doesn't (or didn't at the time) even know what the right hand was doing."

 

Windows NT team vs. Windows 95 team? Because I had heard it had to do with something like that (with the Windows NT team not wanting to give back their OpenGL code to the Windows 95 team). I never found any reliable sources though so I'd rather call it FUD for now.

