
Does Microsoft purposely slow down OpenGL?


While reading the "Game Audio Tutorial" book, http://amzn.com/0240817265, it mentioned that OpenGL is faster than DirectX, but that Windows and DirectX purposely slow OpenGL down so that DirectX appears to be faster. I have never heard this before, and I was just curious whether there are any resources that talk about this?

I'm currently using some OpenGL code on Windows, and I've been wondering if DirectX would be better. This may change my mind. Plus, with Valve releasing Steam for Linux on OpenGL, I figured it could be true.

Thanks,


What samoth said. Besides, OpenGL is still widely used for scientific visualization and CAD/CAM work. If Microsoft gimped OpenGL, we'd be hearing about it from more sources than one book. Not to mention that someone would have figured out a workaround by now.

Why you should use OpenGL and not DirectX - Interesting blog post on the subject.

Interesting read. Thanks for that!


Yup, that article is too biased to be taken seriously.

haha, looks like phantom is still mad at the board, but heck, I still am too.

It looks like GL 4.3 & ES 3.0 are finally heading in the right direction, and MS went off the rails with its decision (again!) to make DX 11.1 Windows 8 only. Furthermore, there are finally good GL tools we can use (gDebugger is now free, PerfStudio is great, and so is NSight).

 

This answer (http://programmers.stackexchange.com/a/88055) shows the history of OpenGL vs DirectX throughout the years and nicely explains why it was purely GL's own fault that it ended up in the state it is in now (barely used in games).


Here's your article, phantom, thanks for the counterpoints.

 

I merely posted the link to the blog post, but I did fail to qualify it. The OP was talking about a book whose author claimed that OpenGL was deliberately slowed down. That incorrect information about Microsoft intentionally slowing down OpenGL is probably a result of the Vista FUD campaign and the media attention it received, which the blog post I linked to explains. I wasn't trying to say OpenGL is better than DirectX, nor do I think it is.

 

I shouldn't have used the blog post's title as the text of the hyperlink, as it makes it seem like I agree with the entirety of that post and fully support the author's viewpoint (which I actually mostly do, but by coming to my own conclusions, not just borrowing his).

 

Thanks for clarifying (in your article) that the FUD campaign wasn't actually a FUD campaign and that Carmack's quotes were taken out of context - both of those points are news to me!

 

I stand somewhere in the middle ground myself, though I lean more towards OpenGL. Currently I only work with 2D graphics, but when I make the move to 3D, I intend to use OpenGL for two reasons:

  1. Because I am targeting multiple platforms: Mac, Windows, and Linux.
  2. Because some competition is good for everyone in the long haul.

...this is in spite of DirectX seeming (to an inexperienced outsider looking in) to be better designed, and not because I think all open-source software is superior in quality to proprietary software (and just because the standard is open, that doesn't mean the implementations are).

I'm prepared to bear the pain and annoyance of OpenGL's inconsistencies across video cards, not because it is better overall, but because it is better for my goals.

Edited by Servant of the Lord

Yes, it seems this topic took off. I wasn't trying to start a DirectX vs OpenGL cage match, but I don't mind all the articles. There are lots of different points of view.

I couldn't find anything about OpenGL being slowed down. The author was probably biased, and reacting to the FUD campaign discussed previously.

I've used (and cursed) both APIs, and, like any tool, they've got their uses.

It looks like GL 4.3 & ES 3.0 are finally heading in the right direction, and MS went off the rails with its decision (again!) to make DX 11.1 Windows 8 only.
OpenGL|ES is certainly much saner than OpenGL; in the mobile world it's a good thing indeed.

I still consider OpenGL 'broken' while the bind-to-edit model still exists - it's just too easy to introduce bugs and unexpected behaviour (take VAOs: bind a VAO, then bind another buffer and BAM! your original VAO has changed unexpectedly). Don't get me wrong, OpenGL is improving, and it needs to, because without a strong API to counter it D3D will continue to slow down and coast a bit; but bind-to-edit is just so weak compared to the immutable objects and explicit edit model of D3D.
(Which I consider annoying, as there are at least two features of GL (multi-draw-indirect and AMD's PRT extension) that I'd like to play with, but every time I think about using GL it makes me sad :( )
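Here's a quick sketch of the bind-to-edit pitfall I mean (assuming a GL 3.x core context; 'vao' and 'scratchEbo' were created earlier, and the names are illustrative):

glBindVertexArray(vao);
// Intended as an unrelated, one-off buffer bind...
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, scratchEbo);
// ...but the element array binding is part of VAO state, so 'vao' now
// silently references 'scratchEbo' as its index buffer.
glBindVertexArray(0); // unbinding does not undo the accidental edit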

As for DX11.1 - some of it is coming back to Win7, as they need it for IE10 support; I can't recall which bits off the top of my head, however, nor whether the interesting bits are among them. Edited by phantom

[quote name='Matias Goldberg']It looks like GL 4.3 & ES 3.0 are finally heading in the right direction, and MS went off the rails with its decision (again!) to make DX 11.1 Windows 8 only.[/quote]
Wait, what? So will DX11 be the last DX version for Win7? That sounds pretty insane. I could understand not backporting DX10 to XP, since XP was an ancient and outdated OS at the time, but Win7 is still fairly new, OpenGL doesn't suck as badly today as it did when they pulled the plug on XP (modern OpenGL is quite pleasant to work with), and Apple has gained ground. It seems to me that this could be a fairly risky move.

Although it could just be that this .1 release mostly adds tablet/touch-related features and that we'll get a D3D12 version for Win7 anyway. Edited by SimonForsman


Why you should use OpenGL and not DirectX - Interesting blog post on the subject.

 

"Intresting" and mostly biased, rubbish and wrong.

 

I made a couple of blog posts on here taking the article apart - basically the guy doesn't like DX, has a rose-tinted view of OpenGL, and feels there is a vast conspiracy to Keep OpenGL Down... which is rubbish.

 

Even the 'zomg! faster draw calls!' point he made is a non-event; on DX9 with 'small' draw calls it was a problem, but DX10 and DX11 have since removed it, and 'small' draw calls are so far from the norm that it isn't worth caring about.

 

(And as someone who used OpenGL from ~'99 until 2008 I have a certain perspective; heck, some of the older members might recall me defending aspects of GL before the Longs Peak screw-up, which is when I said 'bye' to GL and went to the saner DX10 and now DX11 land...)

 

The biggest plus point for OpenGL is that it is currently the only way to access the latest GPU features on Windows XP / Vista, and if you are *extremely careful* you can get the same code running on Linux / Mac OS. Semantics aside, there really isn't that much of a difference between D3D & GL4 imho. I'll give you the point about bind-to-edit, although thin wrappers (dressed up to look like D3D) seem to be the approach most people take these days. At least they've finally divorced the texture data from the sampler parameters! :)
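For the curious, that divorce is GL 3.3's sampler objects; a minimal sketch (the sampler state set here overrides whatever the bound texture carries):

GLuint sampler;
glGenSamplers(1, &sampler);
glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glSamplerParameteri(sampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// Bind to texture unit 0: sampling state now comes from 'sampler',
// independent of which texture happens to be bound there.
glBindSampler(0, sampler);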


It may be faster in the exact, specific way that Valve's (aging) engine architecture could benefit from, or on the specific video cards they were testing with, but that doesn't mean OpenGL is faster than DirectX in general. Part of the speed gain, they even hinted, was from Linux vs Windows and not OpenGL vs DirectX specifically, and the difference between 3.30 and 3.17 milliseconds per frame is only 0.13 milliseconds.

 

Since they are both just different ways of accessing the same video card, and neither does a lot of heavy processing itself, they should both be fairly close in speed.


Ah yes, bind-to-edit is a big flaw in OGL.

But not only that - what about the "try it and see if it's supported" or "try it, check glGetError(); if it works, it's supported" philosophy? I talked about that in the comments on Timothy Lottes' blog; it's really annoying and prone to errors.
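A minimal sketch of that pattern (the target, format, and size here are just examples; assumes a 2D texture is bound):

glGetError(); // clear any stale error first
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 4096, 4096, 0, GL_RGBA, GL_FLOAT, NULL);
if (glGetError() != GL_NO_ERROR) {
    // No capability bit told us this would fail; we only find out by trying,
    // and then we have to fall back to some other format or size.
}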

 

It's a very old convention that comes from the time when OGL was a library aimed at "guaranteed rendering on every machine", so there was no such thing as an unsupported feature, because the library would fall back to SW rendering.

But there was never a way to query which features would cause SW emulation to kick in, and today the expectation is that OpenGL is just an API wrapping the GPU hardware (except for pure emulation implementations, like Mesa).

I have a contract with Addison-Wesley Professional for an OpenGL ES 2.0 book, so I should stand on roughly equal ground with the author of the book that misled you.
Here is my take on the whole situation.


Firstly, OpenGL wasn’t designed for real-time graphics. It was originally intended primarily for CAD and graphing, which is why its coordinate system puts [0,0] in the lower-left instead of in the upper-left as with nearly every other rendering system on the planet.
But Microsoft® was also not very good at creating usable rendering pipelines back in the day.
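As an illustration of that origin difference (the variable names here are hypothetical), converting a top-left-origin row to GL's convention:

int glY = windowHeight - 1 - topLeftY; // flip from top-left to GL's bottom-left origin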

Did Microsoft® ever try to slow down OpenGL? No. It wouldn’t even be possible, since performance is up to the vendor’s driver.
Did Microsoft® unfairly push DirectX? Yes. They never natively shipped any version of OpenGL other than 1.1, and they intentionally kept OpenGL support to a minimum in order to build support for DirectX.


Initially, neither API was very good. OpenGL was designed for the wrong thing, and DirectX was just practice.
At one time it was actually debatable whether or not Microsoft was holding OpenGL back for its own benefit.

But Khronos kept pushing its state-driven design, and Microsoft was forced to keep advancing DirectX.
The end result is that OpenGL’s model ensures it will always be second-best to DirectX. It’s a state machine, and while both APIs have flaws, OpenGL is a framework built on top of technology intended for other purposes. The Khronos Group saw the potential for overlap and ran with it, eventually creating OpenGL ES, which is basically a game-oriented version of an API that was meant for graphing. They took the best parts of OpenGL and put them into a game-oriented package, but that alone is not enough.

Microsoft® stopped trying to figure out what we developers need and finally decided to play ball starting with DirectX 10. It was then no longer a matter of, “Our API supports this and this and that,” but a matter of, “We allow you to have access to everything, so you can do whatever the fuck you want”.

Because Microsoft® didn’t know much about graphics and OpenGL was originally intended for graphing, both APIs sucked at the start.
As they grew in parallel, there was never a point when either was intentionally slowed, but there was definitely a point when one was less supported.

But today there is no question that DirectX 11 is the clear winner. This is why even Sony® (competitor of Microsoft®) uses this API for PlayStation 4 (with just a few modifications).
When DirectX 9 became stagnant, OpenGL’s design allowed it to keep advancing, and it started to become a major competitor to DirectX.
But DirectX 10 and DirectX 11 gave more access to the underlying hardware, which allowed DirectX to take off, and OpenGL was left playing catch-up. OpenGL 4.0 is basically Khronos’s version of DirectX 11. If you look carefully, you will notice that for a while it was Microsoft® adding features to DirectX based on OpenGL features, but later (and to this day) it was the opposite.


There has been some heightened competition between the two APIs, but at no point was either intentionally slowed.
OpenGL builds off a design that was initially flawed, and from DirectX 10 onward it will always be the slower API. Its very design dictates that.


L. Spiro


There is a whole load of nonsense and barely-informed FUD (in both the computing-acronym and Glasgow-vernacular senses of the word) on both sides of this particular argument. The truth is that Microsoft wanted OpenGL; they wanted it to be good, they wanted it to run fast, they wanted it to run well - because they wanted to break into the CAD workstation market (games development ain't everything). The problem was that they also wanted D3D, but that wasn't because of any evil conspiracy; it was because MS were (at the time, at least) a fairly fragmented company where the left hand didn't even know what the right hand was doing.

 

That Wolfire blog article does more harm than good to the OpenGL "cause", because it's quite obviously ill-informed and biased, not to mention blatantly inaccurate and shamelessly untrue in many cases (full OpenGL on PS3 and mobile platforms? Yeah, right...)

 

D3D didn't succeed because of any of the paranoid crap that is so frequently put forward; D3D succeeded because it became good enough (ironically, in the very best "Unix tradition" of "worse is better") and offered a single, consistent, hardware-independent way of doing things, at the same time as OpenGL was going off to loo-lah land with GL_ARB_do_it_this_way, GL_ARB_do_it_that_way and GL_ARB_do_it_t'other_way for every piece of essential functionality.


On topic - since D3D became able to compete, MS simply stopped actively supporting GL, which is quite a stretch from "slowed it down".

Off topic-

[quote name='mhagain' timestamp='1357948483' post='5020539']
(full OpenGL on PS3 and mobile platforms?  Yeah right...)
[/quote]
Yeah, the portability argument for GL always irks me.

 

Desktops have GL1.x, 2.x, 3.x, 4.x.

Mobiles have GLES1.x, GLES2.x

Playstation has PSGL (which is just an emulation layer over GCM, giving a similar API to GLES)

 

Each of these is a different API, and code written for one still needs to be ported to be used on another. Further, every GPU driver on Windows, and every version of an Apple OS, contains its own implementation of these APIs with slightly different behaviour, greatly complicating your QA procedures.
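The first thing QA ends up recording is exactly which implementation you're talking to, e.g.:

const char* vendor   = (const char*)glGetString(GL_VENDOR);
const char* renderer = (const char*)glGetString(GL_RENDERER);
const char* version  = (const char*)glGetString(GL_VERSION);
// Log all three with every bug report; behaviour can differ between
// driver versions even on the same GPU.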

 

From a professional graphics programmer's viewpoint, if I were porting a game from Mac to Windows, there'd be a lot of merit in using GL on the Mac and D3D on Windows, just so the implementation of the API is consistent and not driver-dependent...

[quote name='Servant of the Lord']Since they are both just different ways of accessing the same video card, and neither does a lot of heavy processing itself, they should both be fairly close in speed.[/quote]
The big part of the DX vs OpenGL difference on Windows for Valve most likely boils down to it being D3D9, which has a higher draw-call overhead than OpenGL and D3D10+; that can easily add up to a hundred or so microseconds per frame (this seems to be Valve's conclusion as well).
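Back-of-envelope on that "hundred or so microseconds" (the numbers are assumptions for illustration, not measurements):

const double extraUsPerCall    = 0.1;  // assumed extra CPU cost per D3D9 draw call
const int    drawCallsPerFrame = 1000; // assumed scene complexity
const double extraUsPerFrame   = extraUsPerCall * drawCallsPerFrame; // = 100 us per frame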
It is pretty much irrelevant now, since D3D9 is on its last legs anyway.

(In reply to some other post; not quoting, since the forum screws up my posts anyway and it's a pain to fix every time.)
I don't quite see where the claim that Microsoft purposely slowed down OpenGL for Vista came from; I was under the impression that the OpenGL->D3D wrapper they added in Vista was only supposed to replace the insanely slow OpenGL software renderer from older Windows versions. (So if anything, they made OpenGL without proper drivers faster.) Edited by SimonForsman

But today there is no question that DirectX 11 is the clear winner. This is why even Sony® (competitor of Microsoft®) uses this API for PlayStation 4 (with just a few modifications).

 

Sony is using DirectX 11 for the PS4? Is that rumour or unreleased insider knowledge? If the latter, don't risk breaking any NDAs.

 

The only reports I can find on the matter are this rumor (from eight months ago), which was later corrected by another rumor to say the PS4 will be running OpenGL natively.

(In reply to some other post; not quoting, since the forum screws up my posts anyway and it's a pain to fix every time.)
I don't quite see where the claim that Microsoft purposely slowed down OpenGL for Vista came from; I was under the impression that the OpenGL->D3D wrapper they added in Vista was only supposed to replace the insanely slow OpenGL software renderer from older Windows versions. (So if anything, they made OpenGL without proper drivers faster.)

 

There were (and I'm working from memory here, I admit) a couple of aspects to it. One was real, regarding how OpenGL framebuffers would compose with the D3D-driven desktop and windows; however, that one did get sorted out once MS gave a little on it, with some pressure from the IHVs.

 

The other is, as you say, the apparent OpenGL->D3D layering, which many took to mean (without bothering to look into it, just looking at a slide) that OpenGL would sit on top of D3D; what it REALLY meant was that MS was planning to provide an OGL 1.4 implementation based on D3D (I'm not sure they ever did, in the end).

(At the time this was going down I was using OpenGL; I heard the above, did a 'ffs...', and then, once I looked at the details, realised the panic was rubbish in this regard...)

 

With regards to MS 'slowing down' OpenGL: many, many years ago they were on the ARB (pre-2003, I think?), so they had the opportunity to do so with regards to the spec, but they didn't have to. Back then the ARB was an infighting mess - a running conflict between the interests of ATi, NVidia, Intel, SGI & 3DLabs - so getting anything done was a nightmare, which is why nothing got done. GL2.0 was the first casualty in that war, and Longs Peak was the most recent, even after they all started to get along.

[quote name='Servant of the Lord']Sony is using DirectX 11 for the PS4? Is that rumour or unreleased insider knowledge? If the latter, don't risk breaking any NDAs.[/quote]

 

It isn't and it isn't.

 

I think I can say that without the NDA Ninjas breaking my door down anyway...


Back on topic, and regarding performance: one of the often-overlooked differences between the two APIs is that OpenGL allows semi-arbitrary software fallbacks whereas D3D does not. This is a fairly important distinction - with OpenGL, a glDrawElements call (for example) is not allowed to fail and is not specified to fail; if the parameters supplied for the call (or for any of the setup required to make it) exceed hardware capabilities (while still being within the capabilities exposed by the driver), then it must be emulated in software. Compare to D3D, where you get what's available on the hardware and nothing else; that means you may have a lot more work to do to ensure that you fit within those capabilities, but once you do, you know your call will work and will not fall back.
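For example, staying on the hardware path in GL means checking the limits yourself up front (a minimal sketch; which limits matter depends on your usage):

GLint maxTexSize = 0, maxAttribs = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTexSize);   // driver-exposed limit
glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, &maxAttribs); // driver-exposed limit
// These are what the *driver* exposes; the hardware's real limits may be
// lower, and that gap is exactly where GL is allowed to fall back to
// software, while D3D would simply not have offered the capability at all.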

 

There are perfectly valid arguments to be made for and against both design philosophies and I'm not going to make judgement for or against either here.

 

Regarding the original question - on another viewing, it doesn't actually make any sense, because it seems to come from the assumption that OpenGL is some form of pure software library. Ummm, no, it's not. OpenGL is a software interface to graphics hardware (page 1, line 1 of any OpenGL spec), and it's still the graphics hardware that is the ultimate arbiter of performance. It's also the case that an OpenGL ICD is a full replacement for Microsoft's software implementation, so those OpenGL calls you make are being made to the graphics hardware vendor's implementation, not to anything provided by Microsoft. If there are performance issues, take it up with your GL_VENDOR in the first instance.

Sony is using DirectX 11 for the PS4? Is that rumour or unreleased insider knowledge? If the latter, don't risk breaking any NDAs.
 
The only reports I can find on the matter are this rumor (from eight months ago), which was later corrected by another rumor to say the PS4 will be running OpenGL natively.
Generally speaking, console graphics APIs never exactly match desktop graphics APIs.
Because each console is built around a specific GPU from a specific vendor, the actual API used to control the GPU is usually written by that vendor (or at least in cooperation with them). This means the API may be based on a desktop API, and might end up being very similar to one, but it's going to be a lot simpler, and allow much lower-level control, because it only targets a single hardware spec.
Often, functions that are implemented inside the driver on the PC, like the VRAM allocator, can (or must) be implemented in the game engine.
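Purely as a hypothetical illustration (not any real console SDK), the sort of thing that moves into engine code:

#include <cstddef> // size_t
#include <cstdint> // uintptr_t

// A trivial bump allocator over a fixed VRAM range the engine now owns itself.
struct VramArena {
    uintptr_t base;   // start of the engine-owned VRAM region
    size_t    size;   // total bytes available
    size_t    offset; // next free byte
};

uintptr_t vramAlloc(VramArena& a, size_t bytes, size_t align) {
    size_t aligned = (a.offset + align - 1) & ~(align - 1); // round up to alignment
    if (aligned + bytes > a.size) return 0;                 // out of space
    a.offset = aligned + bytes;
    return a.base + aligned;
}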

Another (hypothetical) way to look at it:
MS builds the D3D11 API, and nVidia then has to write drivers that implement this API for each of their specific GPUs.
nVidia then sells a particular GPU to a console maker, who also needs a graphics API. nVidia ports their driver code to that console, with that code becoming the console's graphics API, which is going to end up looking quite similar to either D3D11 or GL4.

[quote name='Servant of the Lord']It may be faster in the exact, specific way that Valve's (aging) engine architecture could benefit from... Since they are both just different ways of accessing the same video card, and neither does a lot of heavy processing itself, they should both be fairly close in speed.[/quote]

 

Somebody in another forum tried to explain it: according to him, OpenGL is somewhat looser than Direct3D when it comes to resource management, which could allow the driver to make some more optimizations. I've never used Direct3D, so I can't tell. Then again, it was Nvidia hardware Valve was using for testing, and Nvidia loves OpenGL. Make what you want of it =P

 

The sad thing, though, is that apparently Valve was just using a Direct3D wrapper (à la Wine). Ouch if that's the case - it'd mean emulated Direct3D is faster than Direct3D itself...

 

[quote name='mhagain']The problem was that they also wanted D3D, but that wasn't because of any evil conspiracy; it was because MS were (at the time, at least) a fairly fragmented company where the left hand didn't even know what the right hand was doing.[/quote]

 

Windows NT team vs. Windows 95 team? I had heard it had to do with something like that (with the Windows NT team not wanting to give their OpenGL code back to the Windows 95 team). I never found any reliable sources, though, so I'd rather call it FUD for now.
