Glass_Knife

OpenGL
Does Microsoft purposely slow down OpenGL?

27 posts in this topic

While reading the "Game Audio Tutorial" book, http://amzn.com/0240817265, it mentioned that OpenGL is faster than DirectX, but that Windows and DirectX purposely slow OpenGL down so that DirectX appears to be faster. I have never heard this before, and I was just curious whether there are any resources that talk about this?

I'm currently using some OpenGL code on Windows, and I've been wondering if DirectX would be better. This may change my mind. Plus, with Valve releasing Steam for Linux on OpenGL, I figured it could be true.

Thanks,

What samoth said. Besides, OpenGL is still widely used for scientific visualization and CAD/CAM work. If Microsoft gimped OpenGL, we'd be hearing about it from more sources than a book. Not to mention that someone would have figured out a workaround by now.

Why you should use OpenGL and not DirectX - Interesting blog post on the subject.

Interesting read. Thanks for that!

Yup, that article is too biased to be taken seriously.

haha, looks like phantom is still mad at the board, but heck, I still am too.

It looks like GL 4.3 & ES 3.0 are finally heading in the right direction, and MS went off the rails with the (again!) decision to make DX 11.1 Windows 8 only. Furthermore, there are finally good GL tools we can use (gDebugger is now free, PerfStudio is great, and so is NSight).

 

This answer shows the history of OpenGL vs DirectX throughout the years and nicely explains why it was purely GL's fault that it got to the state it is in now (barely used in games).

Yes, it seems this topic took off. I wasn't trying to start a DirectX vs OpenGL cage match, but I don't mind all the articles. There are lots of different points of view.

I couldn't find anything about OpenGL being slowed down. The author was probably biased, and reacting to the FUD campaign discussed previously.

I've used (and cursed) both APIs, and, like any tool, they've got their uses.
It looks like GL 4.3 & ES 3.0 are finally heading in the right direction, and MS went off the rails with the (again!) decision to make DX 11.1 Windows 8 only.
OpenGL|ES is certainly much saner than OpenGL; in the mobile world it's a good thing indeed.

OpenGL I still consider 'broken' while the bind-to-edit model still exists - it's just too easy to introduce bugs and unexpected behaviour (take VAOs: bind a VAO, then bind another buffer and BAM! your original VAO is now changed unexpectedly). Don't get me wrong, OpenGL is improving, and it needs to, because without a strong API to counter it D3D will continue to slow down and coast a bit, but bind-to-edit is just so weak when compared to the immutable objects and explicit edit model of D3D.
(Which I consider annoying as there are at least two features of GL (multi-drawindirect and AMD's PRT extension) which I'd like to play with, but every time I think about using GL it makes me sad :( )

As for DX11.1 - some of it is coming back to Win7 as they need it for IE10 support; I can't recall which bits off the top of my head, however, nor can I recall whether the interesting bits are among them. Edited by phantom
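A minimal sketch of the VAO bind-to-edit gotcha described above - assuming a GL 3.x core context and GLEW-style loading; the names here are illustrative, not from the post:

    #include <GL/glew.h>

    // GL_ELEMENT_ARRAY_BUFFER binding is part of VAO state, so binding a different
    // index buffer while a VAO is bound silently rewrites that VAO.
    void BindToEditGotcha()
    {
        GLuint vao = 0, indexBufA = 0, indexBufB = 0;
        glGenVertexArrays(1, &vao);
        glGenBuffers(1, &indexBufA);
        glGenBuffers(1, &indexBufB);

        glBindVertexArray(vao);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufA);   // captured by 'vao'
        // ... vertex attribute setup for this mesh ...

        // Elsewhere, indices for another mesh get uploaded without unbinding the VAO first:
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufB);
        // 'vao' now references indexBufB; the next glDrawElements() through it reads the wrong indices.

        glBindVertexArray(0);   // the usual defence: unbind the VAO before touching unrelated state
    }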
It looks like GL 4.3 & ES 3.0 are finally heading in the right direction, and MS went off the rails with the (again!) decision to make DX 11.1 Windows 8 only.

Wait, what? So will DX11 be the last DX version for Win7? That sounds pretty insane. I could understand not backporting DX10 to XP, since XP was an ancient and outdated OS at the time, but Win7 is still fairly new. OpenGL doesn't suck as badly today (modern OpenGL is quite pleasant to work with) as it did when they pulled the plug on XP, and Apple has gained ground, so it seems to me that this could be a fairly risky move.

Although it could just be that this .1 release mostly adds tablet/touch-related features and that we'll get a D3D12 version for Win7 anyway. Edited by SimonForsman

Why you should use OpenGL and not DirectX - Interesting blog post on the subject.

"Interesting" and mostly biased, rubbish, and wrong.

 

I made a couple of blog posts on here taking the article apart - basically the guy doesn't like DX, has this rose tinted view about OpenGL and feels there is a vast conspiracy to Keep OpenGL Down... which is rubbish.

 

Even the 'zomg! faster draw calls!' point he made is a non-event; on DX9 with 'small' draw calls it was a problem but DX10 and DX11 have since removed it and 'small' draw calls are so far from the norm it isn't worth caring about.

 

(And as someone who was using OpenGL from ~99 until 2008 I have a certain perspective; heck some of the older members might recall me defending aspects of 'GL before the Longs Peak screw up, which is when I said 'bye' to using GL and went to the saner DX10 and now DX11 land...)

 

The biggest plus point for OpenGL is that it is currently the only way to access the latest GPU features on Windows XP / Vista, and if you are *extremely careful* you can get the same code running on Linux/macOS. Semantics aside, there really isn't that much of a difference between D3D & GL4 imho. I'll give you the point about bind-to-edit, although thin wrappers (dressed up to look like D3D) seem to be the approach most people take these days. At least they've finally divorced the texture data from the sampler parameters! :)


It may be faster in the exact specific way that Valve's (aging) engine architecture could benefit from, or on those specific video cards they were testing, but that doesn't mean OpenGL is faster than DirectX in general. Part of the speed gain, they even hinted, was from Linux vs Windows and not OpenGL vs DirectX specifically, and the difference between 3.30 and 3.17 milliseconds per frame is only 0.13 milliseconds.

 

Given that they are both different ways of accessing the same video card, and neither does a lot of heavy processing itself, they should both be fairly close in speed.

I have a contract with Addison-Wesley Professional regarding an OpenGL ES 2.0 book, so I should stand on equal ground with the original author of the book that misled you.
Here is my take on the whole situation.


Firstly, OpenGL wasn't designed for real-time graphics. It was originally intended primarily for CAD and graphing, which is why its coordinate system puts [0,0] in the lower-left instead of in the upper-left as in every other rendering system on the planet.
But Microsoft® was also not very good at creating usable rendering pipelines back in the day.
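A tiny sketch of that origin difference (hypothetical values, not from the post): window systems and Direct3D render targets put (0,0) at the top-left, while OpenGL's window-space origin is the bottom-left, so y has to be flipped for calls such as glScissor or glReadPixels.

    // Convert a window-system y coordinate (top-left origin) into OpenGL's
    // bottom-left convention, e.g. before calling glScissor or glReadPixels.
    int FlipToGLWindowY(int yTopLeft, int clientHeight)
    {
        return clientHeight - 1 - yTopLeft;   // e.g. y = 100 in a 720-pixel-tall window -> 619
    }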

Did Microsoft® ever try to slow down OpenGL? No. It wouldn’t even be possible since that is up to the vendor.
Did Microsoft® unfairly push DirectX? Yes. They never shipped any version of OpenGL natively other than 1.1, and they intentionally kept support for OpenGL to a minimum in order to build support for DirectX.


Initially neither API was very good. OpenGL was designed for the wrong thing and DirectX was just practice.
At one time it was actually debatable as to whether or not Microsoft was slowing down OpenGL for its own good.

But Khronos kept pushing its state-driven design and Microsoft was forced to keep advancing DirectX.
But the end result is that OpenGL's model ensures it will always be second-best to DirectX. It's a state machine, and while both APIs have flaws, OpenGL is a framework built on top of technology built for other purposes. The Khronos group saw the potential for overlap and took off with it, eventually creating OpenGL ES, which is basically a game-oriented version of an API that was meant for graphing. They took the best parts of OpenGL and put them into a game-oriented API package, but that alone is not enough.

Microsoft® stopped trying to figure out what we developers need and finally decided to play ball starting with DirectX 10. It was then no longer a matter of, “Our API supports this and this and that,” but a matter of, “We allow you to have access to everything, so you can do whatever the fuck you want”.

Due to Microsoft® not knowing a thing about graphics and OpenGL being originally intended for graphing, both APIs sucked at the start.
As they grew in parallel, there was never a point when either was intentionally slowed, but definitely a point when one was less-supported.

But today there is no question that DirectX 11 is the clear winner. This is why even Sony® (competitor of Microsoft®) uses this API for PlayStation 4 (with just a few modifications).
When DirectX 9 became stagnant OpenGL’s design allowed it to keep advancing, and it started to become a major competitor with DirectX.
But DirectX 10 and DirectX 11 gave more access to the underlying hardware, and this allowed it to take off, and OpenGL was left playing catch-up. OpenGL 4.0 is basically Khronos’s version of DirectX 11. If you look carefully you will notice that for a while it was Microsoft® who was adding features to DirectX based on OpenGL features, but later (and to this day) it was the opposite.


There has been some heightened competition between the two APIs, but at no point was either intentionally slowed.
OpenGL builds off a design that was initially flawed, and from DirectX 10 onward it will always be the slower API. Its very design dictates that.


L. Spiro

There is a whole load of nonsense and barely-informed FUD (in both the computing acronym and Glasgow vernacular senses of the word) on both sides of this particular argument.  The truth is that Microsoft wanted OpenGL; they wanted it to be good, they wanted it to run fast, they wanted it to run well - because they wanted to break into the CAD workstation market (games development ain't everything).  The problem was that they also wanted D3D, but that wasn't because of any evil conspiracy; it was because MS are (or were at the time) a fairly fragmented company where the left hand doesn't (or didn't at the time) even know what the right hand was doing.

 

That Wolfire blog article does more harm than good to the OpenGL "cause" because it's quite obviously ill-informed and biased, not to mention blatantly inaccurate and shamelessly untrue in many cases (full OpenGL on PS3 and mobile platforms?  Yeah right...)

 

D3D didn't succeed because of any of the paranoid crap that is so frequently put forward; D3D succeeded because it became good enough (ironically, in the very best "Unix tradition" of "worse is better") and offered a single, consistent and hardware-independent way of doing things at the same time as OpenGL was going off to loo-lah land with GL_ARB_do_it_this_way, GL_ARB_do_it_that_way and GL_ARB_do_it_t'other_way for every piece of essential functionality.


On topic - since D3D became able to compete, MS simply didn't actively support GL, which is quite a stretch from slowing it down.

Off topic-

(full OpenGL on PS3 and mobile platforms? Yeah right...)

Yeah, the portability argument for GL always irks me.

 

Desktops have GL1.x, 2.x, 3.x, 4.x.

Mobiles have GLES1.x, GLES2.x

Playstation has PSGL (which is just an emulation layer over GCM, giving a similar API to GLES)

 

Each of these is a different API, and code written for one still needs to be ported to be used on another. Further, every GPU driver on Windows, and every version of an Apple OS, contains its own implementation of these APIs with slightly different behaviour, greatly complicating your QA procedures.

 

From a professional graphics programmer's viewpoint, if I were porting a game from Mac to Windows, there'd be a lot of merit in using GL on the Mac and D3D on Windows, just so the implementation of the API is consistent and not driver-dependent...

It may be faster in the exact specific way that Valve's (aging) engine architecture could benefit from, or on those specific video cards they were testing, but that doesn't mean OpenGL is faster than DirectX in general.

The big part of the DX vs OpenGL difference on Windows for Valve most likely boils down to it being D3D9, which has a higher draw call overhead than OpenGL and D3D10+, and that can easily add up to a hundred or so microseconds per frame (this seems to be Valve's conclusion as well).
It is pretty much irrelevant now since D3D9 is on its last legs anyway.

(In reply to some other post; not quoting since the forum screws up my posts anyway and it's a pain to fix every time.)
I don't quite see where the info that Microsoft purposely slowed down OpenGL for Vista came from. I was under the impression that the OpenGL->D3D wrapper they added in Vista was only supposed to replace the insanely slow OpenGL software renderer they had in older Windows versions. (So if anything, they made OpenGL without proper drivers faster.) Edited by SimonForsman
But today there is no question that DirectX 11 is the clear winner. This is why even Sony® (competitor of Microsoft®) uses this API for PlayStation 4 (with just a few modifications).

 

Sony is using DirectX 11 for the PS4? Is that rumour or unreleased insider knowledge? If the latter, don't risk breaking any NDAs.

 

The only report I can find on the matter is this rumor (from eight months ago), which was later corrected by another rumor saying the PS4 will be running OpenGL natively.

(In reply to some other post; not quoting since the forum screws up my posts anyway and it's a pain to fix every time.)
I don't quite see where the info that Microsoft purposely slowed down OpenGL for Vista came from. I was under the impression that the OpenGL->D3D wrapper they added in Vista was only supposed to replace the insanely slow OpenGL software renderer they had in older Windows versions. (So if anything, they made OpenGL without proper drivers faster.)

 

There were (and I'm working from memory here, I admit) a couple of aspects to it. One of them was real, with regards to how OpenGL frame buffers would compose with the D3D-driven desktop and windows; however, that one did get sorted out once MS gave a little on it with some pressure from the IHVs.

 

The other is, as you say, regarding the apparent OpenGL->D3D layering, which many took to mean (without bothering to look into it, just looking at a slide) that OpenGL would sit on D3D; what it REALLY meant was that MS was planning to provide an OpenGL 1.4 implementation based on D3D (I'm not sure they ever did in the end, at that).

(At the time this was going down I was using OpenGL; I heard the above, did a 'ffs...', and then once I looked at the details realised the panic was rubbish in this regard...)

 

With regards to MS 'slowing down' OpenGL: many, many years ago they were on the ARB (pre-2003, I think?), so they had the opportunity to do so with regards to the spec, but they didn't have to. Back then the ARB was an infighting mess, a running conflict between the interests of ATi, NVidia, Intel, SGI & 3DLabs, so getting anything done was a nightmare, which is why nothing got done - GL 2.0 was the first casualty in that war and Longs Peak was the most recent, even after they all started to get along..

But today there is no question that DirectX 11 is the clear winner. This is why even Sony® (competitor of Microsoft®) uses this API for PlayStation 4 (with just a few modifications).

 

Sony is using DirectX 11 for the PS4? Is that rumour or unreleased insider knowledge? If the latter, don't risk breaking any NDAs.

 

The only report I can find on the matter is this rumor (from eight months ago), which was later corrected by another rumor saying the PS4 will be running OpenGL natively.

 

It isn't and it isn't.

 

I think I can say that without the NDA Ninjas breaking my door down anyway...


Back on topic, and regarding performance, one of the often overlooked differences between the two APIs is that OpenGL allows semi-arbitrary software fallbacks whereas D3D does not.  This is a fairly important distinction - with OpenGL a glDrawElements call (for example) is not allowed to fail, it's not specified to fail, and if the parameters supplied for the call (or for any of the setup required to make the call) exceed hardware capabilities (but while still being within the capabilities exposed by the driver) then it must be emulated in software.  Compare to D3D where you get what's available on the hardware and nothing else; that means that you may have a lot more work to do in order to ensure that you fit within those capabilities, but once you do that you know that your call will work and will not fall back.
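A rough sketch of what that extra care looks like from the GL side, assuming a valid context and GLEW for loading; some of these limits are hard caps, while others (the glDrawRangeElements ranges) are only recommendations, and exceeding them may still succeed but drop onto a slower path:

    #include <GL/glew.h>
    #include <cstdio>

    // Query a few implementation limits up front so draw calls and resources stay
    // within what the hardware path handles comfortably.
    void ReportSomeLimits()
    {
        GLint maxTexSize = 0, maxVerts = 0, maxIndices = 0;
        glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTexSize);       // hard cap on texture dimensions
        glGetIntegerv(GL_MAX_ELEMENTS_VERTICES, &maxVerts);    // recommended glDrawRangeElements ranges;
        glGetIntegerv(GL_MAX_ELEMENTS_INDICES, &maxIndices);   // larger batches may still work, just slowly
        std::printf("max texture %d, recommended ranges: %d vertices / %d indices\n",
                    maxTexSize, maxVerts, maxIndices);
    }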

 

There are perfectly valid arguments to be made for and against both design philosophies and I'm not going to make judgement for or against either here.

 

Regarding the original question - on another viewing it doesn't actually make any sense because it seems to come from the assumption that OpenGL is some form of pure software library.  Ummm, no, it's not.  OpenGL is a software interface to graphics hardware (page 1, line 1 of any OpenGL spec) and it's still the graphics hardware that is the ultimate arbiter of performance.  It's also the case that an OpenGL ICD is a full replacement for Microsoft's software implementation, so those OpenGL calls you make - they're being made to the graphics hardware vendor's implementation, not to anything provided by Microsoft.  If there are performance issues then take it up with your GL_VENDOR in the first instance.
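A small sketch of that last check, assuming a current GL context and GLEW for loading; the "GDI Generic" string is what Microsoft's bundled software implementation typically reports, though exact strings vary by driver:

    #include <GL/glew.h>
    #include <cstdio>
    #include <cstring>

    // Print which implementation is answering GL calls: the hardware vendor's ICD,
    // or Microsoft's bundled GL 1.1 software renderer.
    void ReportGLImplementation()
    {
        const char* vendor   = reinterpret_cast<const char*>(glGetString(GL_VENDOR));
        const char* renderer = reinterpret_cast<const char*>(glGetString(GL_RENDERER));
        const char* version  = reinterpret_cast<const char*>(glGetString(GL_VERSION));
        std::printf("GL_VENDOR: %s\nGL_RENDERER: %s\nGL_VERSION: %s\n", vendor, renderer, version);

        if (renderer && std::strstr(renderer, "GDI Generic"))
            std::fprintf(stderr, "Software fallback in use - check the hardware vendor's GL driver.\n");
    }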

Sony is using DirectX 11 for the PS4? Is that rumour or unreleased insider knowledge? If the latter, don't risk breaking any NDAs.
 
The only report I can find on the matter is this rumor (from eight months ago), which was later corrected by another rumor saying the PS4 will be running OpenGL natively.
Generally speaking, console graphics APIs never exactly match desktop graphics APIs.
Because each console is built using a specific GPU from a specific vendor, the actual API used to control the GPU is usually written by that vendor (or at least in cooperation with them). This means the API may be based on a desktop API, and might end up being very similar to one, but it's going to be a lot simpler, and allow much lower-level control due to it only targeting a single hardware spec.
Often functions that are implemented inside the driver on the PC, like the VRAM allocator, can (or must) be implemented in the game-engine.

Another (hypothetical) way to look at it --
MS builds the D3D11 API, and nVidia then has to write drivers that implement this API for each of their specific GPUs.
nVidia then sells a particular GPU to a console maker, who also needs a graphics API. nVidia ports their driver code to that console, with that code becoming the console's graphics API, which is going to end up looking quite similar to either D3D11 or GL4.
It may be faster in the exact specific way that Valve's (aging) engine architecture could benefit from, or on those specific video cards they were testing, but that doesn't mean OpenGL is faster than DirectX in general. Part of the speed gain, they even hinted, was from Linux vs Windows and not OpenGL vs DirectX specifically, and the difference between 3.30 and 3.17 milliseconds per frame is only 0.13 milliseconds.

 

Given that they are both different ways of accessing the same video card, and neither does a lot of heavy processing itself, they should both be fairly close in speed.

 

Somebody in another forum tried to explain it; according to him, OpenGL is somewhat looser than Direct3D when it comes to resource management, which could allow the driver to make some more optimizations. I've never used Direct3D so I can't tell. Then again, it was Nvidia hardware Valve was using for testing, and Nvidia loves OpenGL. Make what you want out of it =P

 

The sad thing, though, is that apparently Valve was just using a Direct3D wrapper (a la Wine). Ouch if this is the case; that'd mean emulated Direct3D is faster than Direct3D itself...

 

There is a whole load of nonsense and barely-informed FUD (in both the computing acronym and Glasgow vernacular senses of the word) on both sides of this particular argument.  The truth is that Microsoft wanted OpenGL; they wanted it to be good, they wanted it to run fast, they wanted it to run well - because they wanted to break into the CAD workstation market (games development ain't everything).  The problem was that they also wanted D3D, but that wasn't because of any evil conspiracy; it was because MS are (or were at the time) a fairly fragmented company where the left hand doesn't (or didn't at the time) even know what the right hand was doing.

 

Windows NT team vs. Windows 95 team? Because I had heard it had to do with something like that (with the Windows NT team not wanting to give back their OpenGL code to the Windows 95 team). I never found any reliable sources though so I'd rather call it FUD for now.

