but normal programmers don't exactly have a whole lot of say in all this (this being more a vendor and ARB issue), so whether or not it is *good* is secondary to whether or not programmers have much choice in the matter.
but, in the end, it is all about trade-offs.
nothing is perfect, FWIW...
for things that are not within one's power to change, you take them as they are, and make "locally optimal" tradeoffs within the set of available options.
Thing is though, there is absolutely no tradeoff with the bind-to-modify model. The model itself gives you absolutely nothing in return, and introduces a whole heap of needless and painful state tracking. Complex and fiddly code paths that have unwanted consequences have a price beyond the time taken to write them; you've also got to maintain the mess going forward, and if it's bad enough it can act to prevent you from introducing cool new features that would otherwise be utter simplicity.
This was obviously broken when GL_ARB_multitexture was introduced, was even more problematic when GL_ARB_vertex_buffer_object was introduced, and the ARB themselves are showing little inclination to resolve it. Thank heavens for GL_EXT_direct_state_access, which even id Software are using in much of their more recent code (see https://github.com/id-Software/DOOM-3-BFG/blob/master/neo/renderer/Image_load.cpp#L453 for example).
What's particularly annoying is that bind-to-modify was recognised as a potential problem as far back as the original GL_EXT_texture_object! See http://www.opengl.org/registry/specs/EXT/texture_object.txt. Even more annoying is that while some good new functionality has come in with a civilized DSA API - sampler objects, the promotion of the glProgramUniform calls to core - a whole heap has also come in without one - vertex attrib binding, texture storage, etc.
Shrugging it off with "ah just accept it, sure that's the price of portability" is not good enough; OpenGL used to be a great API and should be one again, and people acting as though this is not a problem (or even worse - acting as though it's somehow a good thing) are not helping that one little bit.
people have a say in things they can influence or control; otherwise, they just have to live with whatever they are given.
it is along similar lines to complaining about weaknesses in the x86 or ARM ISAs, or the SysV/AMD64 ABI, or some design issues in the core of C and C++, ... these things have problems, but for most end-developers, there isn't a whole lot of choice.
most people will just live with it, as a part of the natural cost of doing business...
the more significant question, then, is what sorts of platforms they want their code to run on, and this is where the tradeoffs come in.
Normal programmers do have a say. They can vote with their feet and walk away, which is exactly what has happened. That's a power that shouldn't be underestimated - it's the same power that forced Microsoft to re-examine what they did with Vista, for example.
Regarding platforms, this is a very muddied issue.
First of all, we can discount mobile platforms. They don't use OpenGL - they use GL ES, so unless you restrict yourself to a common subset of both, you're not going to hit those (even if you do, you'll get more pain and suffering from trying to get the performance up and from networking/sound/input/windowing system APIs than from developing for 2 graphics APIs anyway).
We can discount consoles. Even on those that have GL available, it's GL ES too, and anyway the preferred approach is to use the console's native API instead.
That leaves 3 - Windows, Mac and Linux. Now things get even muddier.
The thing is, there are actually two types of "platform" at work here - software platforms (already mentioned) and hardware platforms (NV, AMD and Intel). Plus they don't have equal market shares. So it's actually incredibly misleading to talk in any way about number of platforms; instead you need to talk about the percentage of your potential target market that you're going to hit.
Here's the bit where things turn upside-down.
In the general case for gaming PCs we're talking something like 95% Windows, 4% Mac and 1% Linux. So even if you restrict yourself to something that's Windows-only, you're still potentially going to hit 95% of your target market.
Now let's go cross-platform on the software side, and look at those hardware platforms I mentioned. By being cross-platform in software you're hitting 100% of your target market, but - and it's a big but - OpenGL only runs reliably on one hardware platform on Windows, and that's NV. Best case (according to the latest Steam hardware survey) is that's 52%.
So, by being Windows-only you hit 95% of your target market but it performs reliably on 100% of those machines.
By being cross-platform you hit 100% of your target market but it performs reliably on only 52% of those machines.
That sucks, doesn't it?
Now, maybe you're developing for a very specialized community where the figures are skewed. If so then you know your audience and you go for it. Even gaming PCs (which I based my figures on) could be considered a specialized audience, but in the completely general case the figures look even worse. There's an awful lot of "home entertainment", "multimedia" or business-class PCs out there with Intel graphics, there's an awful lot of laptops, there's an awful lot of switchable-graphics monstrosities, there's an awful lot of users with OEM drivers, there's an awful lot of users who never upgrade their drivers.
And that's the final reality - it's not number of platforms that matters; that doesn't matter at all. It's percentage of target markets, and outside of specialized communities being cross-platform will get you a significantly lower percentage than being Windows-only.
Edited by mhagain, 13 February 2013 - 09:15 PM.