Game development on: Linux or Windows

45 comments, last by cr88192 11 years, 2 months ago

> but, in all, it is more about trade-offs.

> nothing is perfect, FWIW...

> for the things that are not within one's power to change, a person can take them as they are, and make "locally optimal" trade-offs within the set of available options.

Thing is though, there is absolutely no tradeoff with the bind-to-modify model. The model itself gives you absolutely nothing in return, and introduces a whole heap of needless and painfully careful state tracking. Complex and fiddly code paths that have unwanted consequences have a price beyond the time taken to write them; you've also got to maintain the mess going forward, and if it's bad enough it can act to prevent you from introducing cool new features that would otherwise be utter simplicity.

This was obviously broken when GL_ARB_multitexture was introduced, was even more problematic when GL_ARB_vertex_buffer_object was introduced, and the ARB themselves are showing little inclination to resolve it. Thank heavens for GL_EXT_direct_state_access, which even id Software are using in much of their more recent code (see https://github.com/id-Software/DOOM-3-BFG/blob/master/neo/renderer/Image_load.cpp#L453 for example).
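To make that concrete, here's a minimal sketch of the difference (assuming a current GL context with the extension loaded, e.g. via GLEW; tex is a texture name created elsewhere):

    /* Bind-to-modify: changing a texture parameter disturbs whatever is
     * bound on the active texture unit, so callers must save and restore. */
    GLint prev;
    glGetIntegerv(GL_TEXTURE_BINDING_2D, &prev);   /* save current binding */
    glBindTexture(GL_TEXTURE_2D, tex);             /* bind just to modify */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glBindTexture(GL_TEXTURE_2D, (GLuint)prev);    /* restore */

    /* GL_EXT_direct_state_access: one call, no binding disturbed. */
    glTextureParameteriEXT(tex, GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);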

What's particularly annoying is that bind-to-modify was recognised as a potential problem as far back as the original GL_EXT_texture_object! See http://www.opengl.org/registry/specs/EXT/texture_object.txt. Even more annoying is that while some good new functionality has come in with a civilized DSA API (sampler objects, the promotion of the glProgramUniform calls to core), a whole heap has also come in without one (vertex attrib binding, texture storage, etc.).
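For reference, the DSA-style additions mentioned above look like this (a minimal sketch; prog, loc and unit stand in for values produced by the usual shader/texture setup):

    /* Sampler objects (GL 3.3): filtering state lives in its own object
     * and is set directly, no texture bind required. */
    GLuint samp;
    glGenSamplers(1, &samp);
    glSamplerParameteri(samp, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glBindSampler(unit, samp);          /* attach to a texture unit */

    /* glProgramUniform* (core since GL 4.1): set a uniform on a program
     * without having to glUseProgram it first. */
    glProgramUniform1i(prog, loc, 0);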

Shrugging it off with "ah just accept it, sure that's the price of portability" is not good enough; OpenGL used to be a great API and should be one again, and people acting as though this is not a problem (or even worse - acting as though it's somehow a good thing) are not helping that one little bit.

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.



> but, in all, it is more about trade-offs. nothing is perfect, FWIW... [...]

> Thing is though, there is absolutely no tradeoff with the bind-to-modify model. [...] people acting as though this is not a problem (or even worse - acting as though it's somehow a good thing) are not helping that one little bit.


but, normal programmers don't exactly have a whole lot of say in all this (this being more a vendor and ARB issue), so whether or not it is *good* is secondary to whether or not programmers have much choice in the matter.

people have say in things they can influence or control; otherwise, they just have to live with whatever they are given.


it is along similar lines to complaining about weaknesses in the x86 or ARM ISAs, or the SysV/AMD64 ABI, or some design issues in the core of C and C++, ... these things have problems, but for most end-developers, there isn't a whole lot of choice.

most people will just live with it, as part of the natural cost of doing business...

the more significant question, then, is what sorts of platforms they want their code to run on, and this is where the trade-offs come in.

> but, in all, it is more about trade-offs. [...]

> Thing is though, there is absolutely no tradeoff with the bind-to-modify model. [...]

> but, normal programmers don't exactly have a whole lot of say in all this (this being more a vendor and ARB issue) [...] the more significant question, then, is what sorts of platforms they want their code to run on, and this is where the trade-offs come in.

Normal programmers do have a say. They can vote with their feet and walk away, which is exactly what has happened. That's a power that shouldn't be underestimated - it's the same power that forced Microsoft to re-examine what they did with Vista, for example.

Regarding platforms, this is a very muddied issue.

First of all, we can discount mobile platforms. They don't use OpenGL - they use GL ES - so unless you restrict yourself to a common subset of both, you're not going to hit those (and even if you do, you'll get more pain and suffering from trying to get the performance up, and from networking/sound/input/windowing-system APIs, than from developing for two graphics APIs anyway).

We can discount consoles too. Even on those which have GL available it's GL ES, and the preferred approach is to use the console's native API anyway.

That leaves 3 - Windows, Mac and Linux. Now things get even muddier.

The thing is, there are actually two types of "platform" at work here - software platforms (already mentioned) and hardware platforms (NV, AMD and Intel). Plus they don't have equal market shares. So it's actually incredibly misleading to talk in any way about number of platforms; instead you need to talk about the percentage of your potential target market that you're going to hit.

Here's the bit where things turn upside-down.

In the general case for gaming PCs we're talking something like 95% Windows, 4% Mac and 1% Linux. So even if you restrict yourself to something that's Windows-only, you're still potentially going to hit 95% of your target market.

Now let's go cross-platform on the software side, and look at those hardware platforms I mentioned. By being cross-platform in software you're hitting 100% of your target market, but - and it's a big but - OpenGL only runs reliably on one hardware platform on Windows, and that's NV. Best case (according to the latest Steam hardware survey) is that's 52%.

So, by being Windows-only you hit 95% of your target market but it performs reliably on 100% of those machines.

By being cross-platform you hit 100% of your target market but it performs reliably on only 52% of those machines.

That sucks, doesn't it?
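Running the numbers from this post (the 95/4/1 OS split and the 52% NV share are illustrative figures from the era, and I'm assuming GL is treated as reliable on Mac and Linux):

    /* Illustrative effective-reach arithmetic using the figures above. */
    #include <stdio.h>

    int main(void) {
        double windows = 0.95, mac = 0.04, linux_ = 0.01;
        double nv_on_windows = 0.52;   /* Steam survey figure quoted above */

        /* Windows-only D3D: hits Windows users, reliable on all of them. */
        double d3d_reach = windows * 1.0;

        /* Cross-platform GL: hits everyone, but on Windows is only
         * reliable (per the post) on NV hardware. */
        double gl_reach = windows * nv_on_windows + mac + linux_;

        /* prints 95% vs ~54%; the post's "52%" counts only the Windows
         * NV share and ignores the small Mac/Linux slice */
        printf("Windows-only: %.0f%%  cross-platform GL: %.0f%%\n",
               d3d_reach * 100.0, gl_reach * 100.0);
        return 0;
    }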

Now, maybe you're developing for a very specialized community where the figures are skewed. If so then you know your audience and you go for it. Even gaming PCs (which I based my figures on) could be considered a specialized audience, but in the completely general case the figures look even worse. There's an awful lot of "home entertainment", "multimedia" or business-class PCs out there with Intel graphics, there's an awful lot of laptops, there's an awful lot of switchable-graphics monstrosities, there's an awful lot of users with OEM drivers, there's an awful lot of users who never upgrade their drivers.

And that's the final reality - it's not number of platforms that matters; that doesn't matter at all. It's percentage of target markets, and outside of specialized communities being cross-platform will get you a significantly lower percentage than being Windows-only.


> Normal programmers do have a say. They can vote with their feet and walk away [...]

> First of all, we can discount mobile platforms. They don't use OpenGL - they use GL ES [...] We can discount consoles too. Even on those which have GL available it's GL ES [...]

I am including OpenGL ES; while not strictly traditional OpenGL, it is close enough that a renderer can target both (granted, with some glossing, wrapping, and ifdefs in a few areas).
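a minimal sketch of that kind of glossing (USE_GLES2 and the wrapper function are hypothetical names, not from any particular engine):

    /* hypothetical glue confining GL vs GL ES differences to one spot */
    #ifdef USE_GLES2
    #include <GLES2/gl2.h>
    /* ES2 lacks quads, client-side fixed-function state, etc.;
     * small wrappers like the one below fill the gaps */
    #else
    #include <GL/gl.h>
    #endif

    static void upload_vertices(GLuint vbo, const void *data, size_t sz)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        /* same call on both targets; the differences (index types,
         * texture formats, ...) stay behind wrappers like this one */
        glBufferData(GL_ARRAY_BUFFER, (GLsizeiptr)sz, data, GL_STATIC_DRAW);
    }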

> The thing is, there are actually two types of "platform" at work here - software platforms (already mentioned) and hardware platforms (NV, AMD and Intel). [...] OpenGL only runs reliably on one hardware platform on Windows, and that's NV. [...]

> That sucks, doesn't it?

I have generally had good enough success with OpenGL on ATI cards, so no huge issue there (previously I was doing a lot of development on an ATI card; currently I am using an NV card).

the big suck usually comes up with Intel chipsets, but those don't really work well in general IME.

> Now, maybe you're developing for a very specialized community where the figures are skewed. [...] outside of specialized communities being cross-platform will get you a significantly lower percentage than being Windows-only.

doesn't do much for those of us who *do* use Linux sometimes, though.

it also ignores, however, the possibility that the Windows branch of a 3D engine can also use D3D if needed, while keeping an OpenGL backend around for portability.
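a rough sketch of that arrangement (all names here are hypothetical; the point is that each backend hides behind one table of function pointers):

    struct mesh_s;

    /* hypothetical renderer-backend vtable */
    typedef struct renderbackend_s {
        void (*init)(void);
        void (*draw_mesh)(const struct mesh_s *mesh);
        void (*present)(void);
    } renderbackend_t;

    extern renderbackend_t gl_backend;    /* portable baseline */
    #ifdef _WIN32
    extern renderbackend_t d3d_backend;   /* optional Windows-only path */
    #endif

    renderbackend_t *pick_backend(int prefer_d3d)
    {
    #ifdef _WIN32
        if (prefer_d3d)
            return &d3d_backend;
    #endif
        return &gl_backend;                /* default everywhere else */
    }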

but, OpenGL makes a good baseline if a person doesn't want to be locked to a single target.

or, they can use OpenGL-ES if targeting the cell-phone or browser-games market.


it is like how most of the world still uses 32-bit x86 for apps, but writing code that only works on 32-bit x86 still isn't a good idea.
"what if we need to build for 64 bits? or on a target running ARM?", "who ever heard of such a thing!".

doesn't mean a person can't have target-specific code (such as for performance or similar), but it shouldn't really be mandatory for basic operation either.
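a small example of that pattern in C (the function and the SSE2 path are illustrative, not from any particular codebase):

    /* portable by default, with an optional target-specific fast path */
    #if defined(__SSE2__)
    #include <emmintrin.h>
    #endif

    static void mix_samples(short *dst, const short *src, int n)
    {
        int i = 0;
    #if defined(__SSE2__)
        /* target-specific fast path: saturating add, 8 samples per step */
        for (; i + 8 <= n; i += 8) {
            __m128i a = _mm_loadu_si128((const __m128i *)(dst + i));
            __m128i b = _mm_loadu_si128((const __m128i *)(src + i));
            _mm_storeu_si128((__m128i *)(dst + i), _mm_adds_epi16(a, b));
        }
    #endif
        for (; i < n; i++) {              /* portable fallback */
            int v = dst[i] + src[i];
            if (v >  32767) v =  32767;   /* clamp to 16-bit range */
            if (v < -32768) v = -32768;
            dst[i] = (short)v;
        }
    }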

I knew at the start this would turn into an OpenGL vs D3D debate. When I see these debates I ultimately think how pointless they are, because what's actually running all those fancy graphics is the hardware from NV or ATI and the drivers made by NV and ATI; the APIs are just a thin shell around this (if they weren't thin they would be inefficient), whose sole purpose is to unify access to those different drivers and hardware. And if one of those two APIs gives access to more types of systems, it seems to me that it fulfills that purpose of unifying better.

Now, sure, you can target a large percentage of all machines with that other API, and it looks shiny at the moment, but that's like when people thought "oh, those Enron shares fare so well" and put all their money into them; at that moment it may have looked good, but if you make yourself dependent on a single company you don't control, it could all be over in a day, a month or a year, and even though it's very unlikely this company goes under, no one wants to bet everything on that. This applies not only here; I feel it's the same with many other choices, like when people program only for iOS and then possibly realize their app won't get accepted for whatever obscure reason.

Now, for the thread starter this is all irrelevant. He is still young and just learning programming, so he could just pick whatever he feels comfortable with; if he doesn't feel comfortable with Windows, he could lose interest in programming by pressuring himself into using it, even if he'll need to know about it later.

> I knew at the start this would turn into an OpenGL vs D3D debate. [...] if one of those two APIs gives access to more types of systems, it seems to me that it fulfills that purpose of unifying better.

That's certainly true, and it's obvious when you see certain proponents of either API making claims like "this game has better image quality with OpenGL" or "that game sucks because it doesn't use D3D11" that these people really don't know what they're talking about.

> the big suck usually comes up with Intel chipsets, but those don't really work well in general IME

It's dangerous to underestimate Intel. They've been quietly getting better over the past few years, and an HD 3000 or 4000 is actually quite a competent and capable chip. Even going back as far as something like the 915, they had an SM2 part that was perfectly good enough for lighter-weight rendering work. A bit of a blip with their first hardware T&L parts, but those days are over.

They're not currently competitive with the big guns, of course, but if the trend continues (and all indications are that Intel are serious about and committed to this) then within a couple of hardware generations they're going to have something that just may upset the status quo a little.



> I knew at the start this would turn into an OpenGL vs D3D debate. [...]

> That's certainly true, and it's obvious when you see certain proponents of either API making claims like "this game has better image quality with OpenGL" [...] that these people really don't know what they're talking about.


as I understand it, the main thing people do with either API at present is draw lots of big triangle arrays and deal with textures and shaders (and occasionally render-to-texture and other things).

this stuff should then mostly boil down to the hardware.

there is a lot of legacy functionality in OpenGL, but much of it is now either deprecated or absent in GL ES 2; in my engine most of this has since been moved over to wrappers anyways, and much of the lower-level state is managed by a "shader"/"material" system, ...
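a minimal sketch of that kind of material-driven state management (the struct and field names are hypothetical, not the actual engine's):

    /* hypothetical material record: the renderer sets GL state from this,
     * so draw code never touches glEnable/glBlendFunc directly */
    typedef struct material_s {
        GLuint program;               /* linked shader program */
        GLuint texture;               /* base texture */
        GLenum blend_src, blend_dst;  /* blend factors */
        int    depth_write;           /* boolean: write depth? */
    } material_t;

    static void apply_material(const material_t *m)
    {
        glUseProgram(m->program);
        glBindTexture(GL_TEXTURE_2D, m->texture);
        glBlendFunc(m->blend_src, m->blend_dst);
        glDepthMask(m->depth_write ? GL_TRUE : GL_FALSE);
    }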


> It's dangerous to underestimate Intel. They've been quietly getting better over the past few years [...] within a couple of hardware generations they're going to have something that just may upset the status quo a little.


I have a 2009 laptop with an Intel chipset ("Intel GMA" / "Intel Mobile Graphics").
its graphical performance is... not exactly good... along the lines that the newest games it plays well are Quake 2/3 and Half-Life.
Half-Life 2 performance was pretty bad, Portal doesn't work, Doom 3 is pretty dismal as well, ...
Minecraft is "barely usable", ...

maybe "Intel HD" is better; dunno, don't have a newer laptop...

