How long would you support Shader Model 2?

15 comments, last by cozzie 10 years, 2 months ago
In any case, China spends almost as much as US on games.

That's exactly my point, though. China spends as much as the USA, but China has 4.3 times as many people. Which you could reword as: one Chinese customer brings in about 23% of the revenue of a US customer. That's quite consistent with the numbers you find on the internet about Blizzard/WoW in China (there, if I remember correctly, the math came out as roughly 1/5).

On the other hand, localizing for Chinese is many times more complicated than localizing for English. It's already a challenge to do a good localization between somewhat similar, distantly related languages (like English, French, German, and Spanish), but Chinese is in a totally different category.

Also, you are probably aware that the pirate market in Asia is ... huge. Actually, "huge" is not the right word; what is bigger than "huge"? Colossal.

Nowhere else is piracy so prevalent or handled in such an open, natural way. You can buy illegitimate copies of everything (not just software: music, movies, consoles, and fake Rolexes) everywhere, in regular shops, openly. Unless your indie game makes online access mandatory, why assume this stops at indie games?

A ready-made game for the Chinese market? Perfect, they'll just rebrand it, change the start screen, and sell it themselves. It's not like they're not doing this with much bigger players, and there's nothing they can do against it. Because, let's face it, nobody cares what copyright or trademark or other rights you believe you may have. It simply doesn't count in China.

However, I'm in Australia, which is over 50x smaller than India. So even if piracy rates there are 50 times greater, overall revenues would remain about the same for each market.

Well yes, I can see that. A billion times one cent is 10 million dollars :)

What I was wondering is just whether this math, in the net sum (after considering everything), breaks even. I feel more comfortable with a million times 10 dollars, which is the same sum.

But I guess it must "work", since companies are doing it and they're not going bankrupt.


SM2 is actually pretty nice, because you know that none of the fancy features are supported, so you can only write simple shaders.

SM4 is nice too, because the D3D10 specification for SM4 cards is very strict about all standard features being supported.
SM3 is really annoying, because most of its features are completely optional. You can even access some SM4-level features via driver hacks, but only on some cards.

Exactly. SM3 makes a lot of promises, and later you find out that you've been lied to. As in "yeah, we do MRT, but only with at most one render target". To me, "multiple" still means "greater than one".

Under SM2 you basically know that you have 2 guaranteed texture units, and the graphics card can do some very limited vertex and pixel shading with an instruction count of 256 or so, and no branching, and that's pretty much it. Not great, but you know what you have.

And it's just enough to get something like normal mapping, blended detail textures, and a little guy walking and swinging a gun/sword going.

Under SM3, you have all those things that are promised and then taken back in the small print, and you must verify dozens of things to be sure it works (and then, what do you do when something doesn't work... all you can do is fall back to SM2). In the end, the guarantees that you have are only marginally different from SM2.

Under SM4, you have the guarantee that you have 16 (or more) texture units per stage, you know that you have 4 (or maybe more) render targets, unlimited instructions, texture fetch at every stage, etc etc. Which is enough for anything reasonable that you might want to do, without having to worry or having to check anything.

In my opinion/case, supporting SM2 as a fallback is not a few minutes' work, both in the shaders and in my code. That is, if I want to keep the flexibility I currently have and want to keep.

For example, with SM2 I run out of instructions with a few point lights, a directional light, and specular highlights (maybe, if I optimize, also some basic fog), and that doesn't even include normal mapping.

Regarding SM3, what do you mean by having no guarantees that "it will run" on all SM3-supporting GPUs? (And having to check so many things, compared to SM4.)
I also thought that checking for SM3 support means I can send in any shader that compiles as SM3 :)

Crealysm game & engine development: http://www.crealysm.com

Looking for a passionate, disciplined and structured producer? PM me

Very interesting topic. I'm curious which SM3 GPUs do not actually support 4 render targets for MRT or a sensible number of texture units. Are these from more obscure manufacturers outside NVIDIA / AMD / Intel?

Are those actual customers who pay for the game, too?

I always struggle to understand why companies target China, because it's kind of well-known that they're only willing to pay a fraction of what everybody else is paying, if they don't pirate right away. Development and maintenance cost is the same, however. So I always wonder how this math can work out (but apparently it does).

You're allowing prejudice to blind you to reality.

China's games industry revenue last year was $13.8 billion (38% YoY growth). US 2012 was $14.8 billion and shrinking. I haven't seen 2013 US numbers yet, but it should be a growth year (not 38% of course). In any case, China spends almost as much as US on games. They'll spend more than us on games within a year or two, especially with bans being lifted. They also spend more on lower-end games than US since they have lower-end hardware. That's a really good thing for indie devs.

That said, it's probably not worth targeting SM2. You'd probably launch in US before localizing to other areas, and then only localize if you made money. If you make enough money, you don't worry about adding SM2 support if you deem it necessary. When developing, try to delay as much complexity as possible until after you make money. If you make money, people will tell you to add SM2 support if they want it anyways.

Actually the US grew last year, but it was thanks to digital sales, which basically nobody tracks because there isn't any way to do so. The main difference is that China loves free-to-play more than other regions do; less total disposable income on average means a $5 game (paid for however) is going to earn more in total in China than any $50-$60 game. The main trouble with China is getting past the censors. They're basically a mix of outright protectionist rackets and the random bizarre whims of whoever happens to be in charge (e.g. no time travel, etc.).

But it can be done, it's a big market. Again it matters who your target audience is. If you're going free to play, then you're trying to hit as many people as possible, and SM2.0 is a good idea, along with absolute minimal specs. As the entry price goes up for your game, so does the average available income for your target audience, and you'll need to hit that minimum spec less and less.

Very interesting topic. I'm curious which SM3 GPUs do not actually support 4 rendertargets for MRT or a sensible amount of texture units, are these some more obscure manufacturers outside NVIDIA / AMD / Intel?

AMD was notably one of the companies cheating on SM3. Their early cards supported vertex texture fetch with zero fetchable formats, and multiple render targets where "multiple" meant one. Later the same year (or early the next year, I don't remember) they brought out cards that did "proper" SM3.

The problem is, this is totally in accordance with the specification (both DX and OpenGL explicitly allow those cheat values), and you have no way of knowing in advance (not before you query, or before you try and fail). You can't simply rely on "it's SM3, it will work". There are only a few cards that won't work, but they exist, and they are entirely legitimate (actually they do work perfectly, just not the way you maybe expected).


Regarding SM3, what do you mean by having no guarantees that "it will run" on all SM3-supporting GPUs? (And having to check so many things, compared to SM4.)
I also thought that checking for SM3 support means I can send in any shader that compiles as SM3 :)

You have a lot of limits which have predefined minimum values, such as maximum number of vertex fetches and max number of combined fetches, max number of dependent fetches, number of instructions, etc etc etc.

Unfortunately, some of those minimum values are ridiculous (like zero).

Your shader may fail to build due to running against some limit even if it is perfectly valid. In SM3, the limits are not of such a kind that you can pretty much assume never to run into one. In SM4, the limits are such that unless you do something very unusual, you'll never notice that there is one -- you don't normally need more than 16 textures or more than 4 rendertargets, and tex fetches as well as instructions are unlimited. Texture sizes and buffer tex sizes are also way bigger than what most people will need in their lives.

Thanks.
Is there documentation, or do you know of an article/example, on what things to check to be "safe" with SM3? (For the things that are GPU dependent.)

Would you do that with the d3d caps structure, checking specific capabilities?


Would you do that with the d3d caps structure, checking specific capabilities?
I'm only programming in OpenGL myself, but this looks like it might be the correct structure.

However, it seems like what MS calls "feature level" is easier, faster, more straightforward. This appears to be available with a single call to GetFeatureLevel() on your device, returning a single value.

See here for more info. The table seems somewhat wrong, since there is only SM2 followed by SM4 and no SM3 at all, but what you'd want is what the table calls feature level 9_3 (which is the "real" SM3 -- you can tell from the number of render targets = 4 and texture size = 8192).

Dx11 has feature levels.
9 is SM2.
10 is SM4.
11 is SM5.
There is no feature level for SM3....

Dx9 has Caps, and some other APIs for checking supported formats. You can use the caps to see how many textures or MRTs you can use at once, etc.

For VTF, I forget the function name, but you need to query whether a particular texture format is usable from vertex shaders.
From memory, there might be a cap bit saying whether VTF is supported, but most SM3 cards still require you to use one specific texture format (likely RGBA_16f). Some cards may say they support VTF in the caps, but then return false for every format when asking if that format is VTF capable...

Some SM3 cards also support a bit of SM4-level functionality -- e.g. if you try and create a texture with the fourcc format INTZ (i.e. DWORD fmt = 'I'<<24|'N'<<16....), some cards will give you a D24S8 texture that's readable from pixel shaders like in Dx10/11...

Thanks, for now I'll just check whether the number of MRTs and used textures (samplers) is supported, using SM3 as today's baseline, because I'm "still using" Dx9. I already do the same to check a list of caps that are required for current functionality.

Later this year I'll start learning Dx11 (first separate from my engine, to gain knowledge), so later on I can decide what to require in my engine. This might be just SM4 and higher, to keep things simple. I have no plans for shipping; this is just learning, hobby, portfolio and that sort of thing. It might be wise to reduce complexity rather than having two code bases, one for Dx9 SM3 and one for Dx11 SM4 and higher (or worse, a third one for Dx11 using SM3, if that's even possible).


This topic is closed to new replies.
