Using Vertex Shaders instead of fixed function for entire engine a good idea?

Until now, my 2.5D game engine has been using the regular fixed-function pipeline for everything. I've been reading a bit about shaders and I like them quite a bit. In fact, I'm considering replacing the fixed-function rendering with .fx files. For users who don't care, it would all be the same "black box", but for those who want to get into shaders or modify how things look (and perhaps write some cool effects), shaders would allow this. Of course, it also opens the door for more expansion in the future.

I'm just wondering if there is anything I should be careful about. I realize that some older and integrated gfx cards don't have modern shader support, but for the time being I plan on doing only the most basic stuff, like transforming vertices and calculating lighting. Are there any other things that might cause problems? Are shaders slower than using the fixed function pipeline? Again, I'm pretty new to shaders, so I'm concerned there might be something I don't know about that could cause problems.

Thanks for any tips,

--Vic--
Don't FX files let you specify a fallback technique if shaders aren't supported?

Quote:
Are shaders slower than using the fixed function pipeline?

If they are forced to be emulated in software, then yes.

But I have heard that the latest generation of NVIDIA and ATI cards don't really have an FFP; they just translate FFP states into an optimized shader behind the scenes. (Can anyone confirm that?) In that case, a well-written shader would run just as fast as the FFP.
Quote:
Don't FX files let you specify a fallback technique if shaders aren't supported?

I believe they don't if no shaders are supported at all, but you can fall back to previous shader models, so that should not be much of a problem.

Also, I gather that the next DX version drops the whole FFP and does everything in the programmable pipeline. In that light, it's a good idea to make the move.

Illco
Vertex Shaders are a good move in my opinion. But if you have a fully working version in the FFP, then you do have to wonder if you'll get a return on your investment.

Hardware support is pretty good these days - some cards such as the integrated chipsets and GeForce 4MX series don't have hardware VShaders, but they can be emulated by the CPU. I've read that on some laptops, it's actually faster to let the CPU emulate them than let the hardware deal with it [oh]

As far as opening doors for "cool new effects" goes - that may be a bit optimistic, in the sense that you have to design the software side of your program to handle a completely arbitrary effect. For example, if you don't feed tangent/binormal/normal data into the shader, you rule out a whole class of effects. You may want to look at the "DirectX Standard Annotations and Semantics" idea.
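Just to illustrate what feeding that data means on the app side, here's a minimal C++/D3D9 sketch of a vertex declaration carrying the extra components. The layout and name are hypothetical - adjust the offsets and usages to your own vertex format:

```cpp
#include <d3d9.h>

// Hypothetical vertex layout for normal-mapping style effects:
// position, normal, tangent, binormal and one set of UVs, all in stream 0.
const D3DVERTEXELEMENT9 kBumpVertexDecl[] =
{
    { 0,  0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
    { 0, 12, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_NORMAL,   0 },
    { 0, 24, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TANGENT,  0 },
    { 0, 36, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_BINORMAL, 0 },
    { 0, 48, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0 },
    D3DDECL_END()
};

// Created once at load time, then set before drawing:
//   IDirect3DVertexDeclaration9* decl = NULL;
//   device->CreateVertexDeclaration(kBumpVertexDecl, &decl);
//   device->SetVertexDeclaration(decl);
```

If the engine never supplies TANGENT/BINORMAL data, any effect that needs it simply can't be fed - that's the "ruled out" class of effects.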

Quote:Don't FX files let you specify a fallback technique if shaders aren't supported?

Yup, that wouldn't be a bad way of implementing things in this case [smile]
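For instance, something along these lines - a minimal C++ sketch using the D3DX effect interface, where "lighting.fx" and its best-to-worst technique ordering are my assumptions (the Managed DX Effect class exposes the same functionality):

```cpp
#include <d3dx9.h>

// Sketch: load an effect and pick the best technique this device can run.
// Assumes lighting.fx declares its techniques in best-to-worst order,
// e.g. a vs_2_0 technique first, then vs_1_1, then a fixed-function one.
ID3DXEffect* effect = NULL;
D3DXCreateEffectFromFile(device, "lighting.fx",
                         NULL, NULL, 0, NULL, &effect, NULL);

// Passing NULL starts the search at the first technique in the file;
// D3DX returns the first one that validates on the current hardware.
D3DXHANDLE technique = NULL;
if (SUCCEEDED(effect->FindNextValidTechnique(NULL, &technique)))
    effect->SetTechnique(technique);
```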

Quote:I have heard that the latest generation of NVIDIA and ATI cards don't really have a FFP, they just translate FFP states into an optimized shader behind the scenes. (Can anyone confirm that?)

I was reading this interview recently, where the ATI guys state:
Quote:...all DX7 & DX8 fixed function operations (now shader operations)...


hth
Jack

Jack Hoxley [ Forum FAQ | Revised FAQ | MVP Profile | Developer Journal ]

Yeah, I also read the article about DX10 not having an FFP, which is another reason I thought I'd get familiar with shaders before it becomes a necessity.

As for the effects comment, I realize that if the vertex buffer fed to the shader doesn't carry certain data, that rules out some effects. Naturally, as shaders "grow" in the engine, I will give the user more freedom regarding what kind of vertices to use for their objects and so on. But for now, I'm just looking to make a gradual transition.

Also, how horrible of an assumption is it to make that the user has Vertex Shader Version 1.1? I don't know when the different versions were introduced or when they started to get picked up by the hardware manufacturers.

Thanks again,

--Vic--
Quote:Original post by Roof Top Pew Wee
Also, how horrible of an assumption is it to make that the user has Vertex Shader Version 1.1? I don't know when the different versions were introduced or when they started to get picked up by the hardware manufacturers.

VS 1.1 was first on the GeForce 3 and Radeon 8500, iirc. About four years ago, maybe?

You need to define your target audience before you can say whether it's a good or bad assumption - it'd be absolutely fine for an audience that plays games, but probably a bit risky if you're targeting the MS-Office and laptop arena...

Have a look at Valve's survey.

Jack

Jack Hoxley [ Forum FAQ | Revised FAQ | MVP Profile | Developer Journal ]

Quote:Original post by jollyjeffers
I've read that on some laptops, it's actually faster to let the CPU emulate them than let the hardware deal with it
In fact, a few of the Intel chipsets have PS hardware but not VS. So when you request hardware VP, you get software VP anyway, but in the driver instead of in D3D.
Quote:Original post by Promit
In fact, a few of the Intel chipsets have PS hardware but not VS. So when you request hardware VP, you get software VP anyway, but in the driver instead of in D3D.

I've used this chipset (915G) and it runs well with software vshaders. In fact, we don't use any FFP, just SW vshaders (vs1.1) for all DX7 parts. You could even use vs2.0 in SW and fixed-function pixel processing on DX7 parts - since the VS is emulated anyway, you can get some better loop performance (early-outs and so on) out of vs2.0.

In hindsight, I'd probably skip the whole shader model 1 route altogether and just use vs2.0 in SW and fixed-function pixel processing on all DX7/DX8 parts. The headaches of supporting the shader model 1 path are barely worth it.
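For anyone reading along, requesting software vertex processing is just a behaviour flag at device creation time. A C++ sketch, where "d3d", "hwnd" and the filled-in present parameters "pp" are assumed to exist already:

```cpp
// Sketch: HAL device with vertex shaders emulated on the CPU, while the
// GPU still handles rasterization and pixel work.
IDirect3DDevice9* device = NULL;
HRESULT hr = d3d->CreateDevice(
    D3DADAPTER_DEFAULT,
    D3DDEVTYPE_HAL,
    hwnd,
    D3DCREATE_SOFTWARE_VERTEXPROCESSING, // or D3DCREATE_MIXED_VERTEXPROCESSING
    &pp,
    &device);
```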
Interesting. Actually, the engine only uses ManagedDX, so I don't have to worry about older versions of DX.

--Vic--
One thing you could do is check the capabilities of the card when your engine starts up. That way you can fall back from one shader version to the next. Also, maybe give the user an option to choose which shader version they want to run.
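Something along these lines, maybe (C++ sketch; the Managed DX Caps class exposes the same VertexShaderVersion information):

```cpp
// Sketch: query shader support at startup and choose a rendering path.
// "d3d" is assumed to be an already-created IDirect3D9 interface.
D3DCAPS9 caps;
d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps);

if (caps.VertexShaderVersion >= D3DVS_VERSION(2, 0))
{
    // use the vs_2_0 techniques
}
else if (caps.VertexShaderVersion >= D3DVS_VERSION(1, 1))
{
    // use the vs_1_1 techniques
}
else
{
    // no hardware vertex shaders - use fixed function,
    // or run the shaders in software vertex processing
}
```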
