Changing my game engine's method of shader use.

Hi,

The lifelong project marches on through yet more barren wastelands of absent knowledge where knowledge should be present. Hey ho, at least I'm alive, and none of this will matter in 100 years anyway, because I'll be dead - like the billions who've gone before me, whom I'm really no different from or better than whatsoever.

So without taking anything too seriously (you just can't with computers - or anything else that interfaces reality with dreams) I'll do the usual: post some descriptive yet disorganised stuff to set the scene, followed by a numbered list of questions summing up what I'm trying to ask. OK, go:

My engine. Not mine really; someone else's I had the sense to buy and start learning from, with a huge and nasty (for the intermediate level anyway) book that came with it. Got all the way through the stuff that encapsulates the API and also got it to run. Wow, never knew I had it in me. But it uses an old method of shader preparation, and old assembly-style code in the shader file itself. Here's a demo:

vs.1.1                // vertex shader model 1.1
dcl_position0 v0      // vertex position arrives in register v0
dcl_normal0 v3        // vertex normal in v3
dcl_texcoord0 v6      // texture coordinates in v6
m4x4 oPos, v0, c0     // transform position by the 4x4 matrix in c0-c3
mov oD0, c4           // output the diffuse colour held in constant c4
mov oT0, v6           // pass the texture coordinates straight through


Basic shader. A pretty obsolete method by now, I understand. It also uses quite a complex way of loading and assembling the shader from a file, which I won't go into here. Anyway, I was looking through a DirectX tutorial online and worked my way through the HLSL shader stuff there. Much easier to use: it simply calls a DX function named D3DXCreateEffectFromFile(....). You just need the path to an effect file ready before the call, which is then passed as a parameter to said function.
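
For anyone following along, here's a minimal sketch of how that call looks. The device pointer d3ddev and the file name demo.fx are my own assumptions, not from the tutorial:

LPD3DXEFFECT effect = NULL;
LPD3DXBUFFER errors = NULL;

HRESULT hr = D3DXCreateEffectFromFile(
    d3ddev,     // the Direct3D device
    "demo.fx",  // path to the effect file
    NULL, NULL, // no preprocessor defines, no include handler
    0,          // compile flags
    NULL,       // no effect pool
    &effect,    // receives the ID3DXEffect interface
    &errors);   // receives compiler error text, if any

if (FAILED(hr) && errors)
{
    // The error buffer is plain text - handy for spotting typos in the .fx file.
    MessageBoxA(NULL, (char*)errors->GetBufferPointer(), "Effect error", MB_OK);
    errors->Release();
}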

So from there I'm thinking, yeah, OK, that should be easy enough to change. Just go to the parts of the engine that do the loading and change them. Then go to the parts of the engine that load the CPU-calculated WorldViewProjection matrix into the shader and change them to match the current protocol. Obviously take care of some class-based scoping stuff to make sure everything can get access to everything. Fine.

But, as always, here's the catch: the method by which the DX tutorial writer handles vertex definitions and the way my engine's author handles them are different.

The engine's author uses FVF for the fixed function pipeline code path, and vertex declarations for the shader code path. Ok fine.

The DX tutorial author does not declare any kind of vertex definition ahead of firing up the shader - neither vertex declarations nor FVFs. Looking at the tutorial's HLSL code, it appears to me that the vertex layout is somehow handled when the vertices arrive at the GPU. I don't like this method, and it would screw up a lot of how my choice of engine works too. I'd either need a nasty hack to get around it or a complete rewrite of God alone knows how many inter-related functions. Bad.

In the tutorial code I also noticed that DirectX is fired up in the following way, with these parameters:

d3d->CreateDevice(D3DADAPTER_DEFAULT,
                  D3DDEVTYPE_HAL,
                  hWnd,
                  D3DCREATE_SOFTWARE_VERTEXPROCESSING, // this bit is bothering me
                  &d3dpp,
                  &d3ddev);


I don't like the behaviour flag D3DCREATE_SOFTWARE_VERTEXPROCESSING. This looks bad: I thought anything emulated in software was bad, and that's surely why the graphics card is there - to avoid that with hardware. Out of interest, here is the HLSL demo shader:

// Transform matrices, set by the application each frame.
float4x4 World;
float4x4 View;
float4x4 Projection;

struct VertexOut
{
    float4 Pos : POSITION;
    float4 Color : COLOR;
};

VertexOut VShader(float4 Pos : POSITION)
{
    VertexOut Vert = (VertexOut)0;
    float4x4 Transform;

    // Build the combined World*View*Projection matrix,
    // then transform the incoming position by it.
    Transform = mul(World, View);
    Transform = mul(Transform, Projection);
    Vert.Pos = mul(Pos, Transform);

    // Flat white for now.
    Vert.Color = float4(1, 1, 1, 1);

    return Vert;
}

technique FirstTechnique
{
    pass FirstPass
    {
        Lighting = FALSE;
        ZEnable = TRUE;

        VertexShader = compile vs_2_0 VShader();
    }
}


I think if you've followed this thread this far you can see where it's going, so I'll summarise with questions below:

1) What exactly do I need to do to inform the hardware of the type of vertex I'm using when using HLSL shaders? Can I use an FVF format, or is it vertex declarations only? Can I still use the function SetVertexDeclaration(....) to tell the hardware what type of vertex I'm using?

2) Following on from 1), is it possible to run HLSL stuff without defining a vertex format first, through either FVFs or VDs? Not that I'd want to anyway; I assume this on-the-fly idea is slower than pre-informing the hardware.

3) Is the HLSL method of using an effect pointer and the D3DXCreateEffectFromFile(....) function still the most flexible and widely accepted method of doing this? Is it ok for me to continue with it?

Other than that I'm OK with what's going on. I understand all the other basic shader stuff fine. I just need to use the SetMatrix(....) function instead of the SetVertexShaderConstant(....) function for sending stuff from the application to the shader, which I assume lives in VRAM on the card.
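
For my own notes, here's roughly how I picture the per-frame usage with the effect interface. The variable names (effect, d3ddev, world and so on) are my assumptions, but the matrix and technique names come from the demo shader above:

// Feed the three matrices the shader declares, then pick the technique.
effect->SetMatrix("World", &world);
effect->SetMatrix("View", &view);
effect->SetMatrix("Projection", &projection);
effect->SetTechnique("FirstTechnique");

UINT passes = 0;
effect->Begin(&passes, 0);
for (UINT i = 0; i < passes; ++i)
{
    effect->BeginPass(i);
    d3ddev->DrawPrimitive(D3DPT_TRIANGLELIST, 0, triangleCount);
    effect->EndPass();
}
effect->End();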

That's it for now. Pretty long topic, sorry :unsure:

I'd be very grateful for any replies. In fact, to anyone who takes the time to write a good reply, or just gives me some invaluable information, I can offer remuneration in the form of small fishing tackle items. Our family sells stuff like this online. Swivels, hooks, plastic products, pretty much whatever you want... for free :D

Thanks so much :):wink:
1. You can use either, but I'd strongly recommend that you just stick with vertex declarations. They're much more flexible, and IMO easier to use anyway.
2. You have to set a vertex declaration, since it allows the runtime to create a mapping between the layout of the data inside your vertex buffer and the inputs to your vertex shader. Otherwise it can't create those mappings, and there's no way for your vertex shader to get the data it needs. (There's a sketch after this list.)
3. The Effects framework is totally fine to use, and is pretty flexible. At some point you may want to start understanding a bit about how it works under the hood with regard to raw vertex/pixel shaders, shader constant registers, textures, and samplers, so that you can debug issues when you need to.
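
As a sketch of what point 2 looks like in practice (not your engine's actual code - the offsets assume a tightly packed position/normal/texcoord vertex, matching the inputs of the asm shader above):

D3DVERTEXELEMENT9 elements[] =
{
    { 0,  0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
    { 0, 12, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_NORMAL,   0 },
    { 0, 24, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0 },
    D3DDECL_END()
};

IDirect3DVertexDeclaration9* decl = NULL;
d3ddev->CreateVertexDeclaration(elements, &decl);

// Before drawing, tell the device which layout the stream uses:
d3ddev->SetVertexDeclaration(decl);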
That's an excellent reply. Would you like some fishing tackle? :D

That will save me an awful lot of trouble, as I can draw up a short plan of which way to go now and just start doing it. That's such a luxurious position to be in compared to the awful trial-and-error way I've had to do some things in the past (thank God that's over).

One last question is (which I forgot to add in my first post):

4) What is the most appropriate flag to use in the CreateDevice(....) function for the 4th parameter? I would have thought it would be D3DCREATE_HARDWARE_VERTEXPROCESSING, not D3DCREATE_SOFTWARE_VERTEXPROCESSING. I need to be careful here and make sure I get this bit right, as I'll probably never return to this bit once I've fixed it.

Thanks :)
Neither is really more appropriate than the other, at least not under all circumstances. Software VP will give you lower performance, but the trade-off is that it will work on (almost) anything and relaxes several limits. Hardware is more or less the opposite. In 2011 you don't really need to worry about whether or not your hardware supports hardware VP - well, not too much anyway. There are still some integrated Intel chips in business-class desktops, older laptops and low-end "multimedia PCs" that don't support hardware VP. If you want to run on those (and you may not want to) you'll need software.

You'll see a lot of D3D9 tutorials and samples using software, particularly older ones. The objective of these tutorials is to give you something that's going to work. It might not work as well as it could, but it will work.

What seems to be a standard approach is to create a hardware VP device first, and if that fails then create a software VP one. If that fails too, you've got a terminal error. Something like the sketch below.
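
A rough sketch of that fallback, reusing the parameter names from the tutorial snippet above (an illustration, not code from the engine or the tutorial):

HRESULT hr = d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hWnd,
                               D3DCREATE_HARDWARE_VERTEXPROCESSING,
                               &d3dpp, &d3ddev);
if (FAILED(hr))
{
    // Hardware VP not available - fall back to software VP.
    hr = d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hWnd,
                           D3DCREATE_SOFTWARE_VERTEXPROCESSING,
                           &d3dpp, &d3ddev);
}
if (FAILED(hr))
{
    // Neither mode worked; nothing left to try on this adapter.
    return hr;
}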

The other reason to use software VP - relaxing of limits - might be appropriate for some use cases these days, but in general terms it's getting really difficult to find kit that needs it.

There's also D3DCREATE_MIXED_VERTEXPROCESSING which lets you switch between software and hardware at runtime. I'd avoid it unless you have a specific need to do this kind of switching.


Well, that's excellent information, and very clearly explained too. Thanks very much - I can make a properly informed decision now :wink:
