Shader help (HLSL using DX effects)

Started by beoch. 4 comments, last by helix 19 years, 8 months ago.
I'm still a newbie with HLSL and shaders, so I need some help. All I want to do is render some polygons to the screen. What's the bare minimum I need to do this? I'm trying to do most of my rendering in shaders instead of in the render function (FVF) like all my old programs.

I'm assuming that I take in a position (float4) and output a position (float4). I'm not worried at this point about textures or lighting (however, my vertices do have normals and texcoords -- do they need to be in the vertex shader parameters?). So with the position I'm passed in, I need to mul it by the world, view, and projection matrices, in that order, right?

What am I screwing up? It's not working: I get some white polys that are clearly not in the right view. They only appear at certain camera angles, and as the camera moves or turns they stretch and distort until they disappear.
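For reference, here's the stripped-down effect I'm working from (a minimal sketch; WorldViewProj is just the name I gave the combined matrix I set from the app, so I may well be misusing something):

float4x4 WorldViewProj; // set from the app as World * View * Projection

float4 VS(float4 pos : POSITION0) : POSITION
{
    // mul(vector, matrix) treats pos as a row vector, which matches
    // the row-major matrices that D3DX produces.
    return mul(pos, WorldViewProj);
}

float4 PS() : COLOR
{
    return float4(1.0f, 1.0f, 1.0f, 1.0f); // plain white for now
}

technique Render
{
    pass P0
    {
        VertexShader = compile vs_1_1 VS();
        PixelShader  = compile ps_1_1 PS();
    }
}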
Take a look at the first chapter of the first volume of Shader X2. It's freely available at ATI's developer site. It gives a good introduction to shaders using HLSL.


neneboricua

I read that entire article, and it was a good read. I figured out some things I was wondering about and learned some new things. But it didn't solve my problem, so I guess I'll have to take a look at my code some more. There are a few things I've been wondering about (I even asked about them in my original post, though not very clearly) that were not covered or were glossed over. I'll list them below:

1) (this is my biggest confusion) Regarding vertex and pixel shader input/output semantics, how do I know which semantic to use for what?
a) For example, how do I know if the data I am looking for will be in TEXCOORD, TEXCOORD4, or POSITION963? I assume the tangent data doesn't just magically appear in TANGENT3; it would have to be set up to go into that slot somehow. What I think is the case is that in my vertex definition, I have three floats, x, y, and z, specified as the position, and those end up in POSITION0. If I had some more data specified as a position, it would be in POSITION1. Same thing for texture coordinates, etc. Am I on the right track? What's the difference between POSITION and POSITION0? Does POSITION just default to POSITION0?
b) What's the limit for the numbers I tack onto the end of these semantics?
c) When I'm outputting data from my vertex shader, I've just been piggybacking things like the normal and the view position onto some COLOR or TEXCOORD semantics, and it works fine, but I would rather make it more intuitive by using POSITION1 or something like that. Is it that the pixel shader inputs can only be COLORn and TEXCOORDn?

2) With D3DX effects, what does the code defined in a sampler_state block mean?

So with the following code:

Texture tex;
sampler texSampler = sampler_state
{
    Texture   = (tex);
    MinFilter = Linear;
    MagFilter = Linear;
    MipFilter = Point;
    AddressU  = Clamp;
    AddressV  = Clamp;
    AddressW  = Wrap;
    MaxAnisotropy = 16;
};


a) What do MinFilter, MagFilter, and MipFilter mean/do?
b) What are all the possible settings for them, and what do they mean?
c) What is MaxAnisotropy?
d) I listed all the sampler attributes that I know about; are there more? What are they?

3) What exactly is the saturate intrinsic used for (it was mentioned a number of times in that article)? I know what it is -- saturate(x) clamps x to the range [0,1] -- but why would I want to do that? An example would be helpful.

I have more questions about HLSL but these are the biggest nagging ones (I'm tired of just guessing). I'm starting to get the hang of this stuff. Thanks!
Probably one of the best ways to answer all the shader questions you have would be to look into 2 things:

(1) Nvidia SDK 8.0 - Has *tons* of shaders - both complex and simple ones.

(2) FX Composer 1.5 - An awesome shader IDE (also, comes with more shader examples)

As far as semantics go, an FX Composer help page defines almost all (if not all) of them. Most of them are very self-explanatory. For example, WORLDVIEWPROJECTION is the world matrix multiplied by the view matrix multiplied by the projection matrix. You are free to make up your own semantics, but Microsoft has a list of suggested ones (Nvidia abides by these). Remember, you can use ID3DXEffect::GetParameterBySemantic() to retrieve a constant based on its semantic name.
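For example, declaring effect parameters with semantics in your .fx file looks like this (just a sketch; the variable names are made up):

float4x4 matWVP   : WORLDVIEWPROJECTION; // the app can fetch this with GetParameterBySemantic(NULL, "WORLDVIEWPROJECTION")
float4x4 matWorld : WORLD;               // the semantic says what to bind; the variable name doesn't matter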

Many of the semantics (TEXCOORD, for example) can have an optional number appended (TEXCOORD1). These numbers correspond to the UsageIndex byte of D3DVERTEXELEMENT9. IIRC, the limit on them is defined by the hardware (my 9800xt can have TEXCOORD0 through TEXCOORD7).

All of the sampler stuff is mirrored by IDirect3DDevice9::SetSamplerState(). All of the settings come from the D3DSAMPLERSTATETYPE enumerated type (for example, MinFilter corresponds to D3DSAMP_MINFILTER).
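To give you an idea, here's your sampler block again with the options annotated (the value lists are from the D3DSAMPLERSTATETYPE docs, so double-check them there):

Texture tex;
sampler texSampler = sampler_state
{
    Texture   = (tex);
    MinFilter = Linear;   // filter used when the texture is shrunk on screen: Point, Linear, or Anisotropic
    MagFilter = Linear;   // filter used when the texture is enlarged: Point, Linear, or Anisotropic
    MipFilter = Point;    // filter between mipmap levels: None (disables mipmapping), Point, or Linear
    AddressU  = Clamp;    // behavior outside [0,1]: Wrap, Mirror, Clamp, Border, or MirrorOnce
    AddressV  = Clamp;
    AddressW  = Wrap;     // only relevant for volume (3D) textures
    MaxAnisotropy = 16;   // sample-count cap when a filter is set to Anisotropic
    // There are a few more states, e.g. BorderColor (used with the Border
    // address mode) and MipMapLodBias; see D3DSAMPLERSTATETYPE for the full list.
};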
Dustin Franklin ( circlesoft :: KBase :: Mystic GD :: ApolloNL )
Quote:Original post by beoch
1) (this is my biggest confusion) Regarding vertex and pixel shader input/output semantics, how do I know which semantic to use for what?
a) For example, how do I know if the data I am looking for will be in TEXCOORD, TEXCOORD4, or POSITION963? I assume the tangent data doesn't just magically appear in TANGENT3; it would have to be set up to go into that slot somehow. What I think is the case is that in my vertex definition, I have three floats, x, y, and z, specified as the position, and those end up in POSITION0. If I had some more data specified as a position, it would be in POSITION1. Same thing for texture coordinates, etc. Am I on the right track? What's the difference between POSITION and POSITION0? Does POSITION just default to POSITION0?

In general, if you look up "Shader Semantics" in the SDK docs, you'll find a page that talks about all the input/output semantics of shaders.

As the developer, you should have information about the kind of data that your models contain. Your shaders are written expecting that the vertex stream will have certain components. Most models have at least a position, normal, and a set of 2D texture coordinates.
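For example, if your models carry a position, normal, 2D texture coordinate, and tangent, the vertex shader input could be declared like this (just a sketch; the struct and field names are whatever you like):

struct VS_INPUT
{
    float3 Position : POSITION0;  // fed from the POSITION element of your vertex declaration
    float3 Normal   : NORMAL0;    // from the NORMAL element
    float2 TexCoord : TEXCOORD0;  // from the first TEXCOORD element
    float3 Tangent  : TANGENT0;   // only valid if the stream actually contains tangents
};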

You're right in that tangent data doesn't just magically appear when you write a shader that uses TANGENT as a semantic. The tangent (and binormal) vectors associated with a vertex need to be part of the vertex stream. This is usually done either at model creation time or at load time. In the case of tangent and binormal vectors, you can call D3DXComputeTangent at load time if you're using .x files for your meshes. To call this function successfully, the mesh's vertex declaration needs to contain at least a tangent or binormal field. You can add one by cloning the mesh with ID3DXMesh::CloneMesh and an appropriate vertex declaration.

This is generally the case with all the semantics in shaders. Your application has to specifically make sure that the data required by the shaders is available.

As far as the POSITION semantic goes, it is normally reserved for the original position of the vertex coming into the shader. Notice that you can input multiple positions into your vertex shader but may only output a single POSITION. This is because the position output by the vertex shader is what's used to actually render the geometry on the screen.
Quote:Original post by beoch
c) When I'm outputting data from my vertex shader, I've just been piggybacking things like the normal and the view position onto some COLOR or TEXCOORD semantics, and it works fine, but I would rather make it more intuitive by using POSITION1 or something like that. Is it that the pixel shader inputs can only be COLORn and TEXCOORDn?

If you want to pass data from your vertex shader to your pixel shader, you'll usually want to use the TEXCOORD[n] semantics. Even if the information you're passing is really something like the position of the vertex in world space, you should still use a TEXCOORD semantic for it, because on more recent hardware (ps_1_3 and above, I believe) texture coordinates can hold any floating-point values and will be properly interpolated by the graphics card. Most of the data you pass to your pixel shader will go through either TEXCOORD semantics or constants.
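So a vertex shader output struct that piggybacks the world-space position and normal onto texture coordinates might look like this (again just a sketch):

struct VS_OUTPUT
{
    float4 Position : POSITION;   // clip-space position, consumed by the rasterizer
    float2 TexCoord : TEXCOORD0;  // real texture coordinates
    float3 WorldPos : TEXCOORD1;  // world-space position, piggybacked on a texcoord
    float3 Normal   : TEXCOORD2;  // normal, interpolated per pixel
};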

And yes, whenever you have a semantic without a number attached (i.e. TEXCOORD), it is the same as specifying 0 (i.e. TEXCOORD0).
Quote:Original post by beoch
3) What exactly is the saturate intrinsic used for (it was mentioned a number of times in that article)? I know what it is -- saturate(x) clamps x to the range [0,1] -- but why would I want to do that? An example would be helpful.

Well, one example is the computation of diffuse light by using the dot product. To calculate diffuse light, you need to compute the dot product of the surface normal and the light vector:

dot( Normal, LightVector )

The thing is that if the angle between these two vectors is greater than 90 degrees, the dot product comes out negative. This is usually not what you want, because a negative dot product will seem to take light away from your model. So you can use the saturate intrinsic to clamp the result to the range [0,1] and get more realistic lighting.

Another example is simply clamping the final color output from the pixel shader. Unless you're using some advanced lighting techniques, you'll usually want the color returned from your pixel shader to be in the range [0,1] because that is normally the only range of colors that can be displayed on your screen.
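Putting both uses together, a simple diffuse pixel shader might look like this (a sketch; LightDir and DiffuseColor are constants I'm making up, and the TEXCOORD inputs match the output struct above):

float3 LightDir;      // direction toward the light, normalized by the app
float4 DiffuseColor;  // material color

float4 PS(float2 uv       : TEXCOORD0,
          float3 worldPos : TEXCOORD1,
          float3 normal   : TEXCOORD2) : COLOR
{
    float3 n = normalize(normal); // re-normalize after interpolation

    // First use: clamp negative N.L to zero so light behind the
    // surface doesn't darken the model.
    float diffuse = saturate(dot(n, LightDir));

    // Second use: clamp the final color into the displayable [0,1] range.
    return saturate(DiffuseColor * diffuse);
}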

I think Dustin answered most of the questions related to the samplers so you should be good to go on that one.

Whew... long post. Hope all this helps,
neneboricua
I went back and looked over some documentation, and I think my first question is fully answered. It really makes a lot of sense once you figure it out. ;-) Unfortunately (or fortunately?) my code is already set up correctly, with the vertex stream defined appropriately. So I'm nearing a dead end as to why my polys are being rendered wrong.

I'll look at the help page tomorrow. I'm not too up on all the stuff like D3DSAMP_MINFILTER so that's why I don't recognize it.

Good description of the saturate intrinsic. That's pretty much what I was thinking. I guess it's just poorly named, IMO.

The more I learn about HLSL, the more I like it. The casting is really nice, swizzling is really nice, and the output parameters are nice just to name a few things. Now I just need more practice.
