[D3D9/HLSL] Simple diffuse light shader...

I'm trying to achieve a simple diffuse lighting effect with an HLSL vertex and pixel shader. My shader code is pretty much taken from a Gamasutra article, just slightly modified; take a look at this .fx file:

float4x4 matWorldViewProj;
float4x4 matWorld;
float4 vecLightDir;

struct VS_OUTPUT
{
    float4 Pos : POSITION;
    float2 UV : TEXCOORD0;
    float3 Light : TEXCOORD1;
    float3 Norm : TEXCOORD2;
};

VS_OUTPUT VS(float4 Pos : POSITION, float2 UV : TEXCOORD0, float3 Normal : NORMAL)
{
    VS_OUTPUT Out = (VS_OUTPUT)0;
    Out.Pos = mul(Pos, matWorldViewProj); // transform Position
    Out.UV = UV; // pass UV coordinates on
    Out.Light = normalize(vecLightDir); // output light vector
    Out.Norm = normalize(mul(Normal, matWorld)); // transform Normal and normalize it
    return Out;
}

float4 PS(float2 UV : TEXCOORD0, float3 Light : TEXCOORD1, float3 Norm : TEXCOORD2, sampler2D tex0) : COLOR
{
    float4 diffuse = tex2D ( tex0, UV );
    float4 ambient = {0.05, 0.05, 0.075, 1.0};
    return ambient + diffuse * saturate(dot(Light, Norm));
}

technique EntryPoint
{
    pass SinglePass
    {
        VertexShader = compile vs_2_0 VS ( );
        PixelShader = compile ps_2_0 PS ( );
    }
}


As you can see, I pretty much only added texture mapping (by the way, this didn't make a difference: the original code from the article and my modified version yield the same lighting results/problems). My drawing code in the application looks like this:

void Render ( )
{
	unsigned int num_passes;

	// clear the back- and z-buffer
	device->Clear ( 0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, D3DCOLOR_XRGB ( clear_color.r, clear_color.g, clear_color.b ), 1.0f, 0 );

	effect->Begin ( &num_passes, 0 );

	for ( unsigned int pass = 0; pass < num_passes; pass++ )
	// loop through all passes
	{
		effect->BeginPass ( pass );

		DWORD old_fvf = NULL;
	
		// get current flexible vertex format
		device->GetFVF ( &old_fvf );

		// set format for our models
		device->SetFVF ( D3DFVF_XYZ | D3DFVF_NORMAL | D3DFVF_TEX2 );

		std::vector<Model*>::iterator i;
		for ( i = model.begin ( ); i < model.end ( ); i++ )
		// loop through all models
		{
			// apply the matrix transformations for the model
			(*i)->ApplyMatrixTransformations ( device );
				
			// now get combined matrix (ie. the world-view-projection matrix)
			D3DXMATRIX combined_matrix = GetCombinedMatrix ( );

			effect->SetMatrix ( "matWorldViewProj", &combined_matrix );

			D3DXMATRIX world_matrix;

			// get world matrix
			device->GetTransform ( D3DTS_WORLD, &world_matrix );

			effect->SetMatrix ( "matWorld", &world_matrix );

			D3DXVECTOR4 light_vector ( 0.0f, 0.0f, 1.0f, 0.0f );

			effect->SetVector ( "vecLightDir", &light_vector );

			// make sure all values send to the shader are updated
			effect->CommitChanges ( );

			// render model
			(*i)->Render ( device );
		}

		// reset to old format
		device->SetFVF ( old_fvf );

		effect->EndPass ( );
	}

	effect->End ( );
}


As you can see, at the moment I just hardcode the light vector, so I should get a simple directional light. What I get, however, is relatively weird: these are two shots of my 3-test-cubes model. If nothing is wrong with the shader (I'd need some confirmation there), then it has to be the normals, I guess. I calculate them using the lib3ds functionality, like this:

for ( int i = 0; i < file->nmeshes; i++ )
// loop through all meshes
{
	// one normal (3 floats) per face corner, ie. 3 * nfaces entries
	float ( *normals )[3] = ( float (*)[3] ) malloc ( 3 * 3 * sizeof ( float ) * file->meshes[i]->nfaces );

	// calculate the normals for all vertices in the mesh
	lib3ds_mesh_calculate_vertex_normals ( file->meshes[i], normals );
}
Then, when I create my vertex buffer, I retrieve the proper normals from the "normals" array and add them to the vertex buffer, roughly as sketched below. I also debug-display the normals from that array while loading the model and they look correct to me.
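The copy into the vertex buffer looks something like this (the Vertex layout and the helper are simplified sketches of what I actually do, so don't take the names literally):

// vertex layout matching D3DFVF_XYZ | D3DFVF_NORMAL | D3DFVF_TEX1
struct Vertex
{
	float x, y, z;    // position
	float nx, ny, nz; // normal
	float u, v;       // texture coordinates
};

// copy one normal from the lib3ds "normals" array (one float[3]
// per face corner) into a vertex that goes into the vertex buffer
void CopyNormal ( Vertex& out, const float ( *normals )[3], int corner )
{
	out.nx = normals[corner][0];
	out.ny = normals[corner][1];
	out.nz = normals[corner][2];
}

The debug output of those normals for one of the cubes (ie. one submesh) looks like this: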

Mesh 0 includes 26 vertices...

Format:
i:  x, y, z               / u, v / nx, ny, nz

0: -9.999, -9.999, 0.001 / 1, 1 / 0, 0, -1
1: 10.001, -9.999, 0.001 / 0, 1 / 0, 0, -1
2: -9.999, 10.001, 0.001 / 1, 0 / 0, 0, -1
3: 10.001, 10.001, 0.001 / 0, 0 / 0, 0, -1
4: -9.999, -9.999, 20.001 / 0, 1 / 0, 0, -1
5: 10.001, -9.999, 20.001 / 1, 1 / 0, 0, -1
6: -9.999, 10.001, 20.001 / 0, 0 / 0, 0, 1
7: 10.001, 10.001, 20.001 / 1, 0 / 0, 0, 1
8: -9.999, -9.999, 0.001 / 0, 1 / 0, 0, 1
9: 10.001, -9.999, 0.001 / 1, 1 / 0, 0, 1
10: 10.001, -9.999, 20.001 / 1, 0 / 0, 0, 1
11: 10.001, -9.999, 20.001 / 1, 0 / 0, 0, 1
12: -9.999, -9.999, 20.001 / 0, 0 / 0, -1, 0
13: -9.999, -9.999, 0.001 / 0, 1 / 0, -1, 0
14: 10.001, 10.001, 0.001 / 1, 1 / 0, -1, 0
15: 10.001, -9.999, 20.001 / 0, 0 / 0, -1, 0
16: 10.001, 10.001, 0.001 / 0, 1 / 0, -1, 0
17: -9.999, 10.001, 0.001 / 1, 1 / 0, -1, 0
18: -9.999, 10.001, 20.001 / 1, 0 / 1, 0, 0
19: -9.999, 10.001, 20.001 / 1, 0 / 1, 0, 0
20: 10.001, 10.001, 20.001 / 0, 0 / 1, 0, 0
21: 10.001, 10.001, 0.001 / 0, 1 / 1, 0, 0
22: -9.999, 10.001, 0.001 / 0, 1 / 1, 0, 0
23: -9.999, -9.999, 20.001 / 1, 0 / 1, 0, 0
24: -9.999, -9.999, 20.001 / 1, 0 / 0, 1, 0
25: -9.999, 10.001, 0.001 / 0, 1 / 0, 1, 0
As I said, looks fine to me. So what could be wrong? Any help is appreciated, as always, and thanks for your time.
Shouldn't you be using D3DFVF_TEX1 instead of D3DFVF_TEX2?

EDIT: I've also just noticed you are normalising your light vector and normals in the vertex shader. This should really be done in the pixel shader, although I don't think it's going to fix your output.
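As a rough sketch of what I mean (assuming tex0 is declared as a sampler at file scope rather than passed as a parameter, and the vertex shader just passes the vectors through un-normalized):

// re-normalize the interpolated vectors per pixel instead of per vertex
float4 PS ( float2 UV : TEXCOORD0, float3 Light : TEXCOORD1, float3 Norm : TEXCOORD2 ) : COLOR
{
    float4 diffuse = tex2D ( tex0, UV );
    float4 ambient = { 0.05, 0.05, 0.075, 1.0 };

    float3 L = normalize ( Light );
    float3 N = normalize ( Norm );

    return ambient + diffuse * saturate ( dot ( L, N ) );
}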

[Edited by - adt7 on August 19, 2009 9:12:14 AM]
Did that, dunno how that happened.

Doesn't change anything tho. :(

Further ideas?
Hm. Maybe you are accidentally translating your normal. Does this make a difference?
Out.Norm = normalize(mul(Normal, (float3x3)matWorld));
Thanks for the reply, unfortunately it doesn't make a difference either.
When you are done making your mesh (vertex buffer), call ComputeNormals on it. Make sure the vertex declaration of the vertex buffer is accurate.
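If you go through an ID3DXMesh, it would be something along these lines (the helper name is mine and I'm going from memory, so double-check the exact call):

// recompute per-vertex normals for a D3DX mesh; the mesh's FVF or
// vertex declaration must already contain a normal element
HRESULT RecomputeNormals ( LPD3DXMESH mesh )
{
	// passing NULL for the adjacency lets D3DX derive it from the mesh
	return D3DXComputeNormals ( mesh, NULL );
}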

Some off the wall questions, does your video card handle Shader 2.0? It doesn't look like you're doing anything that requires shader 2, so compile to shader 1 just to see. If still there, go back to 2. Just a thought.

Are you on a nVidia, ATI or Intel card? I know that when declaring semantics and such, things get funky between the 3 cards... nVidia is generally more intelligent when handling shaders (IMO, but that's a different thread altogether).

Also, try the shader in FX Composer. Does it look correct? If it does, the issue is in the app code. If it does not, the issue is in the shader.

Just trying to give you some new angles of approach.
Enoch Dagor, Lead Developer, Dark Sky Entertainment, Beyond Protocol
One other thing...

I've never passed a sampler as a parameter before. Try declaring it within the shader, something like this:

texture diffuseTexture : Diffuse
<
    string ResourceName = "default_color.dds";
>;

sampler TextureSampler = sampler_state
{
    texture = <diffuseTexture>;
    AddressU = CLAMP;
    AddressV = CLAMP;
    AddressW = CLAMP;
    MIPFILTER = LINEAR;
    MINFILTER = LINEAR;
    MAGFILTER = LINEAR;
};

Then remove the sampler param and replace
float4 diffuse = tex2D ( tex0, UV );

with
float4 diffuse = tex2D ( TextureSampler, UV );

Other than that, the shader runs fine.
Enoch Dagor, Lead Developer, Dark Sky Entertainment, Beyond Protocol
Quote:Original post by EnochDagor
When you are done making your mesh (vertex buffer), call ComputeNormals on it.


How exactly would I do that? Call ComputeNormals on what? I couldn't find much about this; MSDN only mentions it as a function for .x meshes, which I don't use.

Quote:Original post by EnochDagor
Make sure the vertex declaration of the vertex buffer is accurate.


Well, as you can see, before rendering the model I call:

device->SetFVF ( D3DFVF_XYZ | D3DFVF_NORMAL | D3DFVF_TEX1 );


And when I create the vertex buffer for a model, I call that like this:

// create the vertex buffer
device->CreateVertexBuffer ( sizeof ( Vertex ) * total_vertices, 0, D3DFVF_XYZ | D3DFVF_NORMAL | D3DFVF_TEX1, D3DPOOL_MANAGED, &vertex_buffer, NULL );


Same FVF, is that what you mean by accurate?
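For reference, if you mean an actual vertex declaration rather than the FVF, I think the equivalent would be something like this (just a sketch; device is the same IDirect3DDevice9 as in my Render code):

// vertex declaration equivalent to D3DFVF_XYZ | D3DFVF_NORMAL | D3DFVF_TEX1
D3DVERTEXELEMENT9 elements[] =
{
	{ 0,  0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
	{ 0, 12, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_NORMAL,   0 },
	{ 0, 24, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0 },
	D3DDECL_END ( )
};

IDirect3DVertexDeclaration9* declaration = NULL;
device->CreateVertexDeclaration ( elements, &declaration );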


Quote:Original post by EnochDagor
Some off the wall questions, does your video card handle Shader 2.0? It doesn't look like you're doing anything that requires shader 2, so compile to shader 1 just to see. If still there, go back to 2. Just a thought.

Are you on a nVidia, ATI or Intel card? I know that when declaring semantics and such, things get funky between the 3 cards... nVidia is generally more intelligent when handling shaders (IMO, but that's a different thread altogether).


Running an NVidia GeForce GTX 260, so shader model 2 isn't a problem.


About the rest: I can't really have the texture name in the shader for my purposes, and the texture mapping does work absolutely fine, so to be honest I'm not convinced that this is the cause of my problem. I don't want to sound disrespectful, and I'm thankful for your time, effort and the good points you make, but I just don't see how that could be related to my normal/lighting problem.
change
device->SetFVF ( D3DFVF_XYZ | D3DFVF_NORMAL | D3DFVF_TEX2 );

to
device->SetFVF ( D3DFVF_XYZ | D3DFVF_NORMAL | D3DFVF_TEX1 );
Enoch Dagor, Lead Developer, Dark Sky Entertainment, Beyond Protocol
Dang, that was from an old clipboard or something; it already is TEX1, I don't use TEX2 anywhere in the project. Sorry for the confusion.

