# ATI Graphics Glitch

## Recommended Posts

Hi! I've recently added normal mapping to my engine using vertex and pixel shaders in .vsh and .psh format. I thought I had it all working perfectly, as it ran without problems on my NVidia-based desktop computer. However, when I ran exactly the same code on my laptop with an ATI card, it produced a horrible graphics glitch. I tested the same code on multiple PCs: every machine with an NVidia card ran it without problems, but every machine with an ATI card showed the glitches.

I've uploaded some screenshots from both, which can be viewed at http://downtoearthgames.co.uk/page_1199482304953.html

I was wondering if anyone else has ever come across this problem and managed to fix it? The actual code for my shaders is shown below.

Vertex Shader:
```hlsl
float4x4 WorldViewProj;         //our world-view-projection matrix
float4x4 WorldInverseTranspose; //our inverse transpose matrix
float4x4 World;                 //our world matrix
float4 lightPos;                //our light position in object space
texture texture0;               //our texture
texture texture1;               //our normal map
float4 globalAmbient;

sampler2D texSampler0 : TEXUNIT0 = sampler_state
{
    Texture   = (texture0);
    MIPFILTER = LINEAR;
    MAGFILTER = LINEAR;
    MINFILTER = LINEAR;
};

sampler2D texSampler1 : TEXUNIT1 = sampler_state
{
    Texture   = (texture1);
    MIPFILTER = LINEAR;
    MAGFILTER = LINEAR;
    MINFILTER = LINEAR;
};

//application-to-vertex structure
struct a2v
{
    float4 position : POSITION0;
    float3 normal   : NORMAL;
    float2 tex0     : TEXCOORD0;
    float3 tangent  : TANGENT;
    float3 binormal : BINORMAL;
};

//vertex-to-pixel structure
struct v2p
{
    float4 position : POSITION0;
    float2 tex0     : TEXCOORD0;
    float2 tex1     : TEXCOORD1;
    float3 lightVec : TEXCOORD2;
    float  att      : TEXCOORD3;
    float4 ambient  : TEXCOORD4;
};

struct p2f
{
    float4 color : COLOR0;
};

void vs( in a2v IN, out v2p OUT )
{
    //transform the position to clip space
    OUT.position = mul(IN.position, WorldViewProj);

    //getting the position of the vertex in the world
    float4 posWorld = mul(IN.position, World);

    //getting the vertex -> light vector
    float3 light = normalize(lightPos - posWorld);

    //setting the Tangent, Binormal and Normal matrix
    float3x3 TBNMatrix = float3x3(IN.tangent, IN.binormal, IN.normal);

    //transform the light vector into tangent space
    OUT.lightVec = mul(TBNMatrix, light);

    //calculate the attenuation (note: with a constant of 0 here this always evaluates to 1)
    OUT.att = 1 / (1 + (0 * distance(lightPos.xyz, posWorld)));

    OUT.ambient = globalAmbient;

    OUT.tex0 = IN.tex0;
    OUT.tex1 = IN.tex0;
}
```

Pixel Shader:
```hlsl
texture texture0; //our texture
texture texture1; //our normal map

sampler2D texSampler0 : TEXUNIT0 = sampler_state
{
    Texture   = (texture0);
    MIPFILTER = LINEAR;
    MAGFILTER = LINEAR;
    MINFILTER = LINEAR;
};

sampler2D texSampler1 : TEXUNIT1 = sampler_state
{
    Texture   = (texture1);
    MIPFILTER = LINEAR;
    MAGFILTER = LINEAR;
    MINFILTER = LINEAR;
};

struct v2p
{
    float4 position : POSITION0;
    float2 tex0     : TEXCOORD0;
    float2 tex1     : TEXCOORD1;
    float3 lightVec : TEXCOORD2;
    float  att      : TEXCOORD3;
    float4 ambient  : TEXCOORD4;
};

struct p2f
{
    float4 color : COLOR0;
};

void ps( in v2p IN, out p2f OUT )
{
    float4 globalAmbient = IN.ambient;

    //sample the diffuse color
    float4 color = tex2D(texSampler0, IN.tex0);

    //uncompress the normal map from [0,1] to [-1,1]
    float3 normal = 2.0f * tex2D(texSampler1, IN.tex1).rgb - 1.0f;

    //normalize the interpolated light vector
    float3 light = normalize(IN.lightVec);

    //diffuse term plus ambient (float4: a scalar here would truncate the ambient color)
    float4 diffuse = saturate(dot(normal, light)) + globalAmbient;

    //combine the attenuation with the color
    OUT.color = IN.att * color * diffuse;
    //OUT.color = float4(normal, 1.0);
    //OUT.color = color * diffuse;
    //OUT.color = color;
    //OUT.color = float4(light, 1.0);
    //OUT.color = diffuse;
    OUT.color.w = 1;
    //OUT.color = globalAmbient;
}
```

I've tracked the problem down to this line: OUT.lightVec = mul(TBNMatrix, light); The ATI cards seem unable to carry out this calculation correctly. I've tried changing the way the TBNMatrix is calculated, but to no avail, and am now at a loss. Any help would be much appreciated, thanks!
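One thing that sometimes helps when different GPUs disagree on a tangent-space lighting calculation is making sure the basis vectors are normalized (and re-orthogonalized) before the TBN matrix is built, since exported or interpolated tangent data is not guaranteed to be unit-length and drivers may differ in how they tolerate that. As a sketch only, not a confirmed fix for this particular glitch, and reusing the names from the vertex shader above, the relevant section could be rewritten as:

```hlsl
// Normalize the basis vectors; exported tangent data may not be unit-length.
float3 n = normalize(IN.normal);
float3 t = normalize(IN.tangent);

// Re-orthogonalize the tangent against the normal (Gram-Schmidt),
// then rebuild the binormal so the basis is guaranteed orthonormal.
t = normalize(t - n * dot(n, t));
float3 b = cross(n, t);

// Rows are the tangent-space basis vectors, so mul(TBNMatrix, v)
// transforms v from world space into tangent space.
float3x3 TBNMatrix = float3x3(t, b, n);
OUT.lightVec = mul(TBNMatrix, normalize(lightPos.xyz - posWorld.xyz));
```

One other detail worth checking: lightPos is commented as being in object space, but it is subtracted from posWorld, which is in world space. If the light really is passed in object space, that mixes coordinate spaces, and that kind of inconsistency can easily look fine on one vendor's driver and wrong on another's.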

##### Share on other sites
The first thing I'd check is that the MinIndex and NumVertices parameters to DrawIndexedPrimitive() are correct. I seem to remember that NVidia cards let you get away with passing almost anything there, whereas ATI ones don't.

Also have you enabled the debug runtimes? Do they produce any error messages?

To double-check the shader you could try getting hold of the AMD GPU Shader Analyzer. That will show you what your HLSL is being compiled into.

##### Share on other sites

In answer to your question: yes, I have enabled the debug runtimes, and they didn't produce any error messages.

Sorry, I probably should have mentioned in the first post that the engine uses .X files to load meshes, so as far as I'm aware I'm not calling DrawIndexedPrimitive() directly.

It's also probably worth mentioning that even an .exe compiled on a machine with an NVidia card produces the same glitch when run on a machine with an ATI card. With the shaders disabled, the engine runs fine on any machine.

I'll take a look at the AMD GPU Shader Analyzer and see what I can come up with.

Cheers

##### Share on other sites
If you're seeing differences between GPU vendors then you really need to run via the Reference Rasterizer. It's incredibly slow, but it will tell you which piece of hardware is actually correct; just because the output appears as desired on the NVidia hardware doesn't guarantee that the NVidia hardware is the one behaving correctly.

hth
Jack
