Florian22222

[GLSL] NVIDIA vs ATI shader problem



Hello!

 

I am trying to get my engine running on ATI/AMD (previously I only ran it on NVIDIA).

 

My shader uses diffuse lighting to render a sphere and a plane (both with the same material).

This works totally fine on NVIDIA graphics cards.

On ATI it compiles, but shows really weird results when executed.

The sphere appears to have some issue fetching its texture, and the plane does not appear at all.

I also tried compiling it with AMD's Shader Analyzer, and it compiled without errors or warnings.

 

Here is the vertex shader:

uniform mat4 gWVP;
uniform mat4 gNormalMatrix;
uniform mat4 gModelViewMatrix;

in vec3 inPosition;
in vec3 inNormal;
in vec2 inTexcoord;
in vec3 inTangent;

out vec2 vTexcoord;

//all of these are in eyespace
out vec3 vPosition;
out vec3 vNormal;
out vec3 vTangent;

void main()
{
	gl_Position = gWVP * vec4(inPosition,1.0);
	
	vNormal = normalize((gNormalMatrix * vec4(inNormal,0.0)).xyz);
	vTangent = normalize((gNormalMatrix * vec4(inTangent,0.0)).xyz);

	vPosition = (gModelViewMatrix * vec4(inPosition,1.0)).xyz;

	vTexcoord = inTexcoord;
}

and the fragment shader:

#version 130

const int LIGHT_TYPE_NONE = 0;
const int LIGHT_TYPE_DIRECTIONAL = 1;
const int LIGHT_TYPE_POINT = 2;

struct Light{
	int lightType;
	vec3 position;
	vec4 diffuse;
	float intensity;
	float constantAttenuation;
	float linearAttenuation;
	float quadraticAttenuation;
};

const int NUM_LIGHTS = 4;
uniform Light gLights[NUM_LIGHTS];
uniform vec4 gGlobalAmbient;

vec4 calculateDiffuse(Light light, vec4 surfaceColor, vec3 normal, vec3 lightDir)
{
	vec4 outColor = vec4(0.0);

	vec3 normalizedLightDir = normalize(lightDir);
	float NdotL = max(dot(normal,normalizedLightDir),0.0);

	if(light.lightType == LIGHT_TYPE_DIRECTIONAL)
	{
		if (NdotL > 0.0) {
			outColor += surfaceColor * light.diffuse * light.intensity * NdotL;
		}
	}else if(light.lightType == LIGHT_TYPE_POINT)
	{
		float dist = length(lightDir);
		if (NdotL > 0.0) {
			float attenuation = 1.0 / (light.constantAttenuation +
						light.linearAttenuation * dist +
						light.quadraticAttenuation * dist * dist);

			outColor += surfaceColor * light.diffuse * light.intensity * attenuation * NdotL;
		}
	}

	return outColor;
}

uniform sampler2D gMainTexture;

in vec2 vTexcoord;

in vec3 vPosition;
in vec3 vNormal;
in vec3 vTangent;

out vec4 outColor;

void main (void)  
{
	vec4 texDiffuse = texture(gMainTexture,vTexcoord);

	vec3 normal = normalize(vNormal);
	vec3 tangent = normalize(vTangent);
	vec3 bitangent = cross(normal, tangent);

	mat3 tangentSpaceMatrix = mat3(
		tangent.x, bitangent.x, normal.x,
		tangent.y, bitangent.y, normal.y,
		tangent.z, bitangent.z, normal.z
	);

	//ambient
	outColor = texDiffuse * gGlobalAmbient;

	for(int i = 0;i<NUM_LIGHTS;i++)
	{
		vec3 lightDir = vec3(0.0);
		if(gLights[i].lightType == LIGHT_TYPE_DIRECTIONAL)
		{
			lightDir = normalize(tangentSpaceMatrix * gLights[i].position);
		}else if(gLights[i].lightType == LIGHT_TYPE_POINT)
		{
			lightDir = tangentSpaceMatrix * (gLights[i].position - vPosition);
		}

		if(gLights[i].lightType != LIGHT_TYPE_NONE)
		{
			outColor += calculateDiffuse(gLights[i],texDiffuse,vec3(0.0,0.0,1.0),lightDir);
		}
	}
	
}    

I hope someone can point out the issue or help me find a way to debug it (since it's not a compiler error).

 

Thanks in advance!


You might try using the deprecated gl_FragColor if this is a 130 shader, or try explicitly declaring it as "out gl_FragColor;". If that doesn't work, try varying for your shader inputs; I've never had a problem on AMD or NV with the varying keyword for 130 shaders.
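
For reference, a stripped-down sketch of what that fragment shader I/O would look like (lighting omitted; gl_FragColor and varying are deprecated in #version 130 but still accepted there, so treat this as something to test against, not as a proper fix):

#version 130

uniform sampler2D gMainTexture;

// varying instead of in, as suggested above
varying vec2 vTexcoord;
varying vec3 vPosition;
varying vec3 vNormal;
varying vec3 vTangent;

void main()
{
	// write to the built-in gl_FragColor instead of a user-declared out variable
	gl_FragColor = texture(gMainTexture, vTexcoord);
}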


This is a very annoying thing when it comes to shaders :\ I remember in my early days that when I tested my application on an Nvidia card everything was OK, but I had problems on AMD. What solved my problems was changing doubles to floats (i.e. changing 0.0 to 0.0f).


I see you're using a for loop to dynamically index gLights.
With GLSL I've had problems where a shader worked fine on my desktop PC but didn't give proper results on my laptop.

 

My fix was to manually unroll the loop; very tedious, but it solved the problem.
So instead of indexing gLights[i] in a loop, you'd access gLights[0], gLights[1], etc. explicitly.
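
For the fragment shader posted above, the unrolled loop body would look roughly like this (just a sketch, reusing the existing calculateDiffuse, tangentSpaceMatrix and texDiffuse, and assuming NUM_LIGHTS stays at 4):

	// inside main(), replacing the for loop: one copy of the body per light,
	// so every gLights[] access uses a compile-time constant index
	vec3 lightDir0 = vec3(0.0);
	if(gLights[0].lightType == LIGHT_TYPE_DIRECTIONAL)
		lightDir0 = normalize(tangentSpaceMatrix * gLights[0].position);
	else if(gLights[0].lightType == LIGHT_TYPE_POINT)
		lightDir0 = tangentSpaceMatrix * (gLights[0].position - vPosition);

	if(gLights[0].lightType != LIGHT_TYPE_NONE)
		outColor += calculateDiffuse(gLights[0], texDiffuse, vec3(0.0,0.0,1.0), lightDir0);

	// ...then the same block repeated verbatim for gLights[1], gLights[2] and gLights[3]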

 

A general debugging tip, by the way, is to comment out complex calculations and replace them with constant values you know should work. Do this until some form of sensible graphical output appears, and work your way down from there until you pinpoint the issue.
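
For example, in the fragment shader above you could start from something like this and re-enable one stage at a time (purely a debugging sketch, not a fix):

	// Step 1: constant colour - confirms the draw call and the output variable work at all
	outColor = vec4(1.0, 0.0, 1.0, 1.0);

	// Step 2: visualise the texcoords - confirms the vertex attributes arrive correctly
	//outColor = vec4(vTexcoord, 0.0, 1.0);

	// Step 3: plain texture fetch - confirms the sampler binding
	//outColor = texture(gMainTexture, vTexcoord);

	// Step 4: visualise the normal - confirms gNormalMatrix and the normal attribute
	//outColor = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0);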

 

Hope this helps.

Fix the errors the AMD compiler reports; their GLSL implementation tends to be more correct/strict than NV's, which lets more Cg-like syntax through.


spreading rumours that AMD has bugs...

Naah, the rumours started because AMD's OpenGL implementation used to be really, really, REALLY broken. But that isn't the case anymore.
Probably, nowadays it's true that NV makes AMD look wrong.
But honestly, I despise what NV does on that front, because it means you can't rely on them (invalid code that NV happily runs doesn't just break on AMD; the Intel Mesa drivers on Linux, which are also quite good, will reject it too).


I tracked at least one of the issues down to this line in the vertex shader:

vNormal = normalize((gNormalMatrix * vec4(inNormal,0.0f)).xyz);

If I comment it out, the vertex positions are correct. As soon as I leave this line in, the plane is not rendered at all and the sphere's texcoords are wrong.

 

Any idea what might cause this?

 

EDIT: I changed all constant float values to have an f after them.

Edited by IceBreaker23


 

Naah, the rumours started because AMD's OpenGL implementation used to be really, really, REALLY broken. But that isn't the case anymore.
Probably, nowadays it's true that NV makes AMD look wrong.
But honestly, I despise what NV does on that front, because it means you can't rely on them (invalid code that NV happily runs doesn't just break on AMD; the Intel Mesa drivers on Linux, which are also quite good, will reject it too).

Is there a need to specify a GLSL version to compile shaders with? If there is a more advanced OpenGL implementation on a system, will deprecated lower-version GLSL code fail to compile?
