Weird shader performance

Started by
9 comments, last by BradDaBug 18 years, 3 months ago
On my trusty nvidia card, this particular shader I'm using works great. But on my ATI card it runs really slowly. I think I've narrowed it down to this one particular line: fog = (gl_Fog.end - gl_FogFragCoord) * gl_Fog.scale; If I replace that line with fog = 0.5; or something like that the shader flies. What's going on? Edit: This is a GLSL fragment shader, btw.
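(For context: gl_Fog.scale is defined by GLSL as 1.0 / (gl_Fog.end - gl_Fog.start), so that line evaluates the standard linear fog factor. A minimal spelled-out equivalent, using an illustrative local name:)

// Equivalent linear fog factor with gl_Fog.scale expanded;
// gl_Fog.scale = 1.0 / (gl_Fog.end - gl_Fog.start).
float linearFog = (gl_Fog.end - gl_FogFragCoord) / (gl_Fog.end - gl_Fog.start);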
Isn't clipping done before the fragment shader runs? So if there aren't any fragments on screen the fragment shader should never run, right? Because even when there aren't any fragments on screen the shader still runs really slow.

Is it possible that it's actually the vertex shader that's slow, and that when I just set fog = 0.5, something back in the vertex shader is getting optimized away?
Modern cards do the Z test before the fragment shader runs, so you can use occlusion queries to reduce fill-rate problems. You can find a method for this in the book GPU Gems 2.
As for your question: ATI's OpenGL support may not be as good as NVIDIA's, since ATI has focused more heavily on DirectX.
Could you post the rest of the shader(s)?
Changing this one line might well lead the compiler to compile and optimise the shaders differently, resulting in a change in performance.
Vertex shader:
void main()
{
	vec4 diffuse;
	vec3 lightDir;
	float NdotL;

	gl_TexCoord[0] = gl_MultiTexCoord0;
	gl_Position = ftransform();
	gl_Normal = normalize(gl_NormalMatrix * gl_Normal);
	gl_FogFragCoord = gl_Position.z;

	lightDir = normalize(vec3(gl_LightSource[0].position));
	NdotL = max(dot(gl_Normal, lightDir), 0.0);

	diffuse = gl_LightSource[0].diffuse;
	//diffuse = gl_FrontMaterial.diffuse * gl_LightSource[0].diffuse;

	gl_FrontColor = NdotL * diffuse + gl_LightSource[0].ambient;
}


Fragment shader:
// This program accepts 4 textures, 3 of them for texturing and the 4th
// as an alpha map
uniform sampler2D texture0;
uniform sampler2D texture1;
uniform sampler2D texture2;
uniform sampler2D texture3;
uniform int numTextures;

void main()
{
	float fog;
	vec4 color;
	vec4 alphacolor;

	alphacolor = texture2D(texture3, gl_TexCoord[0].st / 64.0);

	//color = vec4(1.0, 0.0, 0.0, 1.0);
	//color = mix(texture2D(texture0, gl_TexCoord[0].st), texture2D(texture1, gl_TexCoord[0].st), alphacolor.r);
	color = texture2D(texture0, gl_TexCoord[0].st) * alphacolor.r +
		texture2D(texture1, gl_TexCoord[0].st) * alphacolor.g +
		texture2D(texture2, gl_TexCoord[0].st) * alphacolor.b;

	color *= gl_Color;

	fog = (gl_Fog.end - gl_FogFragCoord) * gl_Fog.scale;
	//fog = (gl_Fog.end - gl_FragCoord.z) * gl_Fog.scale;

	//gl_FragColor = mix(vec4(0.0, 1.0, 1.0, 1.0), color, fog);
	gl_FragColor = mix(gl_Fog.color, color, fog);
}
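One incidental note, not necessarily related to the slowdown: the fixed-function fog path clamps the fog factor to [0, 1] before blending, which the shader above does not. A minimal sketch of the last two lines with an explicit clamp added:

// Same linear fog factor, clamped to [0, 1] so fragments beyond
// gl_Fog.end or nearer than gl_Fog.start don't over/under-shoot the mix.
fog = clamp((gl_Fog.end - gl_FogFragCoord) * gl_Fog.scale, 0.0, 1.0);
gl_FragColor = mix(gl_Fog.color, color, fog);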


Edit: btw, the vertex shader compiles fine with my nvidia card, but my ATI card complains about the gl_Normal = normalize(gl_NormalMatrix * gl_Normal); line. When I comment that out the ATI card compiles it fine.
can you post all the log dumps as well.. both ATI and NV
The compile logs for all the shaders on both cards are empty. The linker log for the nvidia card is empty, and the ATI linker log says "Link successful. The GLSL vertex shader will run in hardware. The GLSL fragment shader will run in hardware."

Edit: With the gl_Normal = normalize(gl_NormalMatrix * gl_Normal); line left in the vertex shader, the compile log on the ATI card is "ERROR: 0:10 'assign': l-value required "gl_Normal" (can't modify an attribute) ERROR: 1 compilation errors. No code generated."
Even when you leave in the stuff that ATI's compiler complains about?
You beat my edit :)
indeed [grin]

Well, the ATI compiler has that right: attributes are read-only, so attempting to assign back to one is going to trip you up.
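For illustration, a minimal sketch of the offending lines rewritten so nothing is assigned to the attribute (the local name 'n' is just an example, not from the original post):

// Transform the normal into eye space and keep it in a local variable
// instead of writing back to the read-only gl_Normal attribute.
vec3 n = normalize(gl_NormalMatrix * gl_Normal);
NdotL = max(dot(n, lightDir), 0.0);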

