BradDaBug

Weird shader performance



On my trusty NVIDIA card this particular shader works great, but on my ATI card it runs really slowly. I think I've narrowed it down to this one line:

fog = (gl_Fog.end - gl_FogFragCoord) * gl_Fog.scale;

If I replace that line with fog = 0.5; or something like that, the shader flies. What's going on?

Edit: This is a GLSL fragment shader, btw.
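If the cost really is in reading the built-in gl_Fog state, one quick experiment is to feed the same values in through plain uniforms instead. A minimal sketch, where fogEnd and fogScale are hypothetical uniforms the application would set to its GL_FOG_END value and to 1.0 / (end - start) (which is exactly the value gl_Fog.scale holds):

uniform float fogEnd;   // hypothetical uniform: set from the application to the GL_FOG_END value
uniform float fogScale; // hypothetical uniform: 1.0 / (fogEnd - fogStart), the same value as gl_Fog.scale

// ...then, in main(), the suspect line becomes:
fog = (fogEnd - gl_FogFragCoord) * fogScale;

If this version is fast, the slow path is the gl_Fog built-in state access itself rather than the arithmetic.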

Isn't clipping done before the fragment shader runs? So if there aren't any fragments on screen, the fragment shader should never run, right? But even when there aren't any fragments on screen, the shader still runs really slowly.

Is it possible that it's actually the vertex shader that's slow, and that when I just set fog = 0.5 something back in the vertex shader gets optimized away?
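One way to test that (a sketch of the experiment, not a confirmed diagnosis): keep gl_FogFragCoord live in the fragment shader but replace only the gl_Fog state reads with literal constants. If this still runs fast, the varying and the vertex-shader work feeding it aren't what's being optimized away:

// 100.0 and 0.01 are arbitrary stand-ins for gl_Fog.end and gl_Fog.scale;
// gl_FogFragCoord stays live, so the vertex shader's fog output can't be
// eliminated the way it can with a plain fog = 0.5;
fog = (100.0 - gl_FogFragCoord) * 0.01;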

Modern cards do the Z test before the fragment shader runs, so you can use occlusion queries to reduce the fill-rate problem; there's a method for this in the book GPU Gems 2.
As for your question: ATI's OpenGL support may not be as good as NVIDIA's, since ATI puts more of its effort into DirectX.

Could you post the rest of the shader(s)?
Changing this one line might well lead the compiler to compile and optimise the shaders differently, resulting in a change in performance.

Vertex shader:
void main()
{
    vec4 diffuse;
    vec3 lightDir;
    float NdotL;

    gl_TexCoord[0] = gl_MultiTexCoord0;

    gl_Position = ftransform();
    gl_Normal = normalize(gl_NormalMatrix * gl_Normal);

    gl_FogFragCoord = gl_Position.z;

    lightDir = normalize(vec3(gl_LightSource[0].position));
    NdotL = max(dot(gl_Normal, lightDir), 0.0);

    diffuse = gl_LightSource[0].diffuse;
    //diffuse = gl_FrontMaterial.diffuse * gl_LightSource[0].diffuse;

    gl_FrontColor = NdotL * diffuse + gl_LightSource[0].ambient;
}



Fragment shader:
// This program accepts 4 textures, 3 of them for texturing and the 4th
// as an alpha map

uniform sampler2D texture0;
uniform sampler2D texture1;
uniform sampler2D texture2;
uniform sampler2D texture3;

uniform int numTextures;

void main()
{
    float fog;
    vec4 color;
    vec4 alphacolor;

    alphacolor = texture2D(texture3, gl_TexCoord[0].st / 64.0);

    //color = vec4(1.0, 0.0, 0.0, 1.0);
    //color = mix(texture2D(texture0, gl_TexCoord[0].st), texture2D(texture1, gl_TexCoord[0].st), alphacolor.r);

    color = texture2D(texture0, gl_TexCoord[0].st) * alphacolor.r +
            texture2D(texture1, gl_TexCoord[0].st) * alphacolor.g +
            texture2D(texture2, gl_TexCoord[0].st) * alphacolor.b;

    color *= gl_Color;

    fog = (gl_Fog.end - gl_FogFragCoord) * gl_Fog.scale;
    //fog = (gl_Fog.end - gl_FragCoord.z) * gl_Fog.scale;

    //gl_FragColor = mix(vec4(0.0, 1.0, 1.0, 1.0), color, fog);
    gl_FragColor = mix(gl_Fog.color, color, fog);
}
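A side note on the fog math (an observation, not the cause of the slowdown): fixed-function fog clamps the linear fog factor to [0, 1], but this shader doesn't, so mix() will extrapolate for fragments beyond gl_Fog.end or closer than gl_Fog.start. Clamping restores the fixed-function behaviour:

// Clamp the linear fog factor to [0, 1], as fixed-function fog does,
// so mix() never extrapolates outside the fog-color-to-color range:
fog = clamp((gl_Fog.end - gl_FogFragCoord) * gl_Fog.scale, 0.0, 1.0);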



Edit: BTW, the vertex shader compiles fine with my NVIDIA card, but my ATI card complains about the gl_Normal = normalize(gl_NormalMatrix * gl_Normal); line. When I comment that out, the ATI card compiles it fine.

The compile logs for all the shaders on both cards are empty. The linker log for the NVIDIA card is empty, and the ATI linker log says "Link successful. The GLSL vertex shader will run in hardware. The GLSL fragment shader will run in hardware."

Edit: With the gl_Normal = normalize(gl_NormalMatrix * gl_Normal); line left in the vertex shader, the compile log on the ATI card is "ERROR: 0:10 'assign': l-value required "gl_Normal" (can't modify an attribute) ERROR: 1 compilation errors. No code generated."
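That error is the key difference: gl_Normal is a read-only vertex attribute, so assigning to it is invalid GLSL, and ATI's compiler rejects it where NVIDIA's lets it slide. The usual fix is to write the transformed normal into a local variable; a sketch of the corrected vertex shader:

void main()
{
    vec4 diffuse;
    vec3 lightDir;
    vec3 normal;    // local copy; gl_Normal itself is a read-only attribute
    float NdotL;

    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = ftransform();
    gl_FogFragCoord = gl_Position.z;

    // Transform the normal into eye space without writing to the attribute:
    normal = normalize(gl_NormalMatrix * gl_Normal);

    lightDir = normalize(vec3(gl_LightSource[0].position));
    NdotL = max(dot(normal, lightDir), 0.0);

    diffuse = gl_LightSource[0].diffuse;
    gl_FrontColor = NdotL * diffuse + gl_LightSource[0].ambient;
}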
