GLSL texture2D problem.

2 comments, last by _the_phantom_ 18 years, 9 months ago
I've asked this elsewhere, but no one has been able to help me. I have a fragment shader:

uniform sampler2D bTexture, tTexture, lTexture;
float maxC;
vec4 tColor1, tColor2, lColor;

void main(void)
{
    tColor1 = texture2D(bTexture, vec2(gl_TexCoord[0]));
    tColor2 = texture2D(tTexture, vec2(gl_TexCoord[1]));
    lColor  = texture2D(lTexture, vec2(gl_TexCoord[2])) + (gl_Color - 0.5);
    gl_FragColor = vec4(vec3(mix(tColor1, tColor2, tColor2.a)), tColor1.a + tColor2.a) * lColor;
}

This code works perfectly on GeForce cards. The problem is that it runs slowly on Radeon cards. If I remove the calls to texture2D, the problem goes away, so I think the problem lies there, but I'm not sure what's causing it.
You shouldn't read the UV coordinates via gl_TexCoord in the fragment shader.
Declare a varying to carry the UV coordinates from the vertex shader, and read that varying in the fragment shader instead.
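A minimal sketch of that approach, for a single texture unit (the varying name uv and the use of gl_MultiTexCoord0 are my assumptions, not taken from the original code):

```glsl
// ---- vertex shader ----
varying vec2 uv;  // hypothetical name; declare one per texture unit you need

void main(void)
{
    uv = gl_MultiTexCoord0.xy;  // forward the UVs to the fragment stage
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// ---- fragment shader ----
uniform sampler2D bTexture;
varying vec2 uv;  // interpolated per-fragment by the rasterizer

void main(void)
{
    gl_FragColor = texture2D(bTexture, uv);
}
```

The same pattern extends to the three samplers in the original shader by declaring one varying per set of texture coordinates.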
Noted, but that doesn't solve the problem. If I do something like

tColor1=texture2D(bTexture,vec2(0.0,0.0));

The slowdown still occurs.
What texture format are you using for the texture you are sampling from?
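(Asking because an internal format the hardware doesn't natively support can silently push the driver onto a software path. A hedged host-side sketch, assuming the fixed-function OpenGL C API; the function name, width/height, and pixel pointer are placeholders, not from the original code:)

```c
#include <GL/gl.h>

/* Request an explicitly sized internal format (GL_RGBA8) rather than a
 * generic one such as GL_RGBA, so the driver is less likely to pick a
 * format it has to emulate in software. */
void upload_texture(GLuint tex, int width, int height, const void *pixels)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    /* Non-mipmapped filtering; without matching filter state some
     * drivers treat the texture as incomplete and fall back to a
     * slow path. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
```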

This topic is closed to new replies.
