GLSL - Using texture samplers in vertex shader

Try as I might, I can't seem to get a vertex shader to sample values from a texture. I'm using an FX 9200, which reports 4 possible textures in a vertex shader, so it should be possible. My program gets the following error message (after compiling the shader source):

Vertex info
-----------
<stdlib>(9542) : error C1115: unable to find compatible overloaded function "tex2D"
<stdlib>(9542) : error C1016: expression type incompatible with function return type
Vertex shader:


// Uniform (global) vars
uniform vec3 windDirection;
uniform sampler2D windStrengthSampler;

// Per-vertex attributes
attribute float windInfluence;

void main()
{
	// Pass through all texture coords
	gl_TexCoord[0] = gl_MultiTexCoord0 * gl_TextureMatrix[0];
	gl_TexCoord[1] = gl_MultiTexCoord1 * gl_TextureMatrix[1];

//	vec4 colour = gl_Color;
	vec4 colour = vec4(1.0, 1.0, 1.0, 1.0);
	gl_FrontColor = colour;
	gl_BackColor = colour;

	// Vertex position
	vec4 pos = gl_Vertex;

	// Wind displacement
	vec2 texCoord = pos.xy / 32.0;
	vec4 windStrengthVec = texture2D(windStrengthSampler, texCoord);

//	vec4 windStrengthVec = texture1D(windStrengthSampler, 1.0);
//	vec4 windStrengthVec = vec4(1, 0, 0, 0);

	vec3 displacement = windStrengthVec.r * windDirection * windInfluence;

	pos.xyz += displacement;

	gl_Position = gl_ModelViewProjectionMatrix * pos;
}
I'm at something of a loss. The glslValidate program from 3dlabs tells me everything in my source is fine.
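
For reference, pulling that info log out looks roughly like this with the GL_ARB_shader_objects entry points (a minimal sketch; vsSource is assumed to hold the shader text above):

/* Compile the vertex shader and dump the driver's info log.
   Assumes the ARB_shader_objects entry points are loaded and
   vsSource points at the GLSL source text. */
GLhandleARB vs = glCreateShaderObjectARB(GL_VERTEX_SHADER_ARB);
glShaderSourceARB(vs, 1, &vsSource, NULL);
glCompileShaderARB(vs);

GLint compiled = 0;
glGetObjectParameterivARB(vs, GL_OBJECT_COMPILE_STATUS_ARB, &compiled);
if (!compiled)
{
    char log[4096];
    glGetInfoLogARB(vs, sizeof(log), NULL, log);
    fprintf(stderr, "Vertex shader:\n%s\n", log);
}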
I'm almost 100% sure that only the GF6800 and X800 support texture reads in a vertex shader, so that's probably the problem.
Quote: Original post by Anonymous Poster
I'm almost 100% sure that only the GF6800 and X800 support texture reads in a vertex shader, so that's probably the problem.


AFAIK, only the NV4x (GF6 line) supports Vertex Shader Texture access.
-* So many things to do, so little time to spend. *-
So why does GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS_ARB return 4? The Orange Book lists this as having a minimum of 0, so if it weren't supported it should return 0, not 4...
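
The query itself is just this, for anyone who wants to check their own card (a sketch):

/* Ask the driver how many texture image units the vertex shader
   can use; 0 would mean vertex texturing is unsupported. */
GLint maxVertexTextures = 0;
glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS_ARB, &maxVertexTextures);
printf("Vertex texture image units: %d\n", maxVertexTextures);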
No idea; however, NV hasn't yet implemented it in their GLSL compiler, so it won't work anyway. (I'm also sure that only the NV40-class stuff can do texture reads in a vertex shader, because they made a lot of noise about it being possible.)
Anyone got anything concrete to back this up (other than vague handwaving)?

Looks like I'll have to figure out how to do without a texture, which is going to be more than a tad tricky to do with vertex attributes.
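
If it comes to that, the sketch below is the sort of thing I mean: drop the sampler, have the app sample the wind map on the CPU each frame, and feed the result in through a (hypothetical) windStrength attribute:

// Uniform (global) vars
uniform vec3 windDirection;

// Per-vertex attributes -- windStrength is hypothetical: the app
// samples the wind map on the CPU and passes the result per vertex
attribute float windInfluence;
attribute float windStrength;

void main()
{
	vec4 pos = gl_Vertex;

	// Same displacement as before, minus the texture lookup
	pos.xyz += windStrength * windDirection * windInfluence;

	gl_Position = gl_ModelViewProjectionMatrix * pos;
}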
The FX line does not support texture lookups in the vertex shader. I also believe that nVidia and ATI are both using the "reference" version of the GLSL compiler, which doesn't support any vendor-specific instructions or optimizations. (It probably just converts the code to ARBVP1 assembler.)

I recently tried writing a complex shader in GLSL, and ran into program size limitations on both the nVidia and ATI cards. I converted it to Cg, and not only did it allow me to make my shaders nearly twice as long, but it also ran significantly faster on both cards.

Although I feel that GLSL is a better language than Cg, it probably won't be worth using until the next batch of cards comes out with drivers that fully support OpenGL 2.0.

Sean
Quote: Original post by OrangyTang
Anyone got anything concrete to back this up (other than vague handwaving)?

Looks like I'll have to figure out how to do without a texture, which is going to be more than a tad tricky to do with vertex attributes.


Go to http://developer.nvidia.com/page/home and look it up. Here's a quote from that very page (note that it specifically states GeForce 6 Series GPUs):

Using Vertex Textures Whitepaper Published
Since the introduction of programmability to the GPU, the capabilities of the vertex and pixel processors have been different. Now, with Shader Model 3.0, GeForce 6 Series GPUs have taken a huge step towards providing common functionality for both vertex and pixel shaders.
Quote: Original post by s_p_oneil
...I also believe that nVidia and ATI are both using the "reference" version of the GLSL compiler, which doesn't support any vendor-specific instructions or optimizations. (It probably just converts the code to ARBVP1 assembler.)


I'm not convinced that's the case at all. AFAIK, 3DLabs only released a parser to validate the GLSL syntax and code to convert it to an intermediate language; anything else to get it working would be down to the driver makers.

NV seems to compile via Cg (it allows Cg syntax in their GLSL code); I don't know what ATI does, but I'm guessing neither implementation is fully optimised yet to work with different cards, hence the issues with speed etc. GLSL has only been available for around 10 months now, and the spec has also changed a little, so I'm not overly surprised it's not 100%. I do recall ATI hiring some compiler guys (something about DEC springs to mind) a while back, so I guess they are working on the backend and we'll see improvements over time (they keep muttering about big things coming to OpenGL and rewrites, so that could be part of it). If the situation is still the same, say, six months from now, then I think we'll have a problem [oh]
In the case of NVIDIA and GLSL, it's still going through the teething stage at the moment, but with each new set of drivers I've noticed GLSL programs run faster and more is supported; e.g. separate depth writes are supported with the 66.00 drivers where they're not with the latest official 61.xx drivers.
Though with the GeForce FX line, access to textures in the vertex shader and (real) conditional statements will most likely never be supported, because the hardware isn't capable of it.
