Still no Vertex texture fetching with ATI cards (4850)



I have just replaced an ATI X1950 Pro with an ATI 4850, since the X1950 did not support vertex texture fetching (VTF). But it still does not work with my new 4850! Is it really the case that ATI still does not support VTF? The code that implements VTF runs fine on my even older GF 7600GS, but when I put the 4850 in the machine it just crashes.

Share on other sites
In this post I see that you are using tex2D(heightmap, IN.uv) to read the height, but in the vertex shader the GPU doesn't know which level of detail (mipmap) to use, therefore texture2DLod (the GLSL function) has to be used in the vertex shader. I don't know the Cg equivalent, but it doesn't look like you are passing any LOD argument to tex2D.

Using glGetIntegerv, read the value of GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS; it should be non-zero if VTF is supported. If you are using the same texture in the pixel shader as well, make sure GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS is also non-zero.
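[Editor's note: Cg does have a counterpart to GLSL's texture2DLod, namely tex2Dlod, which takes a float4 coordinate whose .w component selects the mipmap level. A sketch of the fetch with an explicit LOD, reusing the names from the thread (heightmap, IN.uv); this assumes a Cg profile that supports vertex texturing at all:]

```
// Cg vertex-shader fetch with an explicit LOD.
// The .w component of the coordinate selects mipmap level 0.
float h = tex2Dlod(heightmap, float4(IN.uv, 0.0, 0.0)).x;
```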

Share on other sites
Well, the code you link to runs fine on a GF 7600GS, so it must be the ATI cards (I have also experienced this on an X1950 Pro) that lack the VTF functionality.

I have also tried enabling mipmapping in the OpenGL code, but it does not help.

Share on other sites
Quote:
 Original post by mlt: Well, the code you link to runs fine on a GF 7600GS, so it must be the ATI cards (I have also experienced this on an X1950 Pro) that lack the VTF functionality.

The nVidia drivers may be more tolerant and assume mipmap level 0 when not specified, but according to the GLSL specification, level of detail is not implicitly computed for vertex shaders, so you have to tell the GPU which level of detail to use.
Quote:
 Original post by mlt: I have also tried enabling mipmapping in the OpenGL code, but it does not help.

And did you specify the LOD to use in the shader?
--

What values do you get for GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS and GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS?

Share on other sites
The code below returns 16:

GLint paramus[4];
glGetIntegerv( GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, paramus);
std::cout << "paramus = " << *paramus << std::endl;

Share on other sites
Quote:
 Original post by mlt: How do I read those values? I have tried:

GLint* params;
glGetIntegerv( GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, params);
std::cout << "params = " << params << std::endl;

It compiles OK, but the program crashes.

glGetIntegerv() expects a pointer to an integer variable it can write into. Your params is an uninitialized pointer, so the call writes through a garbage address and crashes. Declare a GLint variable and pass its address instead.

Share on other sites
I found out :-) Both:

GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS
GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS

return 16, so I guess the problem is the call:

float h = tex2D(height_texture, IN.uv);

in the vertex program on a Radeon card. Is there some subset of the Cg functions that has different names on ATI cards, or simply does not exist?

Share on other sites
Another thing: the profiles that are loaded when I run the application are arbvp1 and arbfp1. Maybe these profiles do not support the functions?

EDIT: it seems to be this function:

cgGLSetOptimalOptions(vertex_profile())

that determines which profile is used. But it's a Cg-specific function, so I don't think it's possible to get a profile other than arbvp1 for the vertex program.
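[Editor's note: arbvp1 has no texture-sampling instructions, which is consistent with the crash; vertex texturing needs a more capable profile (vp40 on NVIDIA hardware, or the GLSL profiles in newer Cg releases). A sketch of asking the Cg runtime for the best profile the driver exposes, using the cgGLGetLatestProfile / cgGLSetOptimalOptions entry points (fragment only; assumes a valid Cg and GL context, and cannot be run standalone):]

```
// Ask the runtime for the most capable vertex profile the driver exposes,
// instead of hard-coding arbvp1.
CGprofile vertex_profile = cgGLGetLatestProfile(CG_GL_VERTEX);
cgGLSetOptimalOptions(vertex_profile);
// If this still reports CG_PROFILE_ARBVP1, the runtime found nothing better,
// and vertex texture fetch will not be available through Cg on this driver.
```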

Share on other sites
What format and filtering are you using for the height-map texture? Have you tried GL_RGBA_FLOAT32_ATI with nearest filtering (as suggested here)?

Share on other sites
I have tried the following formats for the texture:

GL_LUMINANCE32F_ARB
GL_RGBA
GL_LUMINANCE_FLOAT32_ATI

and now

GL_RGBA_FLOAT32_ATI

They all compile, but the call:

float h = tex2D(height_texture, IN.uv);

in the vertex program still makes the application crash. I don't know what you mean by filtering; I don't think I currently use any.
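[Editor's note: "filtering" here means the texture's sampling mode, set with glTexParameteri; early vertex-texture hardware generally required nearest (unfiltered) sampling. A minimal setup for the height map (fragment only; assumes a GL context with the height texture bound to GL_TEXTURE_2D):]

```
// Nearest (unfiltered) sampling, no mipmaps required:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
```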
