immuner

GLSL vertex shader issue


Hi, I'm dealing with the following issue: I've written a shader that uses a 1D texture to offset the height of a terrain, like so:

vec4 height_map = texture1D(heightmap_sampler, gl_MultiTexCoord0.x);
vec4 vertex = gl_Vertex;
max_height = 0.0;
vertex.y = max_height * height_map;
...

The result: on a relatively new card (a Quadro :)) I get a drop of 15fps on a 60k triangle mesh (400fps down to 385fps). On my other PC (7800GT), though, I get 3fps!!! Is there an issue on this card with GLSL texture fetching in vertex shaders? Any workaround?

P.S. Is there any way to see which profile my card supports?

Vertex texture fetch is generally not very fast, but 3fps sounds excessively slow. You should take a look at the shader info log that is generated on the slower card; maybe that gives you a clue as to why it is so slow.

Also make sure there are no GL errors in your application.
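For example, something like this after your GL calls (a minimal sketch, assuming a GL 2.0 context is current):

#include <stdio.h>
#include <GL/gl.h>

/* Drain the GL error queue; glGetError returns one queued error per call. */
static void check_gl_errors(const char *where)
{
    GLenum err;
    while ((err = glGetError()) != GL_NO_ERROR)
        printf("GL error 0x%04X after %s\n", err, where);
}

/* Re: the "which profile" question, you can also ask the driver, e.g.
   printf("GLSL: %s\n", glGetString(GL_SHADING_LANGUAGE_VERSION)); */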

What kind of texture format and filtering did you set up? It sounds like you are in software mode.

On the GF6/7 series only GL_RGBA_FLOAT32_ATI (and the luminance variant) is supported, and only with GL_NEAREST filtering. I am not sure what is supported on the GF8 series, but I am guessing filtering is finally allowed.

This is my texture setup:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glClear(GL_COLOR_BUFFER_BIT);
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, width, height, GL_RGB, GL_UNSIGNED_BYTE, data);

How could I be in software mode?

P.S. The shader info log says nothing; the shaders compile, link and validate fine :(.

Can you post the entire shader code?

There seem to be some weird things going on in that code, like:
vertex.y = max_height * height_map; <- assigning a vec4 to a float
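The fix would be to pick a single channel of the fetched texel, e.g.:

vertex.y = max_height * height_map.x; // scale one component, not the whole vec4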

Sure.

I noticed what you said, and I'm now just passing a float instead of a vec4.

The problem is that glslDevil won't do shader debugging on any PC I've tried with my app; apparently it doesn't like glDrawArrays.

uniform mat4 world_matrix;
uniform sampler2D heightmap_sampler;
varying float height;
float max_height;

void main()
{
    gl_TexCoord[0] = gl_MultiTexCoord0;
    vec4 height_map = texture2D(heightmap_sampler, gl_TexCoord[0].xy);

    vec4 vertex = gl_Vertex;
    max_height = 800.0;
    vertex.y = max_height * height_map.r;
    height = vertex.y;

    gl_Position = gl_ModelViewProjectionMatrix * world_matrix * vertex;
}

EDIT: Actually, I'm uploading the texture with GL_BGR as the data format and GL_RGB as the internal format, since I'm reading from a BMP. This shouldn't make a difference, but I'm mentioning it anyway.

Judging from the code you posted, you are in software mode.

Here is a correct setup:

GLuint vertex_texture;
glGenTextures(1, &vertex_texture);
glBindTexture(GL_TEXTURE_2D, vertex_texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_FLOAT32_ATI, width, height, 0, GL_LUMINANCE, GL_FLOAT, data);




Quote:
NVIDIA, Vertex_Textures.pdf
Vertex textures are bound using the standard texture calls, using the GL_TEXTURE_2D texture target. Currently only the GL_LUMINANCE_FLOAT32_ATI and GL_RGBA_FLOAT32_ATI formats are supported for vertex textures. These formats contain one or four channels of 32-bit floating-point data, respectively. Be aware that using other texture formats or unsupported filtering modes may cause the driver to drop back to software vertex processing, with a commensurate drop in interactive performance.
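To confirm whether the card can fetch textures in the vertex stage at all, you can query the implementation limit (zero means no hardware vertex texture fetch; requires GL 2.0 headers):

GLint max_vertex_textures = 0;
glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, &max_vertex_textures);
printf("vertex texture image units: %d\n", max_vertex_textures);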

OK, I tried sending my texture as GL_LUMINANCE_FLOAT32_ATI, but there was no difference in fps :(.

So I'm in software mode with my current setup. How can I get into hardware mode, and how do I know which things I need to change? I haven't found a good reference on that :(.

Quote:
MARS_999
Currently only the GL_LUMINANCE_FLOAT32_ATI and GL_RGBA_FLOAT32_ATI formats are supported for vertex textures.


I might be misunderstanding, but I'd be very surprised if I couldn't use other formats in vertex textures (e.g. GL_RGBA8).

Also, is GL_LUMINANCE_FLOAT32_ATI the same as GL_LUMINANCE32F_ARB?

I agree with that. I had a look at the SVT demo on the NVIDIA website, but the shader is written in asm. Anyway, I really don't get why it's so slow on my 7800GT. Sampling 2k triangles in a vertex shader is fine, but sampling 60k really knocks the performance down a lot. I would really like to avoid doing this on the CPU and move as much as I can (math, sampling, etc.) to the GPU; currently I'm doing CPU heightmapping on my 7800GT and GPU heightmapping on my Quadro. Maybe using Cg instead would solve this problem, I don't know.
Generally, I would really like to find some kind of reference covering the technical differences and behaviours among the various NVIDIA and ATI cards, so that solving this kind of problem would be easier.

Quote:
Original post by sprite_hound
Quote:
MARS_999
Currently only the GL_LUMINANCE_FLOAT32_ATI and GL_RGBA_FLOAT32_ATI formats are supported for vertex textures.


I might be misunderstanding, but I'd be very surprised if I couldn't use other formats in vertex textures (e.g. GL_RGBA8).

Also, is GL_LUMINANCE_FLOAT32_ATI the same as GL_LUMINANCE32F_ARB?


Yes, they are equivalent. I am not sure about the GF8 series, but on the GF6/7 series this was a limitation; maybe they removed it with newer drivers, but I doubt it. Best bet is to try it and see what you come up with, and use GLExpert or gDEBugger to see what is going on.
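For what it's worth, the two names even share the same token value, so they are interchangeable at the API level (a compile-time check, assuming glext.h defines both):

#include <GL/glext.h>
/* Both extensions assign 0x8818 to this format, so either name works. */
typedef char token_check[(GL_LUMINANCE_FLOAT32_ATI == GL_LUMINANCE32F_ARB) ? 1 : -1];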

Quote:
Original post by immuner
OK, I tried sending my texture as GL_LUMINANCE_FLOAT32_ATI, but there was no difference in fps :(.

So I'm in software mode with my current setup. How can I get into hardware mode, and how do I know which things I need to change? I haven't found a good reference on that :(.


Did you disable filtering too?
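On the GF6/7 series that means nearest-only, e.g. (same texture target as in your setup):

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); /* no linear minification */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); /* no linear magnification */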

Y.

Quote:
Original post by sprite_hound
Quote:
MARS_999
Currently only the GL_LUMINANCE_FLOAT32_ATI and GL_RGBA_FLOAT32_ATI formats are supported for vertex textures.


I might be misunderstanding, but I'd be very surprised if I couldn't use other formats in vertex textures (e.g. GL_RGBA8).

Also, is GL_LUMINANCE_FLOAT32_ATI the same as GL_LUMINANCE32F_ARB?


If you use a format other than those recommended by NVIDIA, you get software vertex processing. http://developer.nvidia.com has a document called Vertex_Textures.pdf or something like that. It says that two formats are supported: RGBA F32 and LUMINANCE F32. You can have mipmaps, but the filter mode must be nearest, so your code looks OK. The texture must be 2D.
In the vertex shader, use texture2DLod(tex, coord, 0.0), or whatever mipmap level you want.
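For example, in your shader that would be (GLSL; 0.0 selects the base mipmap level):

// explicit-LOD fetch in the vertex stage
vec4 height_map = texture2DLod(heightmap_sampler, gl_TexCoord[0].xy, 0.0);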

GL_LUMINANCE_FLOAT32_ATI is part of the old GL_ATI_texture_float extension; don't use it anymore.
Use GL_ARB_texture_float instead: GL_LUMINANCE32F_ARB or GL_RGBA32F_ARB.
Documentation: http://www.opengl.org/registry/specs/ARB/texture_float.txt
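The upload would then look something like this (a sketch, assuming data already holds one float per texel):

glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE32F_ARB, /* ARB float internal format */
             width, height, 0,
             GL_LUMINANCE, GL_FLOAT, data);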

