rewolfer

GLSL vert shader sampler issues


OK, so basically I'm trying to use a vertex shader to displace a plane of vertices according to a heightmap that I have in an SDL_Surface (or an array of floats). I was using the usual RGBA format with 4 ubytes per pixel and passing that into the shader's sampler uniform (just using the alpha channel for now), but I was getting something like 0.05 fps. After some googling I realised it was falling back to software rendering, because most cards don't actually support texture sampling in vertex shaders.

It turns out that NVIDIA cards do support it, but only for the formats GL_LUMINANCE_FLOAT32_ATI and GL_RGBA_FLOAT32_ATI - and the latter is the one I planned to use. So what does that actually mean? What should the format be of the block of pixels that I pass to glTexImage2D? An array of four 8-bit floats per pixel? An array of four 32-bit floats per pixel? That seems massive. I couldn't find any clear spec on this format, so any clarification would be much appreciated.

On the other hand, what is a luminance format? Is that better suited to my needs? And on the other hand (yes, I have three), is there maybe a way to just access a float array directly instead of passing in a texture?

Thanks in advance! And please respond quickly if you can - the project is due Monday and I've still got a lot to do.
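For context, here's a sketch of what I mean by "just using the alpha channel" - pulling the alpha out of a tightly packed RGBA8 pixel buffer into a plain float array (the function and variable names are just illustrative, not real SDL/GL calls):

```c
#include <stddef.h>

/* Extract the alpha channel of a tightly packed RGBA8 pixel buffer into
 * a float array in [0, 1], suitable for uploading with GL_LUMINANCE /
 * GL_FLOAT. Names here are illustrative only. */
static void alpha_to_heightmap(const unsigned char *rgba,
                               float *heights,
                               size_t width, size_t height)
{
    for (size_t i = 0; i < width * height; ++i)
        heights[i] = rgba[i * 4 + 3] / 255.0f;  /* alpha is byte 3 of each texel */
}
```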

Luminance is a single channel, which is probably what you're after.

I can't find a clear spec on GL_LUMINANCE_FLOAT32_ATI through Google, so it's not obvious how many bytes per pixel the driver would expect.

thanks bastard guy :P
Hmm, so I tried:

glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_FLOAT32_ATI, 512, 512, 0, GL_LUMINANCE, GL_FLOAT, pixs);

where pixs is a float array of 512x512 elements, but everything is still extremely slow. I was fairly sure that format expects 32-bit elements, but whatever it is, it hasn't worked so far. Did I do something wrong?

And can you think of any alternatives to the texture approach? Can GLSL take uniform float arrays?

GREAT SUCCESS!
I am happy beyond words - seriously, I shouted out loud.

Thanks to everyone for the help. I used the luminance type and passed in an array of floats with my data. I also found that it's necessary to use NEAREST filtering - these cards won't do hardware linear filtering of float textures in the vertex units. V-man's link was a great help.
I assume GL_RGBA32F will work too, if you work out how to organise each pixel. From what I've read it's (surprisingly) 32 bits each for R, G, B and A.
Anyway, this is what I used to set it up:

glBindTexture(GL_TEXTURE_2D, texID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE32F_ARB, 512,512, 0, GL_LUMINANCE, GL_FLOAT, pixs);
// where pixs is a pointer to 512x512 floats

Change one thing and you're down to < 1 fps.
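For completeness, the shader side looks roughly like this - a sketch with my own uniform names, so treat it as illustrative. The key point is that vertex texture fetch has no automatic mipmap selection, so you sample with an explicit LOD via texture2DLod:

```glsl
// GLSL 1.10-era vertex shader sketch (uniform names are illustrative).
uniform sampler2D heightMap;   // the GL_LUMINANCE32F_ARB texture above
uniform float heightScale;     // assumed world-space scale factor

void main()
{
    // Sample the single-channel height; with a luminance texture the
    // value is replicated into r, g and b, so .r is enough.
    float h = texture2DLod(heightMap, gl_MultiTexCoord0.st, 0.0).r;
    vec4 displaced = gl_Vertex + vec4(0.0, h * heightScale, 0.0, 0.0);
    gl_Position = gl_ModelViewProjectionMatrix * displaced;
}
```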

thanks again guys

GL_LUMINANCE32F_ARB and GL_RGBA32F_ARB are floating-point internal formats; each component is 32 bits.
There is no such thing as an 8-bit float. The smallest float format GPUs support is 16-bit (GL_LUMINANCE16F_ARB), but that kind of texture is not supported in the vertex units.
The largest GPUs support, as of now, is 32-bit float textures.
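To put numbers on that: each 32-bit component costs 4 bytes per texel, so the storage difference between the two formats is just a multiplication (a rough helper, names mine, not a GL call):

```c
/* Bytes needed for a w x h texture with `components` channels of
 * 32-bit (4-byte) floats each. Illustrative arithmetic, not a GL call. */
static unsigned long tex_bytes_f32(unsigned long w, unsigned long h,
                                   unsigned long components)
{
    return w * h * components * 4UL;
}
```

At 512x512 that comes to 1 MiB for GL_LUMINANCE32F_ARB versus 4 MiB for GL_RGBA32F_ARB.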
