
How to pass GL_BYTE to vertex shader?



Hello,

 

If I have an array of 3 bytes per vertex - which I'm padding to 4 bytes to keep each vertex data in a 4-byte word - and pass it down to the vertex shader as a GL_UNSIGNED_BYTE data type, how does it appear in the shader? Does it get automatically converted into 4 floats? In other words, can I use the data in the shader as a vec4?

 

Also, will the floats be normalized between 0.0 and 1.0, or will they be converted into numbers from 0.0 to 255.0?

 

If anybody has a link to a reference on how these non-float data types are passed down to the vertex shader, I would appreciate it.
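For reference, the shader side of this might look like the fragment below (the attribute name is made up for illustration). Whatever the source data type, a float-typed attribute declaration receives the four components as a vec4 of floats:

```glsl
// Hypothetical legacy-GLSL vertex shader: the 4 packed bytes bound to
// this attribute arrive as a vec4 of floats (their range depends on the
// 'normalized' flag passed to glVertexAttribPointer).
attribute vec4 a_packedData;

void main()
{
    // Use the data like any other vec4, e.g. as a color.
    gl_FrontColor = a_packedData;
    gl_Position   = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```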

 

Thanks.


Thanks. I guess all I need to do is set the normalized flag for the attributes and they will show up as 4 floats in the range [0.0, 1.0].


To answer this question, we'd have to look at glVertexAttribPointer's prototype:

void glVertexAttribPointer(GLuint index, GLint size, GLenum type, GLboolean normalized, GLsizei stride, const GLvoid * pointer);

 

The size and type parameters tell OpenGL the data type, and how many elements of that type are laid out in memory starting from the address you provide in the pointer parameter. The normalized parameter specifies whether each value should be passed in as-is when set to GL_FALSE, or scaled down by its data type's maximum with GL_TRUE, yielding a value between 0.0 and 1.0 for unsigned types and between -1.0 and 1.0 for signed types.

 

In this case, you're sending 4 "unsigned char" values, and correctly specifying them as GL_UNSIGNED_BYTE. Setting normalized to GL_FALSE, they should remain as their current whole numbers (from 0 to 255, converted to float). Setting normalized to GL_TRUE should scale each component into [0.0, 1.0] by dividing by 255.0f (a signed GL_BYTE would instead be divided by 127.0f).
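Concretely, the setup described above might look like the fragment below. The attribute index is an assumption for this example, and the fragment presumes a VBO is already bound, so it is a sketch rather than a complete program:

```c
/* 4 unsigned bytes per vertex, normalized into [0.0, 1.0] in the
 * shader. Index 1 is an assumption for this example. */
glVertexAttribPointer(1,                 /* attribute index          */
                      4,                 /* components per vertex    */
                      GL_UNSIGNED_BYTE,  /* source data type         */
                      GL_TRUE,           /* normalize into [0, 1]    */
                      0,                 /* stride; 0 = tightly packed */
                      (const GLvoid *)0  /* offset into bound VBO    */);
glEnableVertexAttribArray(1);
```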

 

That's at least been my experience. I hope this helps!

Edited by Vincent_M


Vincent,

 

Thanks! It does help. But I do have a few more questions:

 

If I'm passing the data as a "generic" attribute that I will use in the shader as a normal vector, and it's being passed as GL_BYTE (i.e., as a SIGNED byte), will setting normalized to GL_TRUE divide by 127.0f or by 255.0f? From what I understand, it will do the right thing: if it's a SIGNED byte it will divide by 127. And if it's an attribute that will be used as a color and is passed down as GL_UNSIGNED_BYTE, then it should be divided by 255. Right?

 

Thanks again.


You are correct in both cases: GL_BYTE should divide by 127.0f for your normal (giving a range of -1.0 to 1.0), and GL_UNSIGNED_BYTE should divide by 255.0f. Now, since you're storing only one byte per component, you may want to normalize your vertex normal again in the shader for better accuracy if your lighting doesn't look right, because the quantization introduces error. If you're doing per-fragment lighting, I have seen tutorials online suggest re-normalizing in the fragment shader as well, since interpolation skews the magnitude of the fragment's incoming normal.
