GLSL: packing normals into 2 bytes

3 comments, last by n00body 14 years, 6 months ago
Hey, I have a deferred lighting question. My lighting looks kinda ugly. I know I need to pack my normal information into 2 channels of my RGBA texture to increase precision. I've scoured the internet for how to do this but have so far been unsuccessful.

Packing:

    vec3 normal = normal_info;
    gl_FragData[1] = vec4(normal.x, normal.x * 256.0 - floor(normal.x * 256.0),
                          normal.y, normal.y * 256.0 - floor(normal.y * 256.0));

Unpacking:

    normal.x = normaltex.x + normaltex.y * 256.0;
    normal.y = normaltex.z + normaltex.a * 256.0;

This doesn't work and in fact looks worse. What is the proper way to do this without bitwise operations?
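For later readers: the snippet above mixes its scale factors (if you multiply by 256 when packing, you must divide when unpacking, and an 8-bit channel holds 255 steps, not 256). A common fixed-point split for a value already remapped into [0, 1) looks roughly like this; `pack16`/`unpack16` are hypothetical helper names, not anything from the thread:

```glsl
// Pack a [0,1) float into two 8-bit channels (high, low).
vec2 pack16(float v)
{
    vec2 enc = fract(v * vec2(1.0, 255.0));
    enc.x -= enc.y / 255.0;   // strip the low-order bits from the high channel
    return enc;
}

// Recombine the two channels into one float.
float unpack16(vec2 enc)
{
    return enc.x + enc.y / 255.0;
}
```

With the normal remapped from [-1, 1] to [0, 1] first (`n * 0.5 + 0.5`), x would go in the first two channels and y in the last two, mirroring the layout the post attempts.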
There's some good information about normal packing here.

Regards
elFarto
Thanks, Farto. That's an awesome link, but I guess my question is actually more general.

How do you put the first 2 bytes of any float into two channels of a texture?
Never mind. I just changed the texture to RGBA16 and that fixed it!
I've linked a site that has shader code for (un)packing a float to a vec4. It can easily be modified to handle vec3 and vec2 packing as well.

http://www.ozone3d.net/blogs/lab/20080604/glsl-float-to-rgba8-encoder/
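In case the link rots: the encoder there is built on the usual fract/dot idiom, sketched below from memory (the function names are mine, and the article's exact constants may differ):

```glsl
// Spread a [0,1) float across four 8-bit channels,
// one "base-255 digit" per channel.
vec4 packFloatToVec4(float v)
{
    vec4 enc = fract(v * vec4(1.0, 255.0, 65025.0, 16581375.0));
    enc.xyz -= enc.yzw / 255.0;   // strip lower-order bits from higher channels
    return enc;
}

// Recombine the four channels into one float.
float unpackFloatFromVec4(vec4 enc)
{
    return dot(enc, vec4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));
}
```

Dropping the last one or two channels (and the matching constants) gives the vec3 and vec2 variants mentioned above.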


