HLSL and GLSL syntax are almost the same: replace float4 with vec4 and you're most of the way there.
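Roughly, the translation is a set of mechanical renames. A quick sketch from memory (not exhaustive, so double-check against the GLSL spec):

// HLSL              GLSL
// float4         -> vec4
// float4x4       -> mat4
// saturate(x)    -> clamp(x, 0.0, 1.0)
// lerp(a, b, t)  -> mix(a, b, t)
// tex2D(s, uv)   -> texture2D(s, uv)
// mul(m, v)      -> m * v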
I read in a Valve paper that in HL2 and many other games they used what they call an 'ambient cube' to achieve approximate global-illumination effects on animated models (for static geometry they used radiosity light mapping). It looked really good, and I'd like to see it in action. The problem is, I'm on a Linux computer, and the article used DirectX with HLSL shaders.
You can implement an ambient cube either with a 1px cube-map, which you sample using the world-space normal as a texture coordinate, or you can pass six colour values into the shader and add them, weighted by the normal:

vec3 colorXP = ...; // +x color
vec3 colorXN = ...; // -x color
vec3 colorYP = ...; // +y color
vec3 colorYN = ...; // -y color
vec3 colorZP = ...; // +z color
vec3 colorZN = ...; // -z color

vec3 ambient = colorXP * clamp( normal.x, 0.0, 1.0)
             + colorXN * clamp(-normal.x, 0.0, 1.0)
             + colorYP * clamp( normal.y, 0.0, 1.0)
             + colorYN * clamp(-normal.y, 0.0, 1.0)
             + colorZP * clamp( normal.z, 0.0, 1.0)
             + colorZN * clamp(-normal.z, 0.0, 1.0);
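If you go the cube-map route instead, the lookup is a single fetch with the world-space normal; the hardware picks the face and blends across edges for you. A minimal sketch (ambientCube and worldNormal are names I've made up):

uniform samplerCube ambientCube; // 1px faces holding the six ambient colours

vec3 ambientLookup(vec3 worldNormal)
{
    // Cube-map lookups only use the direction, so exact normalization isn't critical.
    return textureCube(ambientCube, worldNormal).rgb;
}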
Also, they used a specular cube map for rendering of specular highlights. I don't quite understand this one: how do they combine the specular cubemap with the normal map?

Convert the tangent-space normal from the normal map into a world-space normal, then reflect the eye direction around this normal to get the reflection direction. Use the reflection direction as a texture coordinate to sample the cube map.
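As a sketch in GLSL (all the uniform/varying names here are assumptions, and I'm assuming a tangent-to-world matrix is interpolated from the vertex shader):

uniform samplerCube specularCube;
uniform sampler2D   normalMap;
uniform vec3        eyePosWS;   // camera position in world space

varying vec3 positionWS;        // surface position in world space
varying vec2 texCoord;
varying mat3 tangentToWorld;    // TBN matrix built from the mesh tangent frame

void main()
{
    // Decode the tangent-space normal from [0,1] storage into [-1,1].
    vec3 normalTS = texture2D(normalMap, texCoord).xyz * 2.0 - 1.0;
    vec3 normalWS = normalize(tangentToWorld * normalTS);

    // Reflect the view ray about the world-space normal...
    vec3 viewDir = normalize(positionWS - eyePosWS);
    vec3 reflDir = reflect(viewDir, normalWS);

    // ...and use the reflection direction to sample the specular cube map.
    vec3 specular = textureCube(specularCube, reflDir).rgb;
    gl_FragColor  = vec4(specular, 1.0);
}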
Then, they compute the specular component based off the nearest specular cube map in the world, dynamically calculated. Are these various cube maps updated dynamically?

No, the cube-maps aren't dynamic in HL2 (though they could be implemented so as to be dynamic). After calculating the surface normal (which may involve reading from a normal-map or not), they use the normal to determine the ambient colour in that direction, as above.
And then we get to model shading. So they approximate radiosity using an ambient cube. Do they use this just for the ambient component, or do they use it for diffuse/normal mapping as well? Or do they normal-map an ambient component (which doesn't make any sense in my mind)? What exactly do they do?
So normal mapping is done first, to determine the actual normal of a pixel.
After that, you can use the normal to find the ambient colour in that direction, and you can also use the normal to calculate the diffuse lighting (using Phong/etc. for models, or lightmaps for the world), and if required, you can reflect the eye direction around the normal to calculate a specular reflection direction.

The lightmapping is quite similar to traditional lightmapping, but instead of baking out a single lightmap result, they produce 3 lightmaps for the world.
It seems that they do offline radiosity lightmapping for world geometry, and normal map it. There they have both 'ambient' and diffuse lighting. Statically, that is.
All the lightmaps are generated without normal-mapping -- only the geometric normals are used.
The first lightmap is generated as if all of the normals were bent slightly in a certain direction (say, slightly north).
The 2nd lightmap is generated as if all of the normals were bent slightly in a different direction (say, slightly south-east).
The 3rd lightmap is generated as if all of the normals were bent slightly in another different direction (say, slightly south-west).
When rendering the world-geometry at runtime, all 3 light-maps are read from, and are mixed together with different weights, which are determined by the normal map.
e.g. if the normal-mapped normal is pointing slightly north, then more weight is given to the 1st lightmap; if it's pointing slightly south, more weight is given to the 2nd/3rd lightmaps, and so on.
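In shader terms the blend might look something like the sketch below. The three basis directions and the squared-weight normalization follow Valve's published course notes as I remember them, so treat the exact constants as an assumption rather than the actual Source code:

uniform sampler2D lightmap1, lightmap2, lightmap3; // the 3 directional lightmaps
uniform sampler2D normalMap;

varying vec2 lightmapUV;
varying vec2 normalUV;

// Tangent-space basis: three directions tilted 'north', 'south-east' and
// 'south-west' away from the surface, matching the bake directions above.
const vec3 basis1 = vec3(-0.40825,  0.70711, 0.57735);
const vec3 basis2 = vec3(-0.40825, -0.70711, 0.57735);
const vec3 basis3 = vec3( 0.81650,  0.0,     0.57735);

void main()
{
    // Decode the tangent-space normal from [0,1] storage into [-1,1].
    vec3 normalTS = texture2D(normalMap, normalUV).xyz * 2.0 - 1.0;

    // Weight each lightmap by how closely the bumped normal faces its bake direction.
    vec3 w = vec3(max(dot(normalTS, basis1), 0.0),
                  max(dot(normalTS, basis2), 0.0),
                  max(dot(normalTS, basis3), 0.0));
    w *= w;                  // squaring sharpens the falloff between the three maps
    w /= (w.x + w.y + w.z);  // renormalize so the weights sum to 1

    vec3 diffuse = texture2D(lightmap1, lightmapUV).rgb * w.x
                 + texture2D(lightmap2, lightmapUV).rgb * w.y
                 + texture2D(lightmap3, lightmapUV).rgb * w.z;

    gl_FragColor = vec4(diffuse, 1.0);
}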
Wow, that cleared up everything. Thanks for the explanation!
Also, I must say I'm astounded by the ingeniousness here...