

Valve techniques in OpenGL



#1 keelx   Members   -  Reputation: 104


Posted 06 December 2011 - 06:45 PM

I read in a Valve paper that in HL2 and many other games they used what they call an 'ambient cube' to achieve somewhat global-illumination effects on animated models (for static geometry they used radiosity light mapping). It looked really good, and I'd like to see it in action. Problem is, I'm on a Linux computer, and the article used DirectX with HLSL shaders. Also, they used a specular cube map for rendering specular highlights. I don't quite understand this one: how do they combine the specular cubemap with the normal map? Also, it wasn't quite clear in the paper: do they use specular cubemapping only with world geometry, or do they use it for models as well? If not, what do they use for models? (Phong, I would assume.)

I looked up these techniques and found little to nothing covering them, at least not in OpenGL.


In short, the two Valve techniques I'd like to see done in OpenGL are ambient cubemapping and specular cubemapping. If anyone has resources, code samples, GLSL shaders, or better alternatives, that would be very helpful. Also, keep in mind I am on an Intel GMA 950, but that isn't really a problem (I don't think) for these effects.


#2 dpadam450   Members   -  Reputation: 944


Posted 06 December 2011 - 07:03 PM

Well, a cube map is a texture with six faces. Specular is the shininess term, so if there is a red streetlight to the left of the character and a green light to the right, the specular would reflect that environment; the same idea applies to ambient. They might use a stencil cube map to determine visibility of the sky: if you are in a narrow path, only a small part of the sky will be visible in the top face of the cubemap, which would mean that only normals pointing directly up receive any ambient light. Maybe you could post the paper.

#3 MJP   Moderators   -  Reputation: 11736


Posted 06 December 2011 - 07:22 PM

The HLSL code should be translatable to GLSL, they don't do anything fancy. For the ambient cube, they just do a weighted lookup based on the normal. I would recommend just going with spherical harmonics instead, which is a more general solution that can provide better quality. For the specular cubemaps, they just compute a reflection vector per pixel based on the normal. You can do this easily with the reflect() intrinsic. To combine this with normal mapping, you just compute the reflection using the normal map normal rather than the interpolated vertex normal. They use the specular cubemaps for dynamic objects, and also have the option of adding analytical specular from a small number of dynamic light sources.
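For reference, here is a minimal GLSL fragment-shader sketch of that specular lookup. The uniform/varying names (u_envMap, u_eyePos_ws, etc.) are illustrative, not anything from Valve's code:

uniform samplerCube u_envMap;   // nearest precomputed specular cubemap
uniform vec3 u_eyePos_ws;       // camera position in world space
varying vec3 v_pos_ws;          // surface position in world space
varying vec3 v_normal_ws;       // world-space normal (use the normal-mapped normal if you have one)

void main()
{
    vec3 N = normalize(v_normal_ws);
    vec3 V = normalize(v_pos_ws - u_eyePos_ws); // eye-to-surface incident vector
    vec3 R = reflect(V, N);                     // reflect() expects the incident direction
    gl_FragColor = vec4(textureCube(u_envMap, R).rgb, 1.0);
}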

BTW the paper is here: http://www.valvesoftware.com/publications/2006/SIGGRAPH06_Course_ShadingInValvesSourceEngine.pdf

#4 keelx   Members   -  Reputation: 104


Posted 06 December 2011 - 10:30 PM



I'll look into spherical harmonics; I've heard of them before.


Reading this and another paper (can't find it; the title is 'directX tutorial 10, half-life 2 shading'), I still don't quite get the concepts. It seems that they do offline radiosity lightmapping for world geometry and normal map it, so they have both 'ambient' and diffuse lighting there, statically. Then they compute the specular component from the nearest specular cube map in the world, calculated dynamically. Are these various cube maps updated dynamically? And if they are to be normal mapped, wouldn't the cube map have to be precomputed separately for every surface that has a specular term, using the normals of the surface (provided by a normal map)?

And then we get to model shading. So they approximate radiosity using an ambient cube. Do they use this for the actual ambient component? Or do they use it for diffuse/normal mapping as well? Or do they normal map an ambient component (which doesn't make any sense in my mind)? What exactly do they do? Then there's the specular cube map. The models apparently use the same cube maps as the world, and my questions about the world lighting still remain here.

So, in all, I don't really understand what's going on here. If someone could explain this a bit more clearly and in more depth than the paper, that would be great, preferably in somewhat of a step-by-step fashion.

#5 Hodgman   Moderators   -  Reputation: 31786


Posted 06 December 2011 - 10:45 PM

I read in a Valve paper that in HL2 and many other games they used what they call an 'ambient cube' to achieve somewhat global-illumination effects on animated models (for static geometry they used radiosity light mapping). It looked really good, and I'd like to see it in action. Problem is, I'm on a Linux computer, and the article used DirectX with HLSL shaders.

The syntax between HLSL and GLSL is almost the same. Replace float4 with vec4 and you're most of the way there.
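The most common substitutions, for example (general HLSL/GLSL knowledge, not from the paper itself):

// HLSL                         GLSL
// float4 / float3 / float2 -> vec4 / vec3 / vec2
// float4x4                 -> mat4
// mul(m, v)                -> m * v
// saturate(x)              -> clamp(x, 0.0, 1.0)
// tex2D(s, uv)             -> texture2D(s, uv)
// texCUBE(s, r)            -> textureCube(s, r)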

You can implement an ambient cube either with a 1px cube-map, which you sample using the world-space normal as a texture-coordinate, or you can pass 6 colour values into the shader and add them, weighted by the normal:
vec3 colorXP = ...;// +x color
vec3 colorXN = ...;// -x color
vec3 colorYP = ...;// +y color
vec3 colorYN = ...;// -y color
vec3 colorZP = ...;// +z color
vec3 colorZN = ...;// -z color
vec3 ambient = colorXP * clamp( normal_ws.x, 0.0,1.0)
             + colorXN * clamp(-normal_ws.x, 0.0,1.0)
             + colorYP * clamp( normal_ws.y, 0.0,1.0)
             + colorYN * clamp(-normal_ws.y, 0.0,1.0)
             + colorZP * clamp( normal_ws.z, 0.0,1.0)
             + colorZN * clamp(-normal_ws.z, 0.0,1.0);
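And a sketch of the 1px cube-map alternative, assuming the six colours above have been uploaded as the six 1x1 faces of u_ambientCube (an illustrative name):

uniform samplerCube u_ambientCube; // six 1x1 faces, one colour per axis direction

vec3 ambientCube1px(vec3 n_ws)
{
    // A single lookup replaces the six-term weighted sum; how smoothly
    // the faces blend at the edges depends on the hardware's cube filtering.
    return textureCube(u_ambientCube, normalize(n_ws)).rgb;
}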

Then they compute the specular component from the nearest specular cube map in the world, calculated dynamically. Are these various cube maps updated dynamically?

No, the cube-maps aren't dynamic in HL2 (though they could be implemented to be dynamic).

And then we get to model shading. So they approximate radiosity using an ambient cube. Do they use this for the actual ambient component? Or do they use it for diffuse/normal mapping as well? Or do they normal map an ambient component (which doesn't make any sense in my mind)? What exactly do they do?

After calculating the surface normal (which may or may not involve reading from a normal-map), they use the normal to determine the ambient colour in that direction, as above.

So normal mapping is done first, to determine the actual normal of a pixel.
After that, you can use the normal to find the ambient colour in that direction, use it to calculate the diffuse lighting (Phong etc. for models, lightmaps for the world), and, if required, reflect the eye-direction around it to get a specular reflection direction.
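Putting that order together, a minimal GLSL sketch for a model (ambientCube() is assumed to wrap the weighted sum shown above; all uniform/varying names are illustrative, not Valve's):

uniform samplerCube u_envMap;   // nearest precomputed specular cubemap
uniform vec3 u_lightDir_ws;     // direction of one analytical light
uniform vec3 u_lightColor;
uniform vec3 u_eyePos_ws;
varying vec3 v_pos_ws;
varying vec3 v_normal_ws;

vec3 ambientCube(vec3 n);       // the six-colour weighted sum shown earlier

void main()
{
    vec3 N = normalize(v_normal_ws);                  // 1. per-pixel normal (after normal mapping)
    vec3 ambient  = ambientCube(N);                   // 2. ambient colour in that direction
    vec3 diffuse  = u_lightColor * max(dot(N, -u_lightDir_ws), 0.0); // 3. diffuse term
    vec3 V        = normalize(v_pos_ws - u_eyePos_ws);
    vec3 specular = textureCube(u_envMap, reflect(V, N)).rgb;        // 4. cubemap specular
    gl_FragColor  = vec4(ambient + diffuse + specular, 1.0);
}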

Also, they used a specular cube map for rendering specular highlights. I don't quite understand this one: how do they combine the specular cubemap with the normal map?

Convert the tangent-space normal map into a world-space normal, then reflect the eye-direction around this normal to get the reflection direction. Use the reflection direction as a texture-coordinate to sample the cube map.
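A hedged GLSL sketch of that conversion (the tangent-frame varyings and sampler names are assumed, not from the paper):

uniform samplerCube u_envMap;
uniform sampler2D u_normalMap;
uniform vec3 u_eyePos_ws;
varying vec3 v_pos_ws;
varying vec3 v_tangent_ws;
varying vec3 v_bitangent_ws;
varying vec3 v_normal_ws;
varying vec2 v_uv;

void main()
{
    // Unpack the tangent-space normal from [0,1] into [-1,1]
    vec3 n_ts = texture2D(u_normalMap, v_uv).xyz * 2.0 - 1.0;
    // Rotate it into world space using the interpolated tangent frame
    mat3 tbn = mat3(normalize(v_tangent_ws),
                    normalize(v_bitangent_ws),
                    normalize(v_normal_ws));
    vec3 n_ws = normalize(tbn * n_ts);
    vec3 eyeToSurf = normalize(v_pos_ws - u_eyePos_ws);
    vec3 refl = reflect(eyeToSurf, n_ws);   // reflection direction
    gl_FragColor = textureCube(u_envMap, refl);
}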

It seems that they do offline radiosity lightmapping for world geometry and normal map it, so they have both 'ambient' and diffuse lighting there, statically.

The lightmapping is quite similar to traditional lightmapping, but instead of baking out a single lightmap, they produce 3 lightmaps for the world.
All the lightmaps are generated without normal-mapping -- only the geometric normals are used.
The first lightmap is generated as if all of the normals were bent slightly in a certain direction (say, slightly north).
The 2nd lightmap is generated as if all of the normals were bent slightly in a different direction (say, slightly south-east).
The 3rd lightmap is generated as if all of the normals were bent slightly in another different direction (say, slightly south-west).

When rendering the world-geometry at runtime, all 3 light-maps are read from, and are mixed together with different weights, which are determined by the normal map.
e.g. if the normal-mapped normal is pointing slightly north, more weight is given to the 1st lightmap; if it's pointing slightly south, more weight goes to the 2nd/3rd lightmaps, and so on.
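A sketch of that mixing in GLSL, using the three tangent-space "bump basis" directions from the linked SIGGRAPH 2006 paper (the lightmap/sampler names are illustrative):

uniform sampler2D u_lightmap0;
uniform sampler2D u_lightmap1;
uniform sampler2D u_lightmap2;
uniform sampler2D u_normalMap;
varying vec2 v_lightmapUV;
varying vec2 v_uv;

// The three bent-normal directions, expressed in tangent space
const vec3 bumpBasis0 = vec3( 0.8164966,  0.0,        0.5773503);
const vec3 bumpBasis1 = vec3(-0.4082483,  0.7071068,  0.5773503);
const vec3 bumpBasis2 = vec3(-0.4082483, -0.7071068,  0.5773503);

void main()
{
    vec3 n_ts = normalize(texture2D(u_normalMap, v_uv).xyz * 2.0 - 1.0);
    // Weight each lightmap by how closely the normal points along its basis direction
    vec3 w = vec3(max(dot(n_ts, bumpBasis0), 0.0),
                  max(dot(n_ts, bumpBasis1), 0.0),
                  max(dot(n_ts, bumpBasis2), 0.0));
    w *= w;                                // squaring softens the transitions
    w /= max(w.x + w.y + w.z, 0.0001);     // renormalise so the weights sum to one
    vec3 light = texture2D(u_lightmap0, v_lightmapUV).rgb * w.x
               + texture2D(u_lightmap1, v_lightmapUV).rgb * w.y
               + texture2D(u_lightmap2, v_lightmapUV).rgb * w.z;
    gl_FragColor = vec4(light, 1.0);
}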

#6 keelx   Members   -  Reputation: 104


Posted 06 December 2011 - 10:51 PM

Wow, that cleared up everything. Thanks for the explanation!

Also, I must say I'm astounded by the ingenuity here...





