spek

Reflective surfaces...


Hi, a couple of questions. I was wondering how Half-Life 2 renders some of its reflective objects. Several surfaces such as wooden walls, floors, or metal benches reflect the surrounding area. I'd guess it uses a nearby cubemap for that. But instead of reflecting everything, it seems to reflect only bright objects such as a window or light source. Dark objects aren't reflected (or only a little; that might depend on the reflectivity of the surface). Instead, you just see the decal-map pixels. On top of that, some of those surfaces use bump maps as well.

Does anyone have an idea what such a shader would look like? This is my guess, but it's probably incomplete or just totally wrong. As you can see, several problems show up:
- how to calculate the intensity of a color
- how to combine environment mapping with bump mapping
- how to mix all the colors for a nice result

Pseudo fragment shader:
/* Obtain reflection color, bump-mapped normal and decal color	*/
// Unpack the [0,1] texture range to a [-1,1] normal (note: -1, not +1)
float3  normal		= 2 * tex2D( bumpMap, uv ).rgb - 1;
	// < do something so that the perturbed normal can be used for the
	//   reflect function below >
float3	reflectCol	= texCUBE( envMap, reflect( eyeV, normal ) );

float3	decalCol	= tex2D( decalMap, uv ).rgb;

/* Calculate intensity/brightness of reflectCol	*/
// This code is probably wrong. A pure red color (255,0,0) would give a
// brightness of only 0.33 while this is quite an intensive color, I think.
float	brightness	= (reflectCol.r + reflectCol.g + reflectCol.b) / 3;

/* Calculate the result	*/
	color	= lerp( decalCol, reflectCol, brightness * surfReflectivity )
                  * dot( l, normal );
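The mixing step in the pseudocode above can be sketched outside a shader in plain Python; all sample values here are hypothetical, and `lerp` and `brightness` follow the definitions in the pseudocode:

```python
def lerp(a, b, t):
    """Linear interpolation: returns a when t == 0, b when t == 1."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def brightness(color):
    """Plain average of the RGB channels (the guess from the shader above)."""
    return sum(color) / 3.0

decal = (0.4, 0.3, 0.2)            # hypothetical decal-map sample
reflect_col = (1.0, 0.9, 0.8)      # hypothetical bright cubemap sample
surf_reflectivity = 0.8            # hypothetical material constant

t = brightness(reflect_col) * surf_reflectivity
color = lerp(decal, reflect_col, t)
```

Because `reflect_col` is bright, `t` ends up high and the reflection dominates; a dark cubemap sample would leave mostly the decal color, which is the effect described above.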
Greetings, Rick

Quote:
Original post by spek
But instead of just reflecting everything, it seems only reflecting bright objects like a window or lightSource. Dark objects aren't reflected (or just a little, that might depend on the reflectivity of the surface).


That's probably some HDR going on there.

Lights/windows are orders of magnitude more luminous than other objects. If only a fraction of the incoming light is reflected, only the very bright reflected areas will still show up in the reflection against the diffuse light also coming from the object in question.

The trick is to convert the physically correct luminance values into the subjective interpretation of that color only as the very last post-processing stage (so certainly not in cubemaps), and effects like this will appear automagically.

Thanks for the reply! But I'm afraid I don't understand the last thing you mentioned. I know Half-Life 2 has been doing something with HDR recently, but I thought the original version didn't use that yet, so maybe the shaders they use aren't that complicated.

If I'm right, you could calculate the luminance for every pixel (from a cubemap or something else) like this:
luminance = red*0.299 + green*0.587 + blue*0.114

With that information, isn't it possible to mix diffuse and reflected color based on this luminance:
color = lerp( decalMap * diffuseColor, reflectedColor, luminance );
So, luminance 0 means 100% decalMap colors in this case.

Now only the bright stuff should be reflected, although it's probably not really physically correct. Does the amount of reflectedColor in the end result also depend on the camera/light vector?
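As a rough sketch in plain Python (all sample values hypothetical), that luminance-weighted mix would look like:

```python
def luminance(r, g, b):
    # Rec. 601 luma weights; the eye is most sensitive to green
    return 0.299 * r + 0.587 * g + 0.114 * b

def lerp(a, b, t):
    return [x + (y - x) * t for x, y in zip(a, b)]

decal = [0.5, 0.4, 0.3]        # hypothetical decalMap * diffuseColor
reflected = [0.2, 0.9, 0.1]    # hypothetical cubemap sample
t = luminance(*reflected)      # bright samples dominate the mix
color = lerp(decal, reflected, t)
```

With luminance 0 the result is 100% decal color, exactly as described above.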

Greetings (en dank u :) ),
Rick

I kind of doubt something like a window casting light on the floor is done in real time, it was probably precomputed. Not that it isn't possible to do it real time, it would just be a waste because a window is a static object.

I believe the way HL2 does "specularity" is in fact by cube maps. No HDR needed. If they only want brighter objects to appear in the cube maps, it's easy to preprocess them to get that effect. Not that HDR is a bad thing, but the original HL2 doesn't use it. Anyway the reflected color depends on the camera location through the reflection vector computation.

Where you say "do something so that the perturbed normal can be used for the reflect function below" you would probably transform the normal into world space which is the most (probably only) convenient space to interpret your cube maps. To do that you need the tangent space matrix which can be created from the tangent, bitangent, and normal vectors. Hmm, I guess that works.
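That tangent-to-world rotation can be sketched in plain Python (hypothetical basis vectors; in a shader this would be a single float3x3 multiply):

```python
def tangent_to_world(n_ts, tangent, bitangent, normal):
    """Rotate a tangent-space normal into world space; the T, B, N
    vectors are the surface basis expressed in world space."""
    return [n_ts[0] * tangent[i] + n_ts[1] * bitangent[i] + n_ts[2] * normal[i]
            for i in range(3)]

# With an axis-aligned basis, an unperturbed normal (0, 0, 1)
# comes out as the surface normal itself.
t, b, n = [1, 0, 0], [0, 1, 0], [0, 0, 1]
world_normal = tangent_to_world([0, 0, 1], t, b, n)
```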

You don't necessarily need to mix or lerp between the reflected color and diffuse color; it would be more common to just add them. Physically it makes sense; some light from the environment is reflected in a diffuse way, some is reflected in a specular way, the sum is what gets to your eye. Of course, diffuse and specular are just approximations of different levels of surface roughness...
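That additive combination, as a rough sketch in Python (hypothetical values; `reflectivity` scales the specular term):

```python
def shade(diffuse, specular, reflectivity):
    # Add the scaled specular (reflected) term to the diffuse term
    # and clamp to the displayable [0, 1] range.
    return [min(d + reflectivity * s, 1.0) for d, s in zip(diffuse, specular)]

color = shade([0.4, 0.3, 0.2], [0.9, 0.9, 1.0], 0.3)
```

Unlike a lerp, a bright specular term here brightens the result instead of replacing the diffuse color.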

If you did want to calculate the perceived brightness or luminance of something you might want to use a weighted average instead of just a plain average...the eye is more sensitive to intensity variations in green wavelengths. See Charles Poynton's color FAQ for more than you ever wanted to know about this topic:

http://www.poynton.com/notes/colour_and_gamma/ColorFAQ.html#RTFToC3

Thanks! In that case, my first suggestion wasn't that bad after all. I found a way to calculate luminance, so the only thing left is that tangent-space matrix. I checked some demos, but they all did it in a different way. One showed it with tangents, but it was assembly code so I couldn't translate everything. Anyway, should it be something like this:

float3x3 tangentSpace = float3x3( tangent.xyz, biNormal.xyz, normal.xyz );

float3 normal = 2 * tex2D( normalMap, uv ).rgb - 1;
// rows are T, B, N, so multiply with the vector on the left
// to take the tangent-space normal into world space
normal = mul( normal, tangentSpace );

Probably there's more, because from what I could read in that assembly, the modelview matrix is used as well. Maybe someone could translate this stuff into Cg/GLSL or something:

# multiply tangent space matrix by current modelview matrix
MUL tempMatrix0, mv[0].y, tangSpaceMat1;
MAD tempMatrix0, mv[0].x, tangSpaceMat0, tempMatrix0;
MAD tempMatrix0, mv[0].z, tangSpaceMat2, tempMatrix0;
MAD tempMatrix0, mv[0].w, tangSpaceMat3, tempMatrix0;

Not sure but
tangSpaceMat0 = ( normal.x, tangent.x, binormal.x, ? )
tangSpaceMat1 = ( normal.y, tangent.y, binormal.y, ? )
tangSpaceMat2 = ( normal.z, tangent.z, binormal.z, ? )
tangSpaceMat3 = ( ?, ?, ?, ? )
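For what it's worth, that MUL/MAD sequence looks like just a row-by-row matrix product: each instruction accumulates one modelview component times one tangent-space matrix row, producing one row of the combined matrix. A rough Python sketch of the same operation (hypothetical matrices):

```python
def mat_mul(a, b):
    """Row-major matrix product; each MAD above accumulates one
    a[i][k] * row_k(b) term into row i of the result."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
tbn = [[0, 0, 1], [0, 1, 0], [1, 0, 0]]   # hypothetical tangent-space rows
combined = mat_mul(identity, tbn)
```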

Thanks!
Rick

Would you need a HDR cubemap normally? I know that sometimes you can use the alpha in a cubemap for something fancy, but I think it would usually be unused. So couldn't you scale every pixel into 0-1, then store the scaling in alpha? Wouldn't be quite as accurate but I'm sure it wouldn't look so different. The only drawbacks seem to be the preprocessing and one extra pixel shader instruction.
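One existing scheme along those lines is RGBE-style encoding: store the RGB normalized into [0, 1] and put the shared scale in the fourth channel. A simplified Python sketch (an assumption about the general idea, not the exact Radiance RGBE format; `max_scale` is a hypothetical range limit):

```python
def encode(r, g, b, max_scale=8.0):
    """Normalize HDR RGB into [0, 1] and store the scale in alpha."""
    scale = min(max(r, g, b, 1.0), max_scale)
    return (r / scale, g / scale, b / scale, scale / max_scale)

def decode(r, g, b, a, max_scale=8.0):
    scale = a * max_scale          # the one extra shader multiply
    return (r * scale, g * scale, b * scale)

hdr = (4.0, 2.0, 0.5)              # hypothetical over-bright pixel
ldr = encode(*hdr)                 # every channel now fits in [0, 1]
```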

Cheers, I'm probably wrong...

Wow, I can't believe they're showing some of the 'secret ingredients' of their game. These papers are really good for learning, thanks!!

