[SOLVED] Creating and using IBL

Introduction: Currently, I am trying to add Image-Based Lighting to my asset and rendering pipeline. So far, I think I understand the basic concept behind generating IBL data:
  1. Take some kind of input image data (cubemap, spheremap, etc)
  2. Derive normals for each input pixel.
  3. For a given output pixel, dot its normal with the normals of all input pixels to get their weights.
  4. Add weighted input pixel to total color, add weight to total intensity.
  5. When done, divide total color by total intensity.
  6. Output result to storage method (cubemap, spheremap, SH, etc); a brute-force sketch of these steps appears just below.
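In C++-ish pseudocode, the brute-force version I have in mind looks something like this (Vec3, inputDirs and inputColors are just placeholder names; the input is assumed to be pre-expanded into one unit direction and one radiance value per texel):

#include <algorithm>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Steps 3-5: cosine-weight every input texel against one output normal,
// accumulate, then divide by the total weight.
// Assumes at least one front-facing input texel, so wSum > 0.
Vec3 convolveDiffuse(Vec3 outNormal,
                     const std::vector<Vec3>& inputDirs,    // unit normal per input texel
                     const std::vector<Vec3>& inputColors)  // radiance per input texel
{
    float r = 0, g = 0, b = 0, wSum = 0;
    for (std::size_t i = 0; i < inputDirs.size(); ++i) {
        float w = std::max(0.0f, dot(outNormal, inputDirs[i])); // clamp back-facing texels to zero
        r += inputColors[i].x * w;
        g += inputColors[i].y * w;
        b += inputColors[i].z * w;
        wSum += w;
    }
    return Vec3{ r / wSum, g / wSum, b / wSum }; // step 5: normalise by total intensity
}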
However, there are still some things I don't quite get. Generation:
  1. What is the "solid angle", and how is it relevant? How do I derive it?
  2. In the case of convolving a cubemap/spheremap, how do I derive the normals for each pixel?
  3. How do I generate convolved image data for different specular powers?
Usage: My initial tests have shown massive visual differences when using specular IBL. Now, models receive specular lighting across their whole surface, rather than having isolated little highlights. So the specular lighting ends up dominating the diffuse lighting.
  1. How should I change the way I handle specular IBL to make it look good?
  2. Should I modify my specular color/intensity textures?
  3. Should I start treating specular IBL like I would reflections?
Thanks for any help you can provide. EDIT: Removed my question about normalizing values. Adjusted my question about getting specular powers. [Edited by - n00body on April 5, 2010 11:22:32 AM]


First of all, you should think of your light-source image as an environment map, such as you might use for reflection mapping. IBL is, after all, just a diffused version of a reflection map, and a reflection map is functionally the same as a dense cloud of point lights.

Quote:
What is the "solid angle", and how is it relevant? How do I derive it?


The 'solid angle' is a measure of the angular area that any region or feature of your environment map subtends, once it's mapped onto the unit sphere. It's usually measured in steradians (sr), which are dimensionless, like radians. Some examples: the solid angle of a whole sphere is 4pi sr, a hemisphere such as the visible sky is 2pi sr, and the moon is about 0.00006 sr (about a hundred-thousandth of the sky).

If you're sampling your map by brute force or Monte Carlo integration with uniformly distributed directions, you don't really need to think much about this, as every sample will subtend the same infinitesimal solid angle. By the same token, you usually don't need to think about area when you're sampling a regular 2D texture.
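That said, if you integrate by looping over cubemap texels directly, corner texels cover less of the sphere than face-centre texels, so you'd weight each sample by its solid angle. A sketch of the standard per-texel weight, assuming (u, v) spans [-1, 1] across an N x N face at unit distance (texelSolidAngle is a made-up name):

#include <cmath>

// Approximate solid angle subtended by texel (i, j) of an N x N cubemap face:
// d(omega) = dA * cos(theta) / r^2, with r^2 = u^2 + v^2 + 1 and cos(theta) = 1/r.
float texelSolidAngle(int i, int j, int n)
{
    float u = 2.0f * (i + 0.5f) / n - 1.0f; // texel centre on the unit cube face
    float v = 2.0f * (j + 0.5f) / n - 1.0f;
    float texelArea = (2.0f / n) * (2.0f / n);
    return texelArea / std::pow(u * u + v * v + 1.0f, 1.5f);
}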


Quote:
In the case of a cubemap/spheremap, how do I derive the normals for each pixel?


The same way as if you're sampling a reflection map - except that you don't need to consider the view vector or the reflection vector. You already have a 3D unit vector to sample a 2D environment map, right? That vector is the normal you're looking for.

Quote:If the basic approach generates data for a Phong exponent of 1, how do I generate data for different specular powers?


Nope - the basic approach generates data for a Lambertian shading model (or the Lambertian term of a Phong model). Lambertian shading is view-independent; that's why it's possible to bake image-based diffuse lighting in the first place. The Phong specular term is always view-dependent, and that's not what you're baking.

For very high powers (e.g. 100), the original environment map (sampled via a reflection vector) is all the data you need for the specular term. For lower powers, you can just supersample the environment map as though you were raytracing 'glossy reflections' - jittering the reflection vector - but without testing rays against geometry. You could treat each sample as a Phong (specular) light source, or you could use a ready-made Poisson distribution of sample offsets and weights.
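Something like this, roughly, reusing Vec3 and dot from the sketch in the first post (convolvePhong and specPower are made-up names) - it's the diffuse convolution again, except the weight is the Phong lobe around the reflection vector instead of the cosine lobe around the normal:

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Phong-lobe convolution for one output reflection direction R.
// Assumes at least one sample with a positive weight, so wSum > 0.
Vec3 convolvePhong(Vec3 r, float specPower,
                   const std::vector<Vec3>& dirs, const std::vector<Vec3>& colors)
{
    float cr = 0, cg = 0, cb = 0, wSum = 0;
    for (std::size_t i = 0; i < dirs.size(); ++i) {
        float w = std::pow(std::max(0.0f, dot(r, dirs[i])), specPower);
        cr += colors[i].x * w;
        cg += colors[i].y * w;
        cb += colors[i].z * w;
        wSum += w;
    }
    return Vec3{ cr / wSum, cg / wSum, cb / wSum };
}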

If realtime performance is critical, you can simply apply a gaussian blur to your environment map (in spherical space, if you can) and then take fewer samples - or even a single sample of a heavily-blurred map.

There's no physically-correct way of doing this. Phong is, at best, a gross approximation of the real effect. What you'll find, though, is that there's a whole continuum of strategies between reflection mapping and diffuse IBL, that differ only in how & when samples are integrated.

Quote:How do I normalize the final values to the range [0,1] to minimize banding in low-precision storage? Do I just find the brightest value and divide every other value by it?


To avoid having to scale or clamp your values, given that you're probably not using your storage format's alpha channel, you could store a multiplier or exponent in each texel's alpha, such as in the Radiance RGBE format. But really, unless you're targeting old-fashioned hardware or software libraries, you're better off using a higher-precision storage format like half-float OpenEXR. This is, after all, the year 2000 or something.
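For illustration, a minimal shared-exponent encoder along the lines of the Radiance scheme (encodeRGBE is a made-up name):

#include <cmath>
#include <cstdint>

// Encode linear RGB into RGBE: the fourth byte stores a biased exponent
// that applies to all three channels, extending the representable range.
void encodeRGBE(float r, float g, float b, std::uint8_t out[4])
{
    float maxc = std::fmax(r, std::fmax(g, b));
    if (maxc < 1e-32f) { out[0] = out[1] = out[2] = out[3] = 0; return; }
    int e;
    float scale = std::frexp(maxc, &e) * 256.0f / maxc; // maxc = m * 2^e, m in [0.5, 1)
    out[0] = static_cast<std::uint8_t>(r * scale);
    out[1] = static_cast<std::uint8_t>(g * scale);
    out[2] = static_cast<std::uint8_t>(b * scale);
    out[3] = static_cast<std::uint8_t>(e + 128);        // biased exponent
}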

Quote: So, the specular lighting ends up dominating the diffuse lighting.
How should I change the way I handle specular IBL to make it look good?


For anything metallic, you only really need specular lighting. Tint it if you're rendering a coloured metal like brass or copper. Supersampling is your best bet if you need effects like anisotropic reflections, such as on brushed or lathed metal.

For anything smooth and non-metallic (plastic, marble, skin etc.), you need a diffuse sample and a specular sample, and then you blend between them with Fresnel's equation for reflection. This will realistically accentuate reflections at glancing angles. For transparent or translucent materials, replace the diffuse term with the refraction/translucency term. By all means tint & texture the diffuse sample, but don't tint the specular sample - leave its chrominance unmodified - just mask it where necessary.

That's enough to cover 99% of all the materials you see around you. For realism, trust the Fresnel equation but keep its IOR between about 1.3 and 1.9, and whenever you're estimating a diffuse value or colour, halve it. For example, the 'whites' of your eyes are actually 50% grey, but they exhibit full Fresnel reflection with an IOR of 1.33.
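If you don't want to evaluate the full Fresnel equations, Schlick's approximation is the usual stand-in. A sketch (fresnelSchlick is a made-up name; cosTheta is dot(normal, view), assumed non-negative):

#include <cmath>

// Schlick's approximation to dielectric Fresnel reflectance.
// ior is the index of refraction (~1.3 to 1.9 for common non-metals).
float fresnelSchlick(float cosTheta, float ior)
{
    float f0 = (ior - 1.0f) / (ior + 1.0f);
    f0 *= f0;                                   // reflectance at normal incidence
    return f0 + (1.0f - f0) * std::pow(1.0f - cosTheta, 5.0f);
}
// Blend: result = lerp(diffuseSample, specularSample, fresnelSchlick(cosTheta, ior))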
Maybe I should have mentioned a few more details about the implementation I am considering, and how I plan to use it.

I am considering using an implementation similar to CryEngine3's, where I store convolved images for different specular powers in the mipmaps of a cubemap. This way I can interpolate between them during a look-up, and avoid having to store every specular power. As I understand it, I could use the world-space normal to sample the specular-exponent-1 image for diffuse lighting. Then, specular would be sampled via a standard world-space reflection vector, using the specular power to look up the correct mipmap LOD.
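For the LOD lookup, I'm imagining a mapping something like this (a sketch only; lodFromSpecPower, maxPower and mipCount are placeholder names, and it assumes mip 0 was convolved for the highest exponent with each successive level halving it):

#include <algorithm>
#include <cmath>

// e.g. maxPower = 128: power 128 -> LOD 0, 64 -> 1, 32 -> 2, ...
float lodFromSpecPower(float specPower, float maxPower, float mipCount)
{
    float lod = std::log2(maxPower / specPower);
    return std::clamp(lod, 0.0f, mipCount - 1.0f); // std::clamp needs C++17
}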

As with their implementation, I will be using these IBL cubemaps as a type of light in my Deferred Lighting renderer, so the specular reflections get stored together in the specular lighting buffer. This is how I found out that my test model's specular color maps weren't quite right for this data, since the specular is no longer a bunch of little highlights.

@Helicobster:
Thanks for the information, but I still need some clarification.

When I asked about finding the normal, I meant from the XY pixel coordinate in a spheremap, or the face + XY of a cubemap. I am asking for the sake of generating an IBL cubemap in software.

As for the "Phong exponent of 1" part, maybe I should have said that I had heard it was effectively equivalent to Lambertian. So I had heard it could be used for diffuse lighting.


If you haven't checked out ATI's CubeMapGen already, please do.
I think there are papers and stuff about their algorithms / implementations there as well.

My 2c.
Quote:...my test model's specular color maps weren't quite right for this data, since it is no longer a bunch of little highlights.


What do you mean by that? The diffuse lighting maps should've been generated from the same image as the specular reflection map(s).

Or are you talking about a specular colour map that's mapped directly onto the surface, like a partly-shiny decal?


Quote:When I asked about finding the normal, I meant from the XY pixel coordinate in a spheremap, or the face + XY of a cubemap. I am asking for the sake of generating an IBL cubemap in software.


Oh, right. You want the inverse of the transform that turns a 3D vector into a 2D environment map coordinate.

Typically, for getting a 3D unit vector from spherical coordinates, with U (azimuth) and V (inclination) both in [0, 1]...
X = cos(U * 2pi) * sin(V * pi)
Y = sin(U * 2pi) * sin(V * pi)
Z = cos(V * pi)
...but you might want to change some signs or add 0.5 or swap Y with Z, depending on your implementation.

To map the texels of a cube map into unit vectors, you only have to remember that it's a cube, and so it will map linearly onto a unit-sized box, using only basic arithmetic - no sqrts or sins or atans necessary. You can normalise the vectors later.
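A sketch of that (cubeTexelToDir is a made-up name; the face order and signs follow the usual +X,-X,+Y,-Y,+Z,-Z convention, but check them against your API's handedness):

#include <cmath>

// Direction for texel (i, j) on face 'face' of an N x N cubemap.
void cubeTexelToDir(int face, int i, int j, int n, float out[3])
{
    float u = 2.0f * (i + 0.5f) / n - 1.0f;     // [-1, 1] across the face
    float v = 2.0f * (j + 0.5f) / n - 1.0f;
    float x, y, z;
    switch (face) {
        case 0:  x =  1; y = -v; z = -u; break; // +X
        case 1:  x = -1; y = -v; z =  u; break; // -X
        case 2:  x =  u; y =  1; z =  v; break; // +Y
        case 3:  x =  u; y = -1; z = -v; break; // -Y
        case 4:  x =  u; y = -v; z =  1; break; // +Z
        default: x = -u; y = -v; z = -1; break; // -Z
    }
    float len = std::sqrt(x * x + y * y + z * z); // normalise afterwards, as above
    out[0] = x / len; out[1] = y / len; out[2] = z / len;
}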

Actually, generating a cube-map should be easy. Just render some geometry etc. from each of the six cardinal directions and store the result in six regions of one map.

Quote:
As for the "Phong exponent of 1" part, maybe I should have said that I had heard it was effectively equivalent to Lambertian. So I had heard it could be used for diffuse lighting.


It's 'effectively equivalent' to Lambertian the same way as carob-based chocolate substitute is 'effectively equivalent' to chocolate. Superficially similar, but kinda horrid and generally regrettable.

And that's assuming you have a reflection vector, which you'd still need for exponent-1 Phong speculars. When baking view-independent lighting, you don't have a view vector at all, so you'd have to fall back on either a substitute vector or a different shading model.

Instead, you could just store the Lambertian diffuse in a low-res level of your MIP-map, and skip all the Phong exponents below 20-ish, storing only - say - Phong-100, Phong-50, Phong-25 and Lambert.

For non-metal materials, you'd nearly always sample the Lambertian, and optionally one or more specular levels. Interpolating between any two or more MIP-map levels should look pretty good.

Phong highlights with exponents below 20-ish are good for nothing anyway. In the real world, diffuse and specular really are separate effects - specular reflection happens immediately at the surface, diffuse reflection happens below the surface - and there isn't really anything in-between.



For generating MIP-map levels of intermediate specular powers, plain old blurring is probably your best bet. Repeatedly blurring an image with a simple 3x3 kernel can approximate a gaussian blur, and for simplicity, each level in your MIP-map can be binned to half size and then blurred from the last level. It won't look exactly like Phong shading, but Phong shading is nothing to aspire to.
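A sketch of one such blur pass (single channel for brevity; blur3x3 is a made-up name, it clamps at edges, and it ignores the cross-face filtering that tools like CubeMapGen handle properly):

#include <algorithm>
#include <vector>

// One pass of a 3x3 box blur over a W x H image, clamping at the borders.
// Repeating this a few times per MIP level approximates a gaussian.
void blur3x3(std::vector<float>& img, int w, int h)
{
    std::vector<float> src = img; // read from a copy, write in place
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    int sx = std::clamp(x + dx, 0, w - 1);
                    int sy = std::clamp(y + dy, 0, h - 1);
                    sum += src[sy * w + sx];
                }
            img[y * w + x] = sum / 9.0f; // box weights; close enough when repeated
        }
}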
Sorry for the late reply.

@eq:
I knew about the tool, but I didn't know about the papers. I will look into that when I get the chance. Thanks for the help.

@Helicobster:
Quote:Original post by Helicobster
Or are you talking about a specular colour map that's mapped directly onto the surface, like a partly-shiny decal?

Yes, I was referring to the specular color map on the object's surface. After further observation, I am thinking that extensive use of IBL in my pipeline will force me to treat specular more like reflections. This just means that artists using my tech will have to change the way they author specular color maps to accommodate this difference.

Quote:Original post by Helicobster
It's 'effectively equivalent' to Lambertian the same way as carob-based chocolate substitute is 'effectively equivalent' to chocolate.

Apparently I misread that part of the document I have been studying. According to it, "the Lambert diffuse lobe has the same shape as the normalized Phong specular lobe of exponent 1". So they were talking about Normalized Phong, which is apparently a whole different animal than regular Phong. Yet another thing to research.

Thanks for all the help guys, I think that just about wraps up my questions.


Quote: So they were talking about Normalized Phong, which is apparently a whole different animal than regular Phong. Yet another thing to research.


I don't think they meant "normalised Phong specular lobe", but rather "a normalised specular lobe" from the Phong model.

While they are the same shape, they point in different directions. Specular intensity uses dot(reflection, lighting) and diffuse uses dot(normal, lighting). So while some of the methods pertaining to low specular powers may be interchangeable with methods pertaining to diffuse shading, when rendering, you can't really substitute one for the other, because they require different inputs and will look substantially different.
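To make the difference in inputs concrete (phongLobe and lambertLobe are made-up names; all vectors are unit length):

#include <algorithm>
#include <cmath>

// Phong peaks around the reflection vector R...
float phongLobe(const float r[3], const float l[3], float n)
{
    float d = std::max(0.0f, r[0] * l[0] + r[1] * l[1] + r[2] * l[2]);
    return std::pow(d, n);        // dot(reflection, lighting) ^ exponent
}

// ...while Lambert peaks around the surface normal N.
float lambertLobe(const float nrm[3], const float l[3])
{
    return std::max(0.0f, nrm[0] * l[0] + nrm[1] * l[1] + nrm[2] * l[2]); // dot(normal, lighting)
}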
More resources:
Ramamoorthi's page on irradiance environment maps
HDRShop, which can do irradiance map generation/manipulation (useful to test your algorithms against a reference)
Thanks for the tip. I'll be sure to investigate it.


