Compared with the specular one, the diffuse area lighting is relatively simple. It's view-independent
That depends if you're using Lambert BRDF or not :wink:
Does the linearly transformed cosine basis fit more complex functions than uniform polygons? Maybe it could be used for environment-map specular integration
Yeah, in the paper they extend it to include a good approximation for textured polygons.
However, they split the calculations between light intensity (which receives the Blinn-esque skewing at glancing angles) and light colour (which doesn't)... So unfortunately, while this does reproduce the shape of Blinn/GGX/etc. BRDFs quite well in general, that accurate reproduction doesn't extend to the way that the texture colours are blurred.
I have been reading about the various reflection techniques in the Frostbite paper (planar, SSR, local cube map, global cube map) and just when I think I have everything sorted out I see this:
...
So, to my eyes that looks just like some of the reflection techniques I am studying (especially distance-based roughness in the FB paper). Does the usage of area lights remove the need for the reflection techniques I mentioned above? Or are the authors just using these additional techniques and not mentioning it?
At the heart of the rendering equation is the BRDF. This is a function for each surface/pixel/etc, which defines how it reflects light.
Light sources emit light, light bumps into surfaces, the BRDF then says how much is absorbed, how much is reflected, what direction(s) it's reflected, and what kind of colouration occurs.
Typically in games we make our BRDFs by glueing together the Lambert diffuse BRDF and the Blinn-Phong specular BRDF, to create a function that captures a range of surfaces from rough to smooth, and dull to shiny.
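To make that concrete, here's a minimal sketch of that kind of glued-together BRDF in Python. The function name and parameters are just for illustration -- it's the standard energy-conserving Lambert term plus a normalized Blinn-Phong lobe, not any particular engine's implementation:

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def brdf(n, l, v, albedo, specular, shininess):
    """Lambert diffuse + normalized Blinn-Phong specular.
    n, l, v: unit normal, light and view directions (surface -> light/eye).
    albedo: RGB diffuse colour; specular: scalar reflectance; shininess:
    Blinn-Phong exponent (low = rough, high = smooth)."""
    n_dot_l = max(dot(n, l), 0.0)
    if n_dot_l <= 0.0:
        return (0.0, 0.0, 0.0)  # light is behind the surface
    h = normalize(tuple(a + b for a, b in zip(l, v)))  # half-vector
    n_dot_h = max(dot(n, h), 0.0)
    diffuse = 1.0 / math.pi  # energy-conserving Lambert
    # (shininess + 8) / 8pi is the usual Blinn-Phong normalization factor
    spec = (shininess + 8.0) / (8.0 * math.pi) * n_dot_h ** shininess
    return tuple(albedo[i] * diffuse + specular * spec for i in range(3))
```

Sliding `shininess` and `specular` around is what gets you the rough-to-smooth, dull-to-shiny range.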
When we calculate the reflections that are caused by photons that have come directly from a light source (light->surface->eye), we call it direct lighting. For these, we calculate the irradiance/illuminance -- the amount of light arriving at the surface from the light source -- plug that value into the BRDF along with the eye direction, and we calculate the radiance towards the eye.
We can do direct lighting for point lights, spot lights, directional lights, etc -- and recently good techniques have been invented for computing direct lighting from area lights quickly too.
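A quick sketch of that pipeline for a single point light (hypothetical function, Lambert-only for brevity -- the full BRDF from above would slot in where `albedo / pi` appears):

```python
import math

def direct_lighting(p, n, eye, light_pos, light_intensity, albedo):
    """Radiance toward the eye from one point light.
    Irradiance = intensity * NdotL / distance^2 (inverse-square falloff),
    then the BRDF turns irradiance into radiance. Lambert ignores `eye`,
    but a specular BRDF would use it."""
    to_light = tuple(l - q for l, q in zip(light_pos, p))
    dist2 = sum(c * c for c in to_light)
    l_dir = tuple(c / math.sqrt(dist2) for c in to_light)
    n_dot_l = max(sum(a * b for a, b in zip(n, l_dir)), 0.0)
    irradiance = light_intensity * n_dot_l / dist2  # light arriving at p
    return tuple(a / math.pi * irradiance for a in albedo)  # Lambert BRDF
```

Spot/directional lights just change how the irradiance is computed; area lights (e.g. via LTC) change how the BRDF is integrated over the light's shape.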
With only direct lighting calculations, shadows are too dark and bumps in a surface have too much contrast. Everything looks very fake.
Traditionally in realtime graphics, we use a hacky solution of an "ambient light", which just adds some constant lighting everywhere to fill out the shadows and stop them from being black.
What we really want though is to calculate the indirect lighting -- this is the photons that have been emitted from a light source, then have bounced off multiple surfaces before reaching the eye (light->surface->...->surface->eye). This is where the other techniques that you've been researching come in. Planar reflections, cube-map reflections (a.k.a. environment maps, or nowadays: IBL) and SSR are all ways to capture a second bounce of light. Using these techniques we first compute light->surface->eye as usual, but we store the results into this environment map. Later on, we can compute environment->surface->eye, which gives us a good approximation of light->surface->surface->eye.
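The usual trick for making that environment->surface->eye fetch roughness-aware is to pre-blur the environment map into mip levels and pick a fractional level from the surface roughness. A toy sketch (made-up function name; real engines do this per-texel on a cube map, here each "mip" is just one averaged colour):

```python
def sample_prefiltered_env(mips, roughness):
    """mips: list of RGB colours, mips[0] = sharpest (mirror) level,
    mips[-1] = fully blurred. roughness in [0, 1] selects a fractional
    level; we linearly interpolate between the two nearest mips, which
    is what hardware trilinear filtering does for you on a real cube map."""
    level = roughness * (len(mips) - 1)
    lo = int(level)
    hi = min(lo + 1, len(mips) - 1)
    t = level - lo
    return tuple((1 - t) * a + t * b for a, b in zip(mips[lo], mips[hi]))
```

This is also the basic idea behind the distance-based-roughness tweak in the Frostbite paper: they additionally skew which level gets sampled based on how far the reflection ray travels.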
Often these techniques are tied to a particular kind of BRDF -- e.g. planar reflections work best for perfect mirrors, or the Phong specular BRDF. Only recently have decent approximations for the Blinn-Phong BRDF been invented for use with cube-map reflections, etc...