More realistic light effects (Real time)

Started by
11 comments, last by moagstar 18 years, 11 months ago
Hi again, I usually do demos, but the lighting when using one light source is just too harsh and looks unrealistic, and if I try to use more lights it gets messy. So here is my question: is there a way, golden rule, or guideline on "how many lights and where to put the lights" so I can get the most realistic rendering? I read somewhere that positioning the light source on top of the model gives a more realistic rendering because people are used to seeing light from the top (sky). So can anyone point me to more information? By the way, I don't mind googling, just let me know what to look for... :) Also, I'm just looking for real-time stuff, so please avoid ray-tracing or any other slow algorithm. Thanks a lot. :)
---------------------------- ^_^
3 point Lighting
Lighting Tips

These tutorials focus on lighting within a 3D modeler like Maya or 3DS Max, but the principles apply everywhere. If you're into the demoscene, the first one would probably interest you the most.
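For what it's worth, the classic three-point setup translates directly into code. Here's a minimal Python sketch that sums a Lambertian diffuse term over a key, fill and back light; the directions and intensities are made-up illustrative values, not taken from the tutorials:

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert(normal, light_dir, intensity):
    # Diffuse term: N . L, clamped so back-facing lights contribute nothing
    ndotl = sum(n * l for n, l in zip(normal, light_dir))
    return intensity * max(ndotl, 0.0)

# Classic three-point setup; directions point from the surface toward the
# light, and all values here are illustrative guesses.
key  = (normalize((1.0, 1.0, 1.0)), 1.0)    # bright, above and to one side
fill = (normalize((-1.0, 0.2, 1.0)), 0.35)  # dimmer, opposite side, softens shadows
back = (normalize((0.0, 0.5, -1.0)), 0.5)   # behind, separates subject from background

normal = (0.0, 0.0, 1.0)  # surface facing the camera
total = sum(lambert(normal, d, i) for d, i in (key, fill, back))
print(round(total, 3))  # 0.822 -- the back light is clamped out for this normal
```

The point of the fill being much dimmer than the key is exactly the "one light looks harsh, many lights look messy" trade-off: the fill lifts the shadows without flattening the shading.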
// The user formerly known as Tojiro67445, formerly known as Toji [smile]
Quote:
So here is my question: is there a way, golden rule, or guideline on "how many lights and where to put the lights" so I can get the most realistic rendering?

GI (Global Illumination).

When using GI, it essentially doesn't matter anymore where you place the light sources; it will always look 'realistic' (i.e. it will look the way the same real-world scene would look, with the light sources at the equivalent real-world positions). Making it look 'good' (i.e. visually pleasing) is not so much a technical issue as an artistic one.

For outdoor scenes, it is usually a good idea to first create a 'light dome' around your scene (a hemisphere emitting light), ideally using an appropriate HDRI map. In a second pass, you add sunlight using a traditional directional light source. Finally, you apply a GI pass to get the indirect illumination component, including subtle visual cues such as colour bleeding.
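As a rough sketch of how the first two of those passes combine for a diffuse surface (the radiance and intensity values here are invented for illustration):

```python
import math

def sun_term(normal, sun_dir, sun_intensity):
    # Directional sunlight: standard Lambert N . L, clamped at zero
    ndotl = max(sum(n * l for n, l in zip(normal, sun_dir)), 0.0)
    return sun_intensity * ndotl

# For a constant-radiance sky dome with unoccluded visibility, the diffuse
# dome contribution reduces to a flat ambient term (the pi from integrating
# over the hemisphere cancels against the 1/pi of the Lambert BRDF).
dome_ambient = 0.2

normal = (0.0, 1.0, 0.0)                          # ground plane facing up
sun_dir = (0.0, math.sqrt(0.5), math.sqrt(0.5))   # sun 45 degrees above horizon
lit = dome_ambient + sun_term(normal, sun_dir, 1.0)
print(round(lit, 4))  # 0.2 + cos(45 deg) = 0.9071
```

A real HDRI dome of course varies over the hemisphere, so the flat ambient becomes an integral over the map; the indirect GI pass then adds bounced light on top of this direct result.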

As for an appropriate GI algorithm, many different ones exist: radiosity, photon mapping, Metropolis light transport, etc. If you tell us what kind of scenes you are working with, we can try to find an appropriate algorithm for your application.
I'd suggest taking a beginner's photography class. That way you may understand the concepts of illumination. After that you can focus on the technical side.

Luck!
Guimo


Yann L,

Few questions:
Quote:For outdoor scenes, it is usually a good idea to first create a 'light dome' around your scene (a hemisphere emitting light), ideally using an appropriate HDRI map.


What is an HDRI map, how do you generate one, and how do you use it in your shader?

Quote:
Finally, you apply a GI pass to get the indirect illumination component, including subtle visual cues such as colour bleeding.


What exactly do you do on a GI pass?
An HDRI is a High Dynamic Range Image: basically, a picture in a floating-point format (TIFF or EXR) or RGBE that captures values outside the LDR scale of [0,1]. For more information on HDRI from the guy who made it famous, look here:

http://www.debevec.org/Research/HDR/

Using HDR image-based lighting (IBL) you can generate extremely realistic images without any artistic ability whatsoever.
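Since RGBE came up: it stores three 8-bit mantissas with one shared exponent byte, which is what lets a byte-per-channel image hold values way beyond 1.0. A minimal sketch of a Radiance-style encode/decode (simplified, no error handling or mantissa rounding):

```python
import math

def rgbe_encode(r, g, b):
    # Shared-exponent packing: one exponent byte covers all three channels
    v = max(r, g, b)
    if v < 1e-32:
        return (0, 0, 0, 0)
    m, e = math.frexp(v)          # v = m * 2**e, with m in [0.5, 1)
    scale = m * 256.0 / v
    return (int(r * scale), int(g * scale), int(b * scale), e + 128)

def rgbe_decode(r, g, b, e):
    if e == 0:
        return (0.0, 0.0, 0.0)
    f = math.ldexp(1.0, e - (128 + 8))  # undo the bias and the 8-bit mantissa
    return (r * f, g * f, b * f)

# Values above 1.0 survive the round trip, unlike in an 8-bit LDR image:
packed = rgbe_encode(1.0, 0.5, 2.5)
print(rgbe_decode(*packed))   # (1.0, 0.5, 2.5)
```

The cost is that all three channels share one exponent, so precision in the dim channels depends on the brightest one; full float formats like EXR don't have that limitation.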

A "GI pass" normally means calculating diffuse interreflection between surfaces (which is a subset of a full Global Illumination solution). This is commonly done using QMC hemisphere sampling (perhaps in combination with photon mapping), radiosity, or a path-tracing variant. Type any of those into Google for more information.
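To make "hemisphere sampling" concrete, here's the usual building block for a Monte Carlo (or QMC) diffuse gather: a cosine-weighted sample generator via Malley's method. This is a generic sketch, not tied to any particular renderer:

```python
import math, random

def cosine_sample_hemisphere(u1, u2):
    # Malley's method: pick a uniform point on the unit disk, then project
    # it up onto the hemisphere. The resulting pdf is cos(theta)/pi, which
    # importance-samples the cosine term of the diffuse rendering equation.
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi),
            r * math.sin(phi),
            math.sqrt(max(0.0, 1.0 - u1)))

random.seed(42)
samples = [cosine_sample_hemisphere(random.random(), random.random())
           for _ in range(20000)]

# Sanity checks: unit-length directions in the upper hemisphere, and the
# mean z (= mean cos(theta)) converging to its analytic value of 2/3.
assert all(abs(x*x + y*y + z*z - 1.0) < 1e-9 and z >= 0.0 for x, y, z in samples)
mean_z = sum(z for _, _, z in samples) / len(samples)
print(f"mean cos(theta) = {mean_z:.3f}")
```

For QMC you'd feed in a low-discrepancy sequence (Halton, Sobol) instead of `random.random()`, which cuts the noise dramatically for the same sample count.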
Correct me if I'm wrong, but isn't GI extremely slow? I just remember this prototype graphics chip that could do it in realtime with 20 × 1.5 GHz CPUs...
Quote:about 20 fps@36 GHz in 512x512 with 4xFSAA

(quoted from the Quake 3 raytraced homepage: http://graphics.cs.uni-sb.de/~sidapohl/egoshooter/)

regards,
m4gnus
"There are 10 types of people in the world... those who understand binary and those who don't."
playmesumch00ns already answered the questions asked above, so I'll just comment on this one:

Quote:Original post by m4gnus
Correct me if I'm wrong, but isn't GI extremely slow?

It is certainly not fast, but depending on the implementation, it isn't extremely slow either. Essentially, GI is a precomputation pass. You apply it to a certain 3D scene or environment, wait a few hours (or days in an extreme case), then store the result in a form that suits your realtime application. The simplest form is standard lightmaps, and while they give very good visual quality, they create what is commonly called the "museum effect": you can walk around, but you cannot touch anything. Basically, they restrict you to static scenes.

More advanced forms of GI storage allow for more interactive scenes; they're commonly referred to as PRT (precomputed radiance transfer) techniques. Directional lightmaps are one example, spherical harmonics another. Basically, the trick is to store not only the incident light amount (as in static lightmaps), but also the incident directions, i.e. the amounts sampled over a hemisphere above the vertex or texel. SH does this by representing the incoming spherical light as a combination of spherical harmonic basis functions; directional lightmaps store a set of directions explicitly in a compressed format (think a small cubemap per texel). Other alternatives, such as wavelet-based approaches, exist as well.
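To illustrate the SH part: projecting a single directional light onto the first two real SH bands (4 coefficients) and evaluating the reconstruction. This is a toy sketch; real PRT stores per-vertex or per-texel transfer coefficients, not just the light itself:

```python
import math

def sh_basis(d):
    # First two real spherical-harmonic bands; the constants are the
    # standard normalised SH basis values.
    x, y, z = d
    return (0.282095,        # Y_0^0
            0.488603 * y,    # Y_1^{-1}
            0.488603 * z,    # Y_1^0
            0.488603 * x)    # Y_1^1

def project_delta(direction, intensity):
    # Projecting a single directional (delta) light onto the basis
    return tuple(intensity * b for b in sh_basis(direction))

def evaluate(coeffs, direction):
    return sum(c * b for c, b in zip(coeffs, sh_basis(direction)))

light_dir = (0.0, 0.0, 1.0)           # light straight overhead
coeffs = project_delta(light_dir, 1.0)

# Rotating the environment just means evaluating with rotated directions;
# here we compare the reconstruction toward vs. away from the light.
toward = evaluate(coeffs, (0.0, 0.0, 1.0))
away   = evaluate(coeffs, (0.0, 0.0, -1.0))
print(round(toward, 4), round(away, 4))  # 0.3183 -0.1592
```

With only two bands the reconstruction is a very soft blob (and even goes slightly negative opposite the light), which is exactly why low-order SH suits smooth, diffuse skydome lighting rather than sharp shadows.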

These techniques all serve one purpose: they make the GI solution more dynamic, i.e. allow it to be 'reconfigured' in realtime. With SH, for example, it is very easy to rotate a light source, such as the skydome, in realtime. More advanced methods allow for moving objects and realtime soft shadow casting. The newest developments go in the direction of realtime local GI updating, which is basically realtime GI. In the visualization software we currently develop, for example, an architect can move furniture, or even entire walls in a house, and the GI adapts within seconds to the new situation. That's still not 70 fps realtime GI, of course, but it already looks pretty darn good. Of course, you can also use a static GI environment for simplicity, and add traditional direct lighting for the moving objects. In many cases, this will already look quite good. Many current games already use similar systems (e.g. HL2).

So in summary, GI is a family of lighting techniques that can generate completely realistic images. It can be used on static geometry without any problems, and becomes more interactive with the help of PRT. In the future, we will most definitely see much more practical, completely realtime GI solutions.
OK, when hearing "GI" I wasn't thinking of lightmaps or advanced lightmaps (like directional lightmaps), but of the raytracing method like in Maya or in Quake 3 raytraced, and the GI in Maya takes about half an hour to render even a simple scene. Thanks for the info, I always wondered what PRT exactly is...

regards,
m4gnus
"There are 10 types of people in the world... those who understand binary and those who don't."
Here's a stupid question: are you giving OpenGL the normal information for your model? Some lighting on a model requires normals to look good at all.
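On that note, if the model is missing normals, a face normal is just the normalised cross product of two edge vectors. A quick sketch (the helper name is hypothetical, not any OpenGL call):

```python
import math

def face_normal(a, b, c):
    # Cross product of two edge vectors, then normalise; the winding order
    # (counter-clockwise here) decides which way the normal points.
    u = tuple(b[i] - a[i] for i in range(3))
    v = tuple(c[i] - a[i] for i in range(3))
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    length = math.sqrt(sum(x * x for x in n))
    return tuple(x / length for x in n)

# Triangle in the XY plane, counter-clockwise: normal points along +Z
print(face_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # (0.0, 0.0, 1.0)
```

For smooth shading you'd then average the face normals of all triangles sharing a vertex, renormalise, and pass those per-vertex normals to the API.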

