kbundy

How do games render lights and shadows?



I am wondering: how do games, or real-time systems in general, render lights and cast shadows? I know that in ray tracing we cast a ray toward each object to see which objects are lit and which are in shadow, but that seems very inefficient. Do they actually compute the lighting for static objects during the game's loading time by casting rays, or something like that?


Lighting and shadowing is too huge a topic to be answered lightly.

There are several methods for lighting, but raytracing is usually not efficient enough for real-time applications.

Raytracing can be used off-line, for example to calculate static lighting for a level. Calculating raytraced lighting takes a relatively long time and is usually done in the level editor, so the actual map data already contains the lighting information.
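To illustrate why this is done off-line: for every surface sample and every light, the baker casts a shadow ray and checks for occluders between the sample and the light. Here is a minimal sketch in plain Python (spheres only; all names are made up for illustration, not any engine's actual API):

```python
import math

def ray_sphere_t(origin, direction, center, radius):
    """Smallest positive t where origin + t*direction hits the sphere, or None."""
    o = [origin[i] - center[i] for i in range(3)]
    b = 2.0 * sum(o[i] * direction[i] for i in range(3))
    c = sum(x * x for x in o) - radius * radius
    disc = b * b - 4.0 * c  # direction assumed unit length, so a == 1
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None  # ignore hits at/behind the origin

def in_shadow(point, light_pos, occluders):
    """One shadow ray per light: any hit closer than the light => shadowed."""
    d = [light_pos[i] - point[i] for i in range(3)]
    dist = math.sqrt(sum(x * x for x in d))
    d = [x / dist for x in d]
    for center, radius in occluders:
        t = ray_sphere_t(point, d, center, radius)
        if t is not None and t < dist:
            return True
    return False
```

Multiply this test by every texel of every surface and every light and it's clear why it belongs in the editor bake, not the frame loop.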

So, in game engines, realtime lighting and shadowing are usually two different things. When I say shadowing, I mean cast shadows, such as realtime shadow maps or stencil shadow volumes. Shadows can be implemented as a simple subtractive method (just as lighting is additive).

Cheers!

Quote:
Original post by kbundy
I have heard about light maps and shadow maps, but I'm not sure what those are.


Those are usually textures applied to the objects that represent what the lights/shadows contribute to the scene. In other words, they are precalculated lighting data.
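As a rough sketch of how such a lightmap is applied at render time, assuming the simplest scheme (base colour modulated by the precomputed light texel): the tiny Python grids below stand in for textures, and the function name is illustrative, not any real API.

```python
def apply_lightmap(albedo, lightmap, u, v):
    """Modulate a surface's base colour by the precomputed lightmap texel.

    albedo and lightmap are 2D grids of (r, g, b) tuples standing in for
    textures; u, v are texel coordinates. Real engines do this per pixel
    in the texture units, often with a second UV set for the lightmap.
    """
    base = albedo[v][u]
    light = lightmap[v][u]  # 1.0 = fully lit, 0.0 = fully in shadow
    return tuple(b * l for b, l in zip(base, light))
```

A half-lit texel simply darkens the base colour by half; all the expensive work happened when the lightmap was baked.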

Bump maps. Those are textures specially made to express where the highlights are. This is used a lot in, e.g., Oblivion (at least, that's what it looks like).

~ Stenny

Quote:
Original post by stenny
Bump maps. Those are textures specially made to express where the highlights are.


In simple words, bump maps are textures used to calculate the normals for the lighting calculations, allowing you to create "artificial" bumps on a plane. This creates the illusion that the scene has more polygons than it actually has. As far as I understand the lighting equations, it seems to influence mostly the specular lighting (or highlights).
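A minimal sketch of how a perturbed normal feeds the lighting equation (plain Python; Lambert diffuse plus Blinn-Phong specular is assumed here, and all names are illustrative):

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(normal, light_dir, view_dir, shininess=32.0):
    """Lambert diffuse + Blinn-Phong specular for one (possibly bumped) normal.

    A bump/normal map supplies `normal` per texel instead of the flat
    geometric normal, so both terms vary across a flat polygon.
    """
    n = normalize(normal)
    l = normalize(light_dir)
    diffuse = max(dot(n, l), 0.0)
    h = normalize(tuple(a + b for a, b in zip(l, normalize(view_dir))))
    specular = max(dot(n, h), 0.0) ** shininess
    return diffuse, specular
```

Note that tilting the normal changes the diffuse term too, not only the highlight, which is why bump mapping affects the whole lit appearance of the surface.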

JFF

Quote:
Original post by kbundy
I am wondering: how do games, or real-time systems in general, render lights and cast shadows? I know that in ray tracing we cast a ray toward each object to see which objects are lit and which are in shadow, but that seems very inefficient. Do they actually compute the lighting for static objects during the game's loading time by casting rays, or something like that?



Real-time ray tracing is becoming possible on today's hardware. kd-trees are very often used to accelerate tracing rays through the scene.

Shadowing is a huge, open topic, but today the primary methods for real-time rasterizers are shadow mapping and shadow volumes. The main idea in both is that, for each light, surfaces are lit using whatever lighting equation, excluding those areas that are in shadow for that particular light. The shadowing methods do exactly that: determine which pixels are in shadow and which are not.

1) Shadow mapping: The scene is rendered from the light's point of view and the depth information is stored in a texture. Then, when the scene is rendered normally from the viewer's position, every pixel's coordinates are mapped from eye space to light space, and a depth test is performed against the texture. This way we can determine whether a certain pixel is "visible" from the light's position or not.
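The depth comparison at the heart of shadow mapping can be sketched in a few lines (a toy Python stand-in, not any real graphics API; the small bias term is a common trick to avoid self-shadowing "acne", assumed here rather than taken from the post):

```python
def build_shadow_map(depths_from_light):
    """In a real renderer this is a depth-only render pass from the light."""
    return depths_from_light  # 2D grid: closest depth per light-space texel

def is_lit(shadow_map, texel_x, texel_y, pixel_depth_in_light_space, bias=0.005):
    """The pixel is lit iff nothing sits closer to the light in its texel."""
    stored = shadow_map[texel_y][texel_x]
    return pixel_depth_in_light_space <= stored + bias
```

A pixel whose light-space depth matches the stored depth is the surface the light "sees" and is lit; a pixel farther away in the same texel is behind an occluder and falls in shadow.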

2) Shadow volumes: First, we calculate the "silhouette" of the shadow caster as seen from the point light. Then, we extrude the shadow volume from it. What we need to do is determine which pixels of the scene are enclosed in the volume (i.e. in shadow) and which are not. Normally this is done by rendering the shadow volume as a normal mesh in conjunction with special operations on a buffer called the "stencil buffer". When all is done, the value of every pixel in the stencil buffer tells us whether that pixel is in shadow or not.
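The stencil counting idea can be sketched as follows (depth-pass variant, plain Python; a real implementation does this per pixel in the stencil hardware, and the data layout here is invented for illustration):

```python
def stencil_value(volume_faces, pixel_depth):
    """Depth-pass stencil counting for one pixel.

    volume_faces: (depth, facing) pairs for shadow-volume faces covering
    this pixel, where facing is 'front' or 'back'. Front faces that pass
    the depth test increment the stencil, back faces decrement it.
    """
    stencil = 0
    for depth, facing in volume_faces:
        if depth < pixel_depth:  # face is in front of the pixel: depth test passes
            stencil += 1 if facing == 'front' else -1
    return stencil

def pixel_in_shadow(volume_faces, pixel_depth):
    """Nonzero stencil means the pixel lies inside a shadow volume."""
    return stencil_value(volume_faces, pixel_depth) != 0
```

A pixel between a volume's front and back faces ends with a nonzero count (only the front face passed the depth test); a pixel in front of or behind the whole volume ends at zero and stays lit.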

There are hundreds of articles on the net for each of those methods, so just use Google if you want to learn more. Each one has its pros and cons, but I think shadow mapping is more prominent for now, as it's done natively on the GPU (shadow volumes require the CPU, although many calculations can be moved to the GPU as well), its performance cost is more linear, and "soft" shadows are easier to implement with shadow mapping.

There are other methods (for example, Half-Life 2 used lightmapping for static geometry and some kind of projective texturing for dynamic objects, IIRC), but their quality is not as good.

Lightmapping is unrelated to shadow mapping. With lightmapping, we just precalculate the amount of lighting on the surfaces and store it in texture(s). Since the calculation is done off-line, it can implement any shading algorithm and shadows, but only for static lights against static geometry. The two previous methods are for dynamic entities.

As for bumpmapping... I don't even know why it was mentioned here.
