"Next-gen" lighting systems


Hi,

It's been a while since I did graphics programming, so I wonder if I'm still up-to-date. I want to pick up my hobby game project again, but I wonder if I should stick with my current lighting system, "radiosity normalMapping". That's a lightmapping technique that allows (fake) normalMapping by calculating 3 lightmaps: each "pixel" on the lightmap is measured from 3 directions instead of 1. In other words, the technique Halflife2 uses as well.

However, adding anything dynamic is a well-known problem for lightMaps. The normalMapping itself isn't always very impressive either (and there's no support for specular like you can do with Phong or something), and I have some problems with the detail. It takes 6(!) hours to create a 1024x682 HDR lightMap (the 3 together are 1024x2048), and I still don't have that much detail. I know it can be done faster and better, but maybe it's time to switch over to a dynamic approach.

I've seen some "next-gen" games like Bioshock. I don't know what kind of lighting system they use, but it looks damn good. I think it's dynamic, but it also has soft shadows, not those sharp-edged shadows from Quake4 or F.E.A.R. (which did soft shadows, but they still looked too sharp). The shadows in Bioshock are way more smooth or "smudged", which looks great: http://media.insidegamer.nl/screenshots/public/8201/75701.jpg

So, what kind of techniques do these games use to get that? Like I said, I've been living under a rock for almost a year, so I might have missed some "hot new technologies" :)

Greetings,
Rick
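To make the "3 lightmaps, 3 directions" idea concrete, here is a minimal CPU-side sketch of the run-time part, assuming the commonly cited Half-Life 2 tangent-space basis vectors and a simple normalized-weight blend (Valve's actual shader differs in its details):

```python
import math

# Three orthonormal tangent-space directions, one per lightmap (the commonly cited HL2 basis).
BASIS = [
    (-1 / math.sqrt(6),  1 / math.sqrt(2), 1 / math.sqrt(3)),
    (-1 / math.sqrt(6), -1 / math.sqrt(2), 1 / math.sqrt(3)),
    ( math.sqrt(2.0 / 3.0), 0.0,           1 / math.sqrt(3)),
]

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def radiosity_normal_map(lightmaps, normal):
    """lightmaps: three RGB samples for this texel, one per basis direction.
    normal: the tangent-space normal read from the normal map."""
    weights = [max(dot(normal, b), 0.0) for b in BASIS]   # ignore back-facing basis directions
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]                # normalize so the weights sum to 1
    return tuple(sum(w * lm[c] for w, lm in zip(weights, lightmaps)) for c in range(3))

if __name__ == "__main__":
    lm = [(1.2, 0.4, 0.1), (0.3, 0.3, 0.3), (0.6, 0.6, 0.9)]   # HDR samples may exceed 1.0
    print(radiosity_normal_map(lm, (0.0, 0.0, 1.0)))           # a flat normal gives an even-ish mix
```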

From what I've seen of BioShock, the shadows are shadowmaps with some nice filtering to give a softer feel along with some low-res static lightmaps for general background/ambient lighting. Quite how they manage to do all that plus their insane amount of post processing amazes me though.

Ok, but how do games like these perform normalMapping? The same way as HL2, or something else? Or would they use both, something like:
- basic lightMapping / decal texture pass
- normal / specular mapping pass for nearby light 1
- normal / specular mapping pass for nearby light 2
- normal / specular mapping pass for nearby light n

? I don't know if it's a good idea to mix realtime lighting with pre-calculated lighting though... But the normalMapping effect is obviously there. In HL2 you see much less of it, because it's "fake". As far as I know you need realtime lighting for the normalMapping effect, since you need to know light positions/colors/ranges.

As for the shadows...
http://www.sme.sk/cdata/1780250/bioshock_02_s_b.jpg
Maybe it's me, but I can't get these detailed shadows in a lightMap. Or would all objects (like that glass tube) use shadow maps like you mentioned?

-edit
It seems Bioshock is using the Unreal 3 engine:
http://www.unrealtechnology.com/html/technology/ue30.shtml
What exactly does this mean:

Ultra high quality and high performance pre-computed shadow masks allow
offline processing of static light interactions, while retaining fully dynamic
specular lighting and reflections.

Directional Light Mapping enables the static shadowing and diffuse
normal-mapped lighting of an unlimited number of lights to be precomputed
and stored into a single set of texture maps, enabling very large light
counts in high-performance scenes.

I'm especially interested in the way they combine dynamic with pre-calculated lighting properly. Simply doing a lightmap pass and adding or multiplying it with a realtime lighting pass is not going to give good results, I think... I assume a "directional map" is somewhat the same as what they use in HL2 (where each lightMap pixel has been measured from 3 directions, so that "fake" normalMapping can be used). But how to combine that with realtime lighting?

Greetings,
Rick

[Edited by - spek on September 18, 2007 4:30:45 PM]

I would imagine how they handle the lighting would depend on the shader model being used. I know some games like Far Cry would do multiple lights in one pass when in SM3.0, which allows for some pretty lengthy uber-shaders.

As for Bioshock, it doesn't sound like they're doing anything too fancy there. I may be missing something, but it sounds like they're just describing static lightmaps in a way that makes it sound like they're something else. Given that the game is SM3.0 only, it's likely they're doing what FarCry does and blending together both the lightmaps and the dynamic lights in one pass (I don't remember there ever being a large number of dynamic lights affecting any single surface).

I don't see why you couldn't do something similar, and probably ditch the old bump-mapping technique. You could just do it right in the shader before performing your lighting.
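To illustrate what "blending the lightmaps and the dynamic lights in one pass" can look like, here is a rough CPU-side sketch of the per-pixel math; the names and the linear attenuation model are illustrative, not taken from FarCry or Bioshock:

```python
def shade_pixel(albedo, lightmap, normal, position, dynamic_lights):
    """albedo, lightmap: RGB tuples; normal, position: 3-tuples in world space;
    dynamic_lights: list of (light_pos, light_color, radius)."""
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))

    dynamic = [0.0, 0.0, 0.0]
    for light_pos, light_color, radius in dynamic_lights:    # a fixed-size loop in an SM3.0 shader
        to_light = sub(light_pos, position)
        dist = max(dot(to_light, to_light) ** 0.5, 1e-6)
        atten = max(1.0 - dist / radius, 0.0)                 # simple linear falloff
        n_dot_l = max(dot(normal, tuple(c / dist for c in to_light)), 0.0)
        for c in range(3):
            dynamic[c] += light_color[c] * n_dot_l * atten
    # The two terms are added, not multiplied: the lightmap already holds the energy of the
    # baked lights, and the dynamic lights contribute extra energy on top of that.
    return tuple(albedo[c] * (lightmap[c] + dynamic[c]) for c in range(3))
```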



Well, one of the problems might be the (HDR) coloring. Imagine a room with a blue light. It will produce a blue-ish lightmap. In pass 1, I would apply that lightmap. After that, I perform dot3 lighting with attenuation in a second pass. This pass will also deliver a blue-ish result. So far so good, but can I just multiply these 2 results?

- How to deal with HDR? Now I'm using HDR with the help of the lightMap. A pixel from the lightmap might exceed the {1,1,1}(white) value by far. If the realtime lighting is also HDR, I could get enormous values.

- The result in the lightMap is not the same as in the realtime pass. Some areas might be way too dark with realtime lighting, while they are relatively light in the lightMap because of the indirect lighting via radiosity. In the worst case, black pixels from the realtime pass will "black out" the indirectly lit pixels as well. The same might also happen with the attenuation: lights in the lightMap might shine further or less far than in the realtime pass.

- How to deal with shadowcasting? If I have a pillar in that example room, the lightMap will have a shadow somewhere from that pillar. With realtime lighting, the blue light will shine right through that pillar, so blue pixels from the realtime pass are multiplied with dark pixels from the lightMap, while there shouldn't be blue there at all. Unless I use stencil shadows or something to black out that part behind the pillar, maybe?

- Dynamic lighting: It's easy to enable/disable lights with realtime lighting. But the lightmap doesn't change, of course. I could switch to another lightmap (for a part of the scene), but that still isn't very dynamic (although Bioshock isn't that dynamic with its lighting either).

The result doesn't have to be 100% accurate of course. But I worry about strange bugs, since a radiosity lightMap produces a different result than a realtime lit scene (without stencil shadows). Multiplying the two might give a nice average between them, but maybe it could also cause some weird lighting...


Funny that you mention good old Farcry, I was thinking about that as well. But as far as I know, their lightMaps aren't 100% normal either. I believe they also used some sort of technique to tell per pixel what direction the light comes from. But I'm not sure; maybe it really is just basic lightMapping like Quake2 did. Maybe someone has more information about that?

Maybe not everything I'm saying is really true though; like I said, I'm not really up-to-date with lighting techniques anymore :)

Greetings,
Rick

[Edited by - spek on September 18, 2007 6:59:24 PM]

Hmmm...are you saying that for any one surface, you might have a lightmap and a dynamic light for the same light source? I'd assume that the lightmap would contain the contribution of one or more static light sources, and then any dynamic light sources would be calculated and the result summed with the lightmap values. I'm not very familiar with light maps, perhaps I'm misunderstanding something?

I'm not familiar with the combination of the two either, so maybe what I'm saying doesn't make sense. But yes, I think that is what I want: the lightsources are used twice (in the lightMap and dynamically), and that's why I worry about mixing 2 different things.

I could tell via an editor which lights are static (=lightMap) and which are dynamic (=realtime). But how to do that exactly? Again, imagine a room with 1 single light, and a pillar in the center. If I mark that light as dynamic, I get my normalMapping / specular lighting working, but the lightMap would be completely black there (=no effect). In that case the pillar won't cast a shadow, and there is no indirect (radiosity) lighting either.

If I mark that light as static, which it basically is since 99% of my lights won't move or change, I get a nice lightMap for that room. But no normalMapping or specular lighting, since the normalMapping shader won't get any dynamic lights (position/color/strength).

My game will be pretty dark for the most part (weak lights); most surfaces are only lit directly by 1 or 2 lights. So I really need that lightMap (or another realistic way) to do the shadows and the indirect lighting. But I would also like to have the normalMapping working.

Greetings,
Rick

I think basically you split up your lighting into a diffuse component, a specular component and dynamic shadows. You get the diffuse component from your (directional) lightmap(s) and calculate the specular part of the Phong shading in the pixel shader. For dynamic shadows I guess you will get lots of harsh seams between the soft shadows encoded in the lightmaps and the hard shadowmapped shadows, so some clever trickery will be needed to integrate this and hide the artifacts.
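A rough sketch of that split, assuming the diffuse term has already been fetched from the directional lightmaps and only the specular (here Blinn-Phong) term is evaluated per dynamic light; all names are illustrative:

```python
def shade_split(albedo, lightmap_diffuse, spec_color, shininess,
                normal, position, view_pos, lights):
    """lights: list of (light_pos, light_color). All colors/vectors are 3-tuples."""
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def normalize(a):
        l = dot(a, a) ** 0.5 or 1.0
        return tuple(x / l for x in a)

    view_dir = normalize(sub(view_pos, position))
    specular = [0.0, 0.0, 0.0]
    for light_pos, light_color in lights:
        light_dir = normalize(sub(light_pos, position))
        half_vec = normalize(tuple(l + v for l, v in zip(light_dir, view_dir)))
        spec = max(dot(normal, half_vec), 0.0) ** shininess      # Blinn-Phong highlight
        for c in range(3):
            specular[c] += light_color[c] * spec_color[c] * spec
    # Diffuse comes entirely from the baked directional lightmaps,
    # specular entirely from the run-time lights.
    return tuple(albedo[c] * lightmap_diffuse[c] + specular[c] for c in range(3))
```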

There's also a paper on HL2's shading, I think on ATI's developer sites, that probably explains some details.

Hmmm... Doing the diffuse from the directional lightMap (which is what I already do now), combined with realtime specular lighting... Not a bad idea, I think. However, the normalMapping effect for non-shiny materials is still weak, since it's done by a static lightMap. If I move a light over a non-shiny concrete wall for example, the light will barely have any effect on that surface.

I've never seen dynamic shadows mixed with lightMap shadows though (I think), except for dynamic objects such as characters, lockers, barrels, and so on. Halflife2 for example isn't doing any dynamic lighting on the static environment (except for your flashlight); only the objects use realtime lighting from 2 nearby sources (according to that paper you mentioned).

Dynamic objects are not the problem for now. Basically I want a realtime lighting solution for a static environment, but with soft shadows/indirect lighting. I know realtime radiosity and that kind of stuff is not really possible yet for games, but I suppose there are some tricks and combinations between the two techniques, like Farcry did, and Bioshock probably as well.

Thanks for helping,
Rick

You called this thread ""Next-gen" lighting systems", so I have to wonder why the discussion is about Halflife2-style lighting... which looked out of date even when it was released.

I recommend abandoning lightmapping altogether and going with realtime shadow maps.

I don't like the term "next-gen", but I wondered what kind of techniques current games like Bioshock are using. Whatever technique it uses, it looks good in my opinion. Some say it's still using old techniques (a mix between realtime lighting and a basic lightMap --> Farcry), in the Unreal 3 engine description I saw "directional" lightmaps (HL2 uses those as well, that's why it's mentioned), and you mention shadow maps. I don't know what is being used... That's why I ask. If it's a combination between a lightMap and realtime lighting, then I'd like to know how to do this properly.

Greetings,
Rick

What a lot of games seem to be doing these days is to fake indirect lighting with pre-baked ambient occlusion maps. This allows for dynamic lighting and yet doesn't produce the typical dynamic lighting look.

Here's what a couple of recent games are doing with lighting (all to my knowledge):

S.T.A.L.K.E.R.: Deferred shading, pre-baked ambient occlusion, realtime soft-shadows (uses something like penumbra maps)

HL2 Ep. 2: Forward shading, lightmaps (HDR, normal mapped), phong shading on moving geometry, shadow mapping (used to be projected shadows)

Killzone: Deferred shading, pre-baked ambient occlusion (?), realtime soft-shadows, pre-computed sun shadows

Crysis: Forward shading, realtime ambient occlusion, realtime soft shadows


Traditional lightmaps seem to have gone out of style (dunno if UT 3 is still using them, but I seriously doubt it).
Obviously, lightmaps have the advantage that they scale nicely, but S.T.A.L.K.E.R. for example seems to run just fine on SM 2.0 hardware (it lacks the nice shadows but it still looks pretty good). However, using pre-baked ambient occlusion won't buy you a lot in pre-processing time, because the maps still take a while to create. The approach Crysis is taking with realtime AO looks very interesting and eliminates the need for pre-processing altogether (which is why their Sandbox editor provides real WYSIWYG). To me, this is one of the best reasons for moving away from lightmaps.

What exactly does "forward shading" mean?

And what is an "occlusion map"? Like I said, my eyes were closed last year :)

Crysis indeed looks very interesting. In the nVidia SDK there was a realtime ambient occlusion demo, I believe, but it ran pretty slowly on my GeForce 6600. That card is getting a little old by now, but is that technique already suitable (combined with all the other effects like normalMapping, HDR, and so on) for modern cards? I don't know when Crysis will be released, but it should probably run on today's cards. That would be great, skipping the whole pre-generated stuff.

Thanks for the info,
Rick

Quote:
Original post by Harry Hunt
What a lot of games seem to be doing these days is to fake indirect lighting with pre-baked ambient occlusion maps. This allows for dynamic lighting and yet doesn't produce the typical dynamic lighting look.

Here's what a couple of recent games are doing with lighting (all to my knowledge):

HL2 Ep. 2: Forward shading, lightmaps (HDR, normal mapped), phong shading on moving geometry, shadow mapping (used to be projected shadows)


Very true. The shadowing algorithm has changed a lot since the first incarnation of the HL2 engine.

Quote:
Original post by Harry Hunt
Killzone: Deferred shading, pre-baked ambient occlusion (?), realtime soft-shadows, pre-computed sun shadows


There's an article up on Killzone's entire lighting engine (and tons of references). Its indirect lighting is particularly impressive.

Quote:
Original post by Harry Hunt
Crysis: Forward shading, realtime ambient occlusion, realtime soft shadows


Crysis calls its ambient occlusion maps "Realtime Ambient Maps" or RAMs. However, they are in fact precomputed, but dynamically adapted in-game using portal information. The actual real-time ambient occlusion contribution stems from "Screen-Space Ambient Occlusion" or SSAO.

Quote:
Original post by Harry Hunt
Traditional lightmaps seem to have gone out of style (dunno if UT 3 is still using them, but I seriously doubt it).


Lightmaps have several disadvantages that (offline) ambient occlusion maps successfully address, namely:
1) LMs contain lighting information, whereas AOMs store an occlusion factor; this means that lights can change colour and/or intensity in real-time without the maps having to be updated to reflect this;
2) LMs don't work with per-pixel lighting techniques such as normal mapping, whereas AOMs can be made to do so (using a "bent normal" for example), albeit by approximation.

I'm more or less interested in Bioshock's behind-the-scenes lighting engine myself.

Quote:
Original post by spek
What exactly means "forward shading"?


Forward is the traditional approach (the antonym would be deferred shading), where you don't separate shading from rendering the geometry (this is probably a really poor explanation).

Quote:

And what is an "occlusion map"? Like I said, my eyes were closed last year :)


As mentioned in the previous post, ambient occlusion maps store the occlusion factor for each texel. If for example you look at a wall that has cracks in it, less light will reach the inside of the cracks, which is why they will be darker (more occluded). This is sort of like shadowing, but unlike shadowing, ambient occlusion doesn't have a directional component, so the information stored in an ambient occlusion map will work regardless of the lights' positions. When rendering a scene with ambient occlusion, what you will normally do is multiply the ambient color with the value stored in the ambient occlusion map (this will essentially reduce the influence of the ambient color for geometry that is occluded).
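In code, that combination might look roughly like this (the names are illustrative; the point is that only the ambient term is scaled by the occlusion value, while the dynamic diffuse and specular terms are untouched):

```python
def apply_ambient_occlusion(albedo, ambient_color, ao_factor, diffuse, specular):
    """ao_factor: value in [0, 1] sampled from the AO map (1 = fully unoccluded).
    diffuse/specular: already-summed contributions of the dynamic lights."""
    return tuple(
        albedo[c] * (ambient_color[c] * ao_factor + diffuse[c]) + specular[c]
        for c in range(3)
    )

# Example: a crack in a wall (ao_factor = 0.2) keeps its dynamic lighting but
# receives far less of the ambient color than a fully exposed texel (ao_factor = 1.0).
print(apply_ambient_occlusion((0.8, 0.7, 0.6), (0.2, 0.2, 0.25), 0.2,
                              (0.5, 0.4, 0.3), (0.1, 0.1, 0.1)))
```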

Quote:
Original post by spek
What exactly means "forward shading"?

And what is an "occlusion map"? Like I said, my eyes were closed last year :)


Forward shading, as opposed to deferred shading, theoretically lights every surface using every light when it's rasterized to the backbuffer. The important difference between forward shading and deferred shading is "overdraw". Deferred shading is a screen-space algorithm, shading only those fragments that made it to the backbuffer in the end. To that end, deferred shading makes use of a G-buffer containing all the necessary geometric information that forward shading has access to while rasterizing the geometry. Forward shading is often a multi-pass algorithm (although it doesn't have to be, nor does it require every surface to be lit by every light). In any case, either has its distinct advantages; look around for other posts that dwell on this.
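A heavily simplified, self-contained sketch of that structural difference (fragments are just (pixel, depth, normal) tuples and "shading" is plain Lambert; nothing here is a real renderer API):

```python
def lambert(normal, light_dir, light_color):
    n_dot_l = max(sum(n * l for n, l in zip(normal, light_dir)), 0.0)
    return tuple(c * n_dot_l for c in light_color)

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def forward_render(fragments, lights):
    # Shade every fragment as it is "rasterized": overlapping fragments are all
    # shaded, even those that a closer surface later hides (overdraw cost).
    frame, depth = {}, {}
    for pixel, z, normal in fragments:
        color = (0.0, 0.0, 0.0)
        for light_dir, light_color in lights:
            color = add(color, lambert(normal, light_dir, light_color))
        if z < depth.get(pixel, float("inf")):       # depth test after paying the shading cost
            frame[pixel], depth[pixel] = color, z
    return frame

def deferred_render(fragments, lights):
    # Geometry pass: only keep the nearest fragment's attributes (the "G-buffer").
    gbuffer, depth = {}, {}
    for pixel, z, normal in fragments:
        if z < depth.get(pixel, float("inf")):
            gbuffer[pixel], depth[pixel] = normal, z
    # Lighting pass: shade each visible pixel exactly once.
    frame = {}
    for pixel, normal in gbuffer.items():
        color = (0.0, 0.0, 0.0)
        for light_dir, light_color in lights:
            color = add(color, lambert(normal, light_dir, light_color))
        frame[pixel] = color
    return frame
```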

An occlusion map stores an occlusion factor per lumel. Contrary to a lightmap, this factor signifies the ratio of light that can reach the surface point over a hemisphere. If you consider this ratio to be normalized, 0 would mean no light ever reaches the surface, and 1 would tell us there is no occluder to stop light from reaching the surface (i.e. it's fully lit).

Basically, it's a flexible scaling factor for lighting equations, and yields low-frequency light details (along the lines of very cheap GI).

Quote:
Original post by spek
Crysis indeed looks very interesting. In the nVidia SDK, there was a realtime ambient occlusion demo I believe, but it runned pretty slow on my GeForce 6600. That card is getting a little bit old by now, but is that technique already suitable (combined with all other effects like normalMapping, HDR, and so on) for modern cards? I don't know when Crysis will be released, but probably it should run on nowadays cards. That would be great, skipping the whole pre-generated stuff.


That real-time dynamic ambient occlusion is a demo, and not ready for prime-time just yet. There are other techniques (for example, Crysis' SSAO) that yield considerably better performance characteristics at admittedly lower quality.

Thanks, and thanks!

That's what I mean: I turn my back for a little while, and the game-programming world is full of new terms I've never heard of. But "forward shading" is just the "normal way" to render things (draw a couple of polygons, apply their parameters/textures, shade them)? Time to search for some demos and try it myself. Does anyone know some nice demos of this stuff?

I suppose an occlusion map is generated just like a lightMap (although the information you store isn't colors, and the calculation is different). Are there any generators for that, or do I need to write one myself? In the latter case, I need to calculate the occlusion factors for the surface lumels... Would that generate a grayscale image where white pixels are the fully lit ones? And when exactly is a lumel fully lit? And is the "bent normal" eventually stored in each pixel as well?

If there are 10 lightSources placed in a large world, none of the lumels will ever receive all the lights. But I suppose it doesn't really matter, as long as there is a (bright) light nearby that shines directly on that lumel. But if I calculate it that way, I'm doing just the same as with a lightMap, except for the colors. I think I don't 100% understand it yet though...

And when applying such a map on my world, how to combine it with the lights? Is it realtime lighting, but then I also use that map to check how much of its light can reach that pixel? But what if there are multiple lights... or do I need an occlusion map for each surface-light combination (which could explain the question above)?

[edit]
I did some reading, and it's becoming a little clearer now. But the examples shoot rays "outside the world" (towards the sky), and then check what percentage did not collide. In my case the locations will mostly be indoors, so rays will almost always collide. Does that mean I need to shoot those rays towards the lightSources, instead of the "sky"? That could be somewhat difficult though, since point lights don't have a volume... But I could make spheres/rectangles out of them of course. Anyway, if that's the case, I could end up receiving light from multiple sources, which means multiple light colors/directions/"bent normals". How to deal with that? ...Or is this not really the purpose of AO, and do I still need to add another technique for the diffuse lighting part?

Again, thanks for the help!
Rick

[Edited by - spek on September 20, 2007 2:16:49 AM]

Ambient occlusion is completely independent of your actual light sources. This is also its primary advantage: You will only need one AO texture regardless of how many lights there are in your scene.

You shouldn't think of AO as a lightmap, but more as a texture that holds additional information you use in your dynamic lighting equation. Usually, when doing Phong shading for example, there are three components that make up the light information of a single luxel: diffuse, specular and ambient. When using AO, diffuse and specular pretty much stay the same; the only thing that really changes is the ambient part. Instead of just using a constant ambient value, or one you sample from a spatial structure, you'd use whatever ambient color you have and multiply it with the value read from the AO texture.

There are several tools for generating AO maps. You could use the PRT functions in D3DX although that would probably be quite tricky. Alternatively, a lot of 3d modelling packages have a render-to-texture function and most of them also support baking AO.

Check out this pic:



This pretty much explains what AO is: In that pic, you can't really say where the light source is, because light effectively comes from all directions. Still, the image has a very good depth to it and even small details stand out nicely - this is what AO will help with.

(Sorry about the saggy...you know what...)

I am very interested in this topic. I got some useful info in the topic I started about it a while ago (linky), which got me as far as implementing a spherical-harmonic-based global illumination renderer. This technique only works for directional lights (not point or spot), so it is really limited to outdoor use, but I would have to recommend it as the shader overhead is tiny (just a few dot products in the vertex shader), and the vertex data overhead is manageable (9 coefficients per vertex for 3rd-order SH, which can probably be compressed).
[two screenshots hosted on ImageShack]
(please excuse the poor texturing, but you should just be able to make out the indirect lighting effects and the dynamic super soft shadows!)
Currently I am implementing the paper found here, which seems promising, and it expands the global illumination to work with point lights (and even spots), at the cost of a lot of dot products in the vertex shader. Still, they quote 30 fps on a 100,000-poly scene on a GF6600, so my 8800 should eat it for breakfast :D.
Still trying to find anybody else who has implemented this, or is interested in it, but it seems like the only true GI algorithm out there that handles dynamic lights.
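For anyone wondering where the "few dot products" figure comes from: with 3rd-order SH you store 9 coefficients (per color channel) per vertex, and reconstructing the lighting for a direction is just a dot product against the 9 SH basis functions evaluated for that direction. A minimal sketch, using the standard real-SH basis constants; the offline bake that produces the coefficients is not shown, and the example coefficients are made up:

```python
import math

def sh_basis(direction):
    """Evaluate the 9 real spherical-harmonic basis functions (bands 0..2)
    for a unit direction (x, y, z)."""
    x, y, z = direction
    return [
        0.282095,                       # l=0
        0.488603 * y,                   # l=1
        0.488603 * z,
        0.488603 * x,
        1.092548 * x * y,               # l=2
        1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z,
        0.546274 * (x * x - y * y),
    ]

def sh_evaluate(coefficients, direction):
    """Reconstruct the (scalar) lighting stored in 9 SH coefficients for one direction."""
    return sum(c * b for c, b in zip(coefficients, sh_basis(direction)))

if __name__ == "__main__":
    # Hypothetical per-vertex coefficients produced by an offline GI bake.
    coeffs = [0.8, 0.0, 0.3, 0.1, 0.0, 0.05, 0.1, 0.0, 0.02]
    print(sh_evaluate(coeffs, (0.0, 0.0, 1.0)))
```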

Reminds me of my good old grandma...

Yeah, I kinda figured out that AO is an additional thing to improve the lighting with a more realistic ambient term, not a replacement for other (diffuse) lighting techniques, right? Nevertheless, it's an interesting technique and it comes at a relatively low cost. But... how to deal with indoor scenes? In my case, the rays will always collide sooner or later if there are no windows. I could use a maximum distance for the rays (when a ray travels more than x metres, it counts as unoccluded), but maybe there are better solutions?
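As a rough sketch of that maximum-distance idea: the hit_distance() callback below is a stand-in for whatever ray-cast routine a lightMap generator already has (not a real API), and rays that travel further than max_distance before hitting anything are counted as unoccluded, so a closed room doesn't bake to solid black:

```python
import math, random

def bake_ao_for_lumel(position, normal, hit_distance, max_distance, num_rays=64):
    """Return an occlusion factor in [0, 1] for one lumel.
    hit_distance(origin, direction) -> distance to the nearest surface (or math.inf)."""
    unoccluded = 0
    for _ in range(num_rays):
        d = random_hemisphere_direction(normal)
        if hit_distance(position, d) >= max_distance:   # "far enough" counts as open
            unoccluded += 1
    return unoccluded / num_rays

def random_hemisphere_direction(normal):
    # Pick a random unit direction and flip it into the hemisphere around the normal.
    while True:
        v = (random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
        length = math.sqrt(sum(c * c for c in v))
        if 0.0 < length <= 1.0:
            v = tuple(c / length for c in v)
            break
    if sum(a * b for a, b in zip(v, normal)) < 0.0:
        v = tuple(-c for c in v)
    return v
```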


That still leaves me with the original question. I also took a look at deferred shading. Sounds nice as well (except for some issues like memory, AA and transparency), but I still need shadows. I could combine it with stencil shadows in order to have a fully dynamic solution. But I don't want to lose the soft smooth shadows I get with a radiosity lightMap.

In other words, I'm searching for a nice combination that allows realistic shadows, but still has some dynamic behaviour (flickering lights, switching lights on/off, maybe even small moving lights), and especially a solution that fits better with normal/specular mapping. Like they managed in Bioshock... Or maybe old Farcry was a good example as well, where dynamic lighting was combined with a lightMap in a proper way (how?).

@bluntman
Nice shots! But I do need an indoor solution. That paper showed some cool stuff, but my poor math knowledge always makes it hard to read those kinds of papers. Maybe there are examples / demos that you know of?

thanks for the info,
Rick

[Edited by - spek on September 20, 2007 9:01:19 AM]

Hmmm... Been playing Bioshock and watched carefully...

I think the basic lighting is indeed just a lightMap, maybe a directional one like HL2 used (the Unreal 3 engine supports that, so why not). There is also quite a lot of dynamic lighting though (electricity from your hands, neon signs, flickering lights, etc.), but I noticed those lights do not cast shadows (except on dynamic objects such as barrels, enemies and desks). You won't notice it, since the lightMap already does a lot, and most of the lightsources are pretty weak or only have a small range. But you'll never see sharp (stencil) shadows from the static environment. Another little "bug" is that the shadows from objects always come from the same direction. It seems they pick the strongest lightSource, and only use that one to cast shadows.

The specular lighting is dynamic. In fact, I think lights contribute to the lightMap, but are also used for realtime (diffuse and specular) lighting on the environment. This way you can still fool around with the lighting, and the normalMapping is sharper. Some lights are not used in the lightMap I think, so it's easy to switch them on/off (although they lack proper shadows in that case).

I'm not sure, but I think this is the way Bioshock is rendered. Does it sound logical? If so, then I'm still interested in the way they blend everything together. A lightMap pass isn't heavy (although when using normalMapping the way HL2 did...), but the dynamic lights must also be added. I suppose everything could be done in 1 pass with SM3.0, like someone mentioned earlier (but how? Can SM3.0 handle a fixed-size array of lighting parameters?). I also wonder if the coloring would still be ok. What if the lightMap is colored red by a red lightSource, and later the same red light is summed or multiplied in again by the dynamic part? Maybe it helps to saturate the lightMap or something... I don't know, I'm throwing out everything that comes to mind, excuse me for that :)

Greetings,
Rick

Since everyone is helping me here, I'm glad I can offer a little help here too.

Read this, it helped me a lot. I always have problems with technical papers, but even I managed to do it:
http://freespace.virgin.net/hugo.elias/radiosity/radiosity.htm

First you need to know how to build a basic lightMap. That means you create a texture to put your lightMap in (see my other post); you unwrap all of your (static) geometry and give it (second) texture coordinates for that lightMap texture.

Then you need to divide your surfaces into "lumels". A lumel is just a tiny spot on the surface that will be used to measure how much light is coming in there. For example, use the lightMap you created before to define the lumels on the surface (each lightMap pixel could be one lumel on the surface).

When you have your lumels, you can loop through them and check how much light they receive. You could do this with raytracing, or by rendering the surrounding scene and measuring the average incoming light (this is how the paper does it).
Now you have yourself a traditional lightMap.

When doing radiosity, the process repeats itself x times. After the lighting in pass 1, every lumel becomes a potential lightSource (think of a piece of metal that reflects light). Just repeat the process, and use the results from the previous pass to render the environment again, where the lumel color =
incoming_light * material_reflectance + material_emissive
Now your lumels not only catch light from the lightSources, but also from surrounding lumels (which is indirect lighting). Repeat this until you are satisfied with the result, and then store everything in lightmap image(s).
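A toy sketch of that gather-and-repeat loop; visibility tests and proper form factors are left out (the distance weight below is just a crude stand-in), so this only shows the structure of the passes, not a correct radiosity solver:

```python
import math

def gather_pass(lumels):
    """lumels: list of dicts with 'pos', 'reflectance', 'emissive' and 'color'
    (the result of the previous pass). Returns the colors for the next pass."""
    new_colors = []
    for receiver in lumels:
        incoming = [0.0, 0.0, 0.0]
        for sender in lumels:
            if sender is receiver:
                continue
            d = math.dist(receiver["pos"], sender["pos"])
            weight = 1.0 / (1.0 + d * d)                 # crude stand-in for a form factor
            for c in range(3):
                incoming[c] += sender["color"][c] * weight
        new_colors.append(tuple(
            incoming[c] * receiver["reflectance"][c] + receiver["emissive"][c]
            for c in range(3)
        ))
    return new_colors

def radiosity(lumels, passes=4):
    # Pass 0: only emissive lumels (the actual light sources) contribute anything.
    for lm in lumels:
        lm["color"] = lm["emissive"]
    for _ in range(passes):
        for lm, color in zip(lumels, gather_pass(lumels)):
            lm["color"] = color
    return lumels
```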

Success,
Rick

Thanks a lot spek. But I am using mental ray for Maya to compute the lightmaps. I think what you describe here and in that paper is about the lightmapping algorithm in general. I don't see anything about computing 3 lightmaps in tangent space to use them for Radiosity Normal Mapping.
I started this topic about it:

http://www.gamedev.net/community/forums/topic.asp?topic_id=464745

I also found several other papers, but I'm still confused.

This is one of my HDR lightmap + parallax + specular map examples. But I use only a single lightmap, as you can see.

http://www.gamedev.net/community/forums/topic.asp?topic_id=433194

Another project: http://www.ali-rahimi.net/projects/x_fridge/x_fridge.htm

By the way, I played the PC DX9 version of Bioshock too. I don't think its lighting is too fancy. From an artistic point of view there are a lot of hacks in the lighting. It's just overloaded with post effects and glow. Sometimes you think you are at a sparkler carnival.



Sorry, I forgot about the normalMapping part. But that is just a matter of expanding your lightMap generator.

I made a generator myself (a slow one though) that first renders a basic lightMap as described earlier. In the last pass, I make 3 lightMaps (in my case 3 light directions are used for the normalMapping effect). Instead of letting each lumel look straight ahead to measure the incoming light, I point it in a specific direction (see the basis vectors in the Halflife2 shading slides). So, in lightMap 1 the lumels measure light that comes from the left-bottom, in LM2 from the right-bottom, and in LM3 from above. In the end, I get 3 different lightMaps.

When rendering the geometry in the game, a shader combines the colors from the 3 lightMaps based on the normalMap. For example, if the normal points upwards, it could use 70% from LM3, 15% from LM1, and 15% from LM2. I don't know about any generators for Maya though, since I made mine myself.
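A rough sketch of that bake-time step, assuming the same three tangent-space basis directions as the run-time blend and a simple cosine weighting of each gathered light sample; a real generator would collect the samples via ray tracing or by rendering the scene from the lumel, as described above:

```python
import math

BASIS = [
    (-1 / math.sqrt(6),  1 / math.sqrt(2), 1 / math.sqrt(3)),   # "left-bottom"
    (-1 / math.sqrt(6), -1 / math.sqrt(2), 1 / math.sqrt(3)),   # "right-bottom"
    ( math.sqrt(2.0 / 3.0), 0.0,           1 / math.sqrt(3)),   # "above"
]

def bake_lumel(incoming_samples):
    """incoming_samples: list of (direction, color) pairs in tangent space,
    gathered for this lumel. Returns one color per directional lightmap."""
    maps = [[0.0, 0.0, 0.0] for _ in BASIS]
    for direction, color in incoming_samples:
        for i, basis in enumerate(BASIS):
            w = max(sum(d * b for d, b in zip(direction, basis)), 0.0)
            for c in range(3):
                maps[i][c] += w * color[c]
    return [tuple(m) for m in maps]
```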

Bioshock indeed seems to mix a lot of techniques that aren't that special on their own. But the end result is impressive. I wonder how everything is blended together (static (lightMap) lighting with dynamic lighting).

Greetings,
Rick
