How was lighting handled in early 3D games?

Started by Marscaleb
6 comments, last by Marscaleb 6 years, 1 month ago

When I was younger, I looked at some of the big titles like Quake and Unreal and thought that they represented basically how all video games worked.  In recent years I've come to understand that I was wrong in that assumption.

One particular thing that I was thinking about today was lighting.  In Quake and Unreal a developer would place lights in the scene and calculate the lighting, and then bake that data into the scene as lightmaps.  I thought that was how all 3D engines worked, just with calculations that didn't look as good.  But now I'm looking at several old games and noticing...  I don't think that's how they work.

A few particular titles I'm looking at are Nintendo 64 games: GoldenEye, Turok 2, Perfect Dark.  The lighting in these games doesn't tend to have the same fading and fall-off I see in Unreal and Quake 2.  In fact, in GoldenEye the lighting is usually at a constant level, except for dark spots that might as well have been hand-painted.  And it occurs to me that if that's the lighting in your game, then computing lighting and saving that data as a lightmap is an abhorrent waste of space.  But if it wasn't lightmap data, then how was the lighting handled?

How was lighting handled in these early 3D games?  What was used to figure out where there were shadows and where there were not?

Read my webcomic: http://maytiacomic.com/
Follow my progress at: https://eightballgaming.com/


I just looked at a video of GoldenEye, and from what I can tell...

Lighting is precalculated, similar to Quake. But there are many objects like doors or boxes that are not baked, so they look wrong and the game does not appear as consistent as Quake. Maybe they also use gradient textures and place them manually so they look like lightmaps. This way you still have some shading and save texture memory. And maybe a lot of the lighting is per vertex and interpolated over the triangle, like the first Tomb Raider; Gouraud shading was the term.

id Software tended to use just one technique for everything (Quake: every surface baked; Doom 3: only dynamic lights, with shadows for everything), while others mixed everything that was available (as Epic did: they allowed shadow maps just for characters and additionally mixed static and dynamic lighting, which often caused glitchy results).
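
To make the Gouraud point concrete: each vertex gets one diffuse term, and the rasteriser linearly interpolates the result across the triangle, so no per-pixel lighting math is needed. A minimal sketch in C (the names are mine, not from any particular engine):

```c
#include <math.h>

typedef struct { float x, y, z; } Vec3;

static float dot3(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* Light a single vertex: fixed ambient plus a clamped Lambert (N.L) term.
 * Both vectors are assumed to be unit length. */
float vertex_intensity(Vec3 normal, Vec3 dir_to_light, float ambient)
{
    float ndotl = dot3(normal, dir_to_light);
    float i = ambient + (ndotl > 0.0f ? ndotl : 0.0f);
    return i > 1.0f ? 1.0f : i;
}

/* Gouraud shading: the per-vertex intensities i0..i2 are simply blended
 * with the pixel's barycentric weights (w0 + w1 + w2 == 1). */
float gouraud_pixel(float i0, float i1, float i2, float w0, float w1, float w2)
{
    return i0 * w0 + i1 * w1 + i2 * w2;
}
```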

 

Lightmaps take up a lot of space. Early Nintendo systems in particular, with their cartridges, didn't have much space. That might've restricted some devs from using them. 

You can still bake lighting, but then store the results per vertex instead of in a texture map. This generally results in a lot less data. You also have a choice of bit-depth and RGB vs greyscale, e.g. 24-bit color or even 4-bit greyscale. 
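
As a concrete (hypothetical) example of how little that baked data can take: each vertex's baked light value can be quantised to a few bits and unpacked when it's needed. A sketch of the idea, not taken from any specific title:

```c
#include <stdint.h>

/* Quantise a baked per-vertex light value in [0,1] to 4-bit greyscale.
 * Two vertices fit in one byte, so a mesh's baked lighting costs half a
 * byte per vertex rather than a whole lightmap texture. */
uint8_t pack_light_4bit(float intensity)
{
    if (intensity < 0.0f) intensity = 0.0f;
    if (intensity > 1.0f) intensity = 1.0f;
    return (uint8_t)(intensity * 15.0f + 0.5f);   /* 0..15 */
}

/* Expand back to a float for use as a vertex colour at load/draw time. */
float unpack_light_4bit(uint8_t q)
{
    return (float)(q & 0x0Fu) / 15.0f;
}
```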

Modern APIs have a lot of restrictions on data formats, such as requiring 4-byte alignment of vertex attributes, but many older games did things in software, where you can do whatever you like. IIRC, Quake 1 stored its precomputed occlusion in a compressed format and decompressed it on the fly for visibility queries, which is the kind of thing you can do in software but not on fixed-function hardware. 
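
For reference, that Quake 1 scheme is roughly this shape: visibility is a bitset with one bit per BSP leaf, nonzero bytes are stored as-is, and runs of zero bytes are collapsed to a zero followed by a count. A simplified sketch of the decompression, not the actual id source:

```c
/* Decompress a run-length-encoded visibility bitset (one bit per BSP
 * leaf). Nonzero bytes are copied through; a zero byte is followed by a
 * count of zero bytes to emit. */
void decompress_vis(const unsigned char *in, unsigned char *out, int out_len)
{
    int written = 0;
    while (written < out_len) {
        if (*in) {
            out[written++] = *in++;
        } else {
            int run = in[1];            /* length of the zero run */
            in += 2;
            if (run == 0)               /* guard against malformed data */
                break;
            while (run-- > 0 && written < out_len)
                out[written++] = 0;
        }
    }
}
```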

Other games would calculate lighting dynamically -- always per vertex or even per object back then; per-pixel lighting is normal now, but it was a big deal when games first started doing it. You might have just one light applied to each object, with no specular, and a fixed ambient term. That costs around one dot product per vertex plus some simple additions/multiplies, which early fixed-function hardware was good at. There are lots of tricks to merge many lights together, so you can have many in a scene but still only compute one per vertex. 
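
One such merging trick, sketched under my own assumptions rather than taken from any specific engine: per object, collapse the nearby point lights into a single effective directional light, weighted by their attenuated brightness at the object's centre, then light every vertex with that one direction exactly as in the earlier sketch.

```c
#include <math.h>

typedef struct { float x, y, z; } Vec3;
typedef struct { Vec3 pos; float r, g, b; } PointLight;

static Vec3  sub(Vec3 a, Vec3 b)   { Vec3 r = {a.x - b.x, a.y - b.y, a.z - b.z}; return r; }
static Vec3  add(Vec3 a, Vec3 b)   { Vec3 r = {a.x + b.x, a.y + b.y, a.z + b.z}; return r; }
static Vec3  scale(Vec3 a, float s){ Vec3 r = {a.x * s, a.y * s, a.z * s}; return r; }
static float len(Vec3 a)           { return sqrtf(a.x * a.x + a.y * a.y + a.z * a.z); }

/* Collapse all point lights into one effective directional light for this
 * object, so each vertex still costs only one dot product. */
void merge_lights(const PointLight *lights, int n, Vec3 obj_centre,
                  Vec3 *out_dir, float *out_intensity)
{
    Vec3  dir_sum = {0.0f, 0.0f, 0.0f};
    float total   = 0.0f;

    for (int i = 0; i < n; ++i) {
        Vec3  to_light = sub(lights[i].pos, obj_centre);
        float dist     = len(to_light);
        float atten    = 1.0f / (1.0f + dist * dist);   /* simple falloff */
        float bright   = (lights[i].r + lights[i].g + lights[i].b) / 3.0f * atten;

        /* Accumulate a brightness-weighted direction toward the lights. */
        dir_sum = add(dir_sum, scale(to_light, bright / (dist > 0.0f ? dist : 1.0f)));
        total  += bright;
    }

    float l = len(dir_sum);
    *out_dir       = (l > 0.0f) ? scale(dir_sum, 1.0f / l) : dir_sum;
    *out_intensity = total;
}
```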

On the Wii, we did a trick where we allocated a tiny square texture per dynamic (moving) object, e.g. 32x32, and would draw a sphere into it, which was lit by many lights (diffuse only), and then drew some sprites into it to fake specular highlights. Then, when drawing the actual object, we did no lighting calculations besides fetching from that texture using the view-space normal as a texture coordinate. This let us have many lights per object, fake per-pixel lighting and even normal maps, at very low cost. We also had "lightmaps" baked into the vertices of the static (non-moving) objects to capture ambience/shadows. Dynamic objects would also ray-trace downwards from their centre, and multiply their lighting by the baked colour of the triangle that the ray hit. 
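
For anyone unfamiliar with that lookup, it is essentially what is now called a matcap: the view-space normal's x and y are remapped into the [0,1] range and used as the texture coordinate into the little pre-lit sphere texture. A sketch of just that mapping, with my own naming rather than the actual engine code:

```c
/* Map a unit-length view-space normal to a UV into the pre-lit sphere
 * texture: the normal's x/y in [-1,1] become texture coordinates in [0,1]. */
void lit_sphere_uv(float nx, float ny, float *u, float *v)
{
    *u = nx * 0.5f + 0.5f;
    *v = ny * 0.5f + 0.5f;
}
```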

16 hours ago, JoeJ said:

I just looked at a video of GoldenEye, and from what I can tell...

[...] Maybe they also use gradient textures and place them manually so they look like lightmaps. This way you still have some shading and save texture memory.

I know, right?  I recall a lot of areas (particularly in the multiplayer maps) where the lighting just didn't make any sense, and I wonder if they actually painted the lighting values in by hand.  Honestly, it's not completely unreasonable, since most areas have fairly consistent lighting.  With such little lighting detail, painting lighting values by hand sounds like it would be easier than programming an extensive lighting computation system.

Was this something that people actually did back in the day?

Unreal may have been a little more glitchy but it still looked better.  Come to think of it, that's really always been the norm.  Compare the Doom engine to the Build engine; Id's product was more stable and firm but Id's competition looked better because it could do a little more.

 

16 hours ago, Hodgman said:

On the Wii, we did a trick where we allocated a tiny square texture per dynamic (moving) object, e.g. 32x32, and would draw a sphere into it, which was lit by many lights (diffuse only), and then drew some sprites into it to fake specular highlights. Then, when drawing the actual object, we did no lighting calculations besides fetching from that texture using the view-space normal as a texture coordinate. This let us have many lights per object, fake per-pixel lighting and even normal maps, at very low cost. We also had "lightmaps" baked into the vertices of the static (non-moving) objects to capture ambience/shadows. Dynamic objects would also ray-trace downwards from their centre, and multiply their lighting by the baked colour of the triangle that the ray hit. 

Hearing stories like that is why I love asking questions on this site.  I love hearing about all these really-used in-the-trenches tricks.

Also, I didn't even think about using a lower bit-depth for the lighting data.  That makes a lot of sense too!  Plus, even Quake 2 did that, sort of.  I remember reading about how the software renderer took the brightest value from the RGB channels and used that.  Plus, the lighting data was stored (or maybe just processed?  Can't recall exactly) at a slightly lower bit-depth than what was displayed.
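
If I'm remembering that detail right, collapsing a colour sample down to its brightest channel is a one-liner; a rough sketch of the idea, not the actual Quake 2 source:

```c
/* Greyscale a light sample by taking its brightest channel. */
unsigned char brightest_channel(unsigned char r, unsigned char g, unsigned char b)
{
    unsigned char m = (r > g) ? r : g;
    return (m > b) ? m : b;
}
```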

Read my webcomic: http://maytiacomic.com/
Follow my progress at: https://eightballgaming.com/

7 hours ago, Marscaleb said:

Compare the Doom engine to the Build engine; Id's product was more stable and firm but Id's competition looked better because it could do a little more.

Don't forget that Duke Nukem 3D was released 3 years after DOOM, and half a year before id released Quake. Of course Build engine games looked better at that time.

8 hours ago, Marscaleb said:

With such little lighting detail, painting lighting values by hand sounds like it would be easier than programming an extensive lighting computation system.

Was this something that people actually did back in the day?

It's quite an interesting question. I assume many devs simply were not interested in correct baked lighting. At that time the idea was not only new, but maybe it was also not what many of them wanted to do.

I remember very well how I personally perceived this:

Quake was a nice game, but the levels were monochrome and right-angled, and looked pretty boring.

MDK was the perfect game: colorful, crazy, varied, creative. High frame rate at high resolution.

Only many years later did I recognize what global illumination is and how Quake pioneered it in games. MDK, on the other hand, has no lighting at all. I just did not notice this as a player, and as a programmer I worried only about how to avoid drawing hidden triangles; I did not care about lighting back then.

 

Nowadays it's different. Games are so detailed that we need realistic lighting to get out of the uncanny valley. What is very sad about this is that, with constant technical progress towards realism, we can no longer use and spur the player's imagination. Thinking back to the early days, playing an Atari game meant looking at the great box art, then looking at the square on the screen, and in your mind it became an awesome spaceship. Maybe it's just because I'm older now, but I think we have lost this great aspect of video games over the years.

16 hours ago, rnlf_in_space said:

Don't forget that Duke Nukem 3D was released 3 years after DOOM, and half a year before id released Quake. Of course Build engine games looked better at that time.

True, but we're talking about the engine.  Duke 3D came out three months after Hexen and another four months before Strife (and barely more than a year after Doom 2), so it was still competing with the Doom engine, and the Doom engine had plenty of time to incorporate new features (which it did) to keep it relevant.
But "looking better," I will admit, is the wrong way to describe it.  Build had rendering abilities that Doom was utterly incapable of, abilities that even modern source ports can't replicate.  It's not that Build was more technically advanced, but that the comparison demonstrates Id tends to focus their engines on being solid over dynamic.

Read my webcomic: http://maytiacomic.com/
Follow my progress at: https://eightballgaming.com/

