
I have just noticed that, in Quake 3 and Half-Life, dynamic models are affected by the lightmap. For example, in dark areas the gun the player holds appears darker. How did they achieve this effect? I could use image-based lighting techniques (like placing an environment probe and using it for reflections and ambient lighting), but that tech wasn't used in games back then, so there must be a simpler method.

Here is a link that shows how modern engines do it: Indirect Lighting Cache. It would be nice if you know of a paper that explains this technique. Can I apply it to Quake 3's lightmap generator and BSP format?


Quake 3 used a light grid: a 3D grid of precomputed light samples that it creates at map compile time. At runtime it finds the closest N samples for each dynamic model it is rendering, then just uses simple vertex lighting from that set for the model. This is just for dynamic models, though; the scene itself is all precomputed lightmaps (though they are toggled/adjusted to achieve more dynamic effects, like a rocket's light trail). The Quake 3 code is readily available online, so you can look it over yourself to get an idea of how they did it.
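The idea can be sketched as a lookup into a 3D array of baked samples. Here's a minimal sketch in C; all names are hypothetical rather than the actual id Tech 3 code, and the real engine blends neighbouring cells rather than taking only the nearest one:

```c
#include <assert.h>

/* Hypothetical cell layout: baked ambient and directed light for
   dynamic models, plus the dominant light direction. */
typedef struct {
    float ambient[3];   /* ambient light color */
    float directed[3];  /* directional light color */
    float dir[3];       /* dominant light direction */
} GridCell;

typedef struct {
    GridCell *cells;
    float origin[3];    /* world-space position of cell (0,0,0) */
    float cellSize[3];  /* e.g. 64, 64, 128 in Quake 3 */
    int   dims[3];      /* number of cells per axis */
} LightGrid;

static int clampi(int v, int lo, int hi) {
    return v < lo ? lo : (v > hi ? hi : v);
}

/* Look up the cell nearest to a model's position (a real engine
   would trilinearly blend the 8 surrounding cells instead). */
const GridCell *grid_sample(const LightGrid *g, const float pos[3]) {
    int idx[3];
    for (int i = 0; i < 3; ++i) {
        int c = (int)((pos[i] - g->origin[i]) / g->cellSize[i]);
        idx[i] = clampi(c, 0, g->dims[i] - 1);  /* stay inside the grid */
    }
    return &g->cells[(idx[2] * g->dims[1] + idx[1]) * g->dims[0] + idx[0]];
}
```

The clamp means models outside the level bounds still get the nearest edge cell rather than reading out of bounds.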


I also think the bots/opponents are shadowed in the darker areas of the map, although it's harder to tell because they move around very quickly, and the dynamic lights light them up when they fire their guns at you.


Lots of games from that era would trace a single ray downwards to find the floor under the player's feet, then find the lightmap-UV coordinates at that point, then sample the lightmap at those coordinates and use it as a constant ambient value for the player.
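A minimal sketch of that trick in C, with the engine's ray cast and lightmap fetch stubbed out as hypothetical helpers (a flat floor, a constant texel) so the snippet stays self-contained:

```c
#include <assert.h>

typedef struct { float r, g, b; } Color;

/* Hypothetical: returns the lightmap texel at the hit surface's UVs.
   Stubbed to a constant dim grey for this sketch. */
static Color lightmap_sample(float u, float v) {
    (void)u; (void)v;
    Color c = { 0.25f, 0.25f, 0.25f };
    return c;
}

/* Hypothetical: casts a ray straight down from 'pos' and writes the
   lightmap UVs of the hit surface; returns 0 if nothing was hit.
   Stubbed with a fake planar UV mapping of an infinite floor. */
static int trace_ray_down(const float pos[3], float *u, float *v) {
    *u = pos[0] * 0.01f;
    *v = pos[1] * 0.01f;
    return 1;
}

/* One constant ambient color for the whole model, taken from the
   lightmap of the floor below it. */
Color ambient_from_floor(const float pos[3], Color fallback) {
    float u, v;
    if (!trace_ray_down(pos, &u, &v))
        return fallback;   /* e.g. the player is over a pit */
    return lightmap_sample(u, v);
}
```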

On 15.07.2017 at 4:39 PM, Hodgman said:

Lots of games from that era would trace a single ray downwards to find the floor under the player's feet, then find the lightmap-UV coordinates at that point, then sample the lightmap at those coordinates and use it as a constant ambient value for the player.

Increasing sample count and applying attenuation based on distance may produce a good result. I'll definitely give this a shot.

One of my questions remains unanswered: how do modern engines do this? Is there a paper, tutorial, or book chapter that explains UE4's lighting cache or Unity's light probes? I like to learn about a technique before implementing it.

10 hours ago, afraidofdark said:

One of my questions remains unanswered: how do modern engines do this? Is there a paper, tutorial, or book chapter that explains UE4's lighting cache or Unity's light probes? I like to learn about a technique before implementing it.

Here are some options for storing directional ambient light; any of them can be called a 'probe':

 

One color for each side of a cube, so 6 directions; you interpolate the 3 values facing a given normal. (Known as Valve's ambient cube, used in HL2.)

Spherical harmonics. The number of bands defines the detail; 2 or 3 bands (12 / 27 floats) are enough for ambient diffuse.

Cube maps. Enough detail for reflections (used a lot for IBL / PBR today). The lowest mip level is roughly equivalent to an ambient cube.

Dual paraboloid maps (or two sphere maps). Same as cube maps, but need only 2 textures instead of 6.
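The first option above, the ambient cube, is simple enough to sketch in full. A hedged C version, assuming unit normals (so the squared components sum to 1 and the three side weights form a proper blend):

```c
#include <assert.h>

typedef struct { float r, g, b; } Color3;

/* Six colors, one per axis direction: cube[0..5] = +X, -X, +Y, -Y, +Z, -Z.
   A normal blends the three sides it faces, each weighted by the
   squared normal component along that axis. */
Color3 ambient_cube_eval(const Color3 cube[6], const float n[3]) {
    Color3 out = { 0.0f, 0.0f, 0.0f };
    for (int axis = 0; axis < 3; ++axis) {
        float w = n[axis] * n[axis];  /* squared weight, sums to 1 */
        /* pick the positive or negative face by the component's sign */
        const Color3 *side = &cube[axis * 2 + (n[axis] < 0.0f ? 1 : 0)];
        out.r += w * side->r;
        out.g += w * side->g;
        out.b += w * side->b;
    }
    return out;
}
```

Because the evaluation depends on the normal, this already supports bump mapping, unlike a single directionless ambient value.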

 

Independent of the data format you choose, there remains the question of how to apply it at a given sample position. Some options:

 

Bounding volumes set manually by an artist, e.g. a sphere or box with a soft border: you find all affecting volumes per sample and accumulate their contributions.

Uniform grid / multiresolution grids: you interpolate the 8 closest probes. E.g. UE4's indirect lighting cache.

Voronoi tetrahedralization: you interpolate the 4 closest probes (similar to interpolating 3 triangle vertices by barycentric coordinates in the 2D case). AFAIK Unity uses this.
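The uniform-grid option boils down to a trilinear blend of the 8 probes surrounding the sample point. A sketch with scalar probes for brevity; real probes hold e.g. SH coefficients, and each coefficient is blended the same way:

```c
#include <assert.h>

/* p[] holds the 8 corner probes of one grid cell, ordered by the
   (x, y, z) bits of the index: p[z*4 + y*2 + x]. fx, fy, fz in [0,1]
   are the fractional position of the sample inside the cell. */
float trilerp_probes(const float p[8], float fx, float fy, float fz) {
    /* lerp the 4 edges along X */
    float c00 = p[0] + (p[1] - p[0]) * fx;
    float c10 = p[2] + (p[3] - p[2]) * fx;
    float c01 = p[4] + (p[5] - p[4]) * fx;
    float c11 = p[6] + (p[7] - p[6]) * fx;
    /* lerp the 2 faces along Y */
    float c0 = c00 + (c10 - c00) * fy;
    float c1 = c01 + (c11 - c01) * fy;
    /* lerp along Z */
    return c0 + (c1 - c0) * fz;
}
```

The tetrahedral option replaces these 8 weights with 4 barycentric weights, which is cheaper per sample but requires building and walking the tetrahedral mesh.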

 

The automatic methods often require manual tuning too, e.g. a cut-off plane to prevent light leaking through a wall into a neighbouring room.

Note that a directionless ambient light, as used in Quake, does not support bump mapping. The 'get ambient from the floor' trick works well only for objects near the ground.

There are probably hundreds of papers covering the details.

On 7/16/2017 at 8:00 PM, afraidofdark said:

Increasing sample count and applying attenuation based on distance may produce a good result. I'll definitely give this a shot.

One of my questions remains unanswered: how do modern engines do this? Is there a paper, tutorial, or book chapter that explains UE4's lighting cache or Unity's light probes? I like to learn about a technique before implementing it.

The "get ambient from floor" trick only applies to Quake 2 (and possibly Quake 1). Quake 3 uses a 3D grid; each cell contains 3 values: a direction, an ambient light color, and a directed light color. This data is filled in during lightmap baking.

The grid is bounded by the level's bounding box and divided by some cell size (default xyz: 64, 64, 128). That is, each cell spans 64 units in X and Y and 128 in Z (which is up in Quake).
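Applying a cell's three values to a vertex could look like the following sketch (illustrative names, not the engine's own): the ambient color plus a single N·L term from the baked direction and directed color.

```c
#include <assert.h>

typedef struct { float r, g, b; } Col;

static float dot3(const float a[3], const float b[3]) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

/* normal and lightDir are assumed to be unit vectors */
Col shade_vertex(Col ambient, Col directed, const float lightDir[3],
                 const float normal[3]) {
    float d = dot3(normal, lightDir);
    if (d < 0.0f) d = 0.0f;   /* back-facing vertices get ambient only */
    Col out = { ambient.r + d * directed.r,
                ambient.g + d * directed.g,
                ambient.b + d * directed.b };
    return out;
}
```

One directed term per model is crude compared to real per-light shading, but it is cheap and reads directly from the baked grid data.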

 

Modern games do largely the same thing, but store the results in different forms (spherical harmonics, etc.). JoeJ covers this in the post above.

For ambient diffuse, variations of the techniques JoeJ listed can be seen in modern engines, each with their own tradeoffs. For ambient reflections, cube maps are used, oftentimes hand-placed.

Edited by Styves



