Recommended Posts

afraidofdark    278

I have just noticed that, in Quake 3 and Half-Life, dynamic models are affected by the light map. For example, in dark areas the gun the player holds appears darker. How did they achieve this effect? I could use image-based lighting techniques (like placing an environment probe and using it for reflections and ambient lighting), but that tech wasn't used in games back then, so there must be a simpler method.

Here is a link that shows how modern engines do it: Indirect Lighting Cache. It would be nice if you know of a paper that explains this technique. Can I apply it to Quake 3's light map generator and BSP format?

xycsoscyx    1150

Quake 3 used a light grid: a 3D grid of lighting samples precomputed at map compile time. At runtime it finds the closest samples for the dynamic model it is rendering, then applies simple vertex lighting from that data. This is only for dynamic models, though; the scene itself is all precomputed lightmaps (although they can be toggled/adjusted to achieve more dynamic effects, like a rocket's light trail). The Quake 3 source code is readily available online, so you can look it over yourself to get an idea of how they did it.
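
Roughly like this (a minimal sketch with made-up types and names, not the actual id Tech 3 code):

#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

// One precomputed sample, baked at map compile time. The fields mirror what
// a light grid typically stores; the names are illustrative only.
struct GridCell {
    Vec3 ambient;    // constant ambient term
    Vec3 directed;   // color of the strongest incoming light
    Vec3 direction;  // ...and its normalized direction
};

struct LightGrid {
    Vec3 origin;     // world-space position of cell (0,0,0)
    Vec3 cellSize;   // world units per cell on each axis
    int nx, ny, nz;  // grid dimensions
    std::vector<GridCell> cells;

    // Nearest-cell lookup; the real engine filters between neighbouring cells.
    const GridCell& Nearest(const Vec3& p) const {
        int ix = std::clamp(int((p.x - origin.x) / cellSize.x), 0, nx - 1);
        int iy = std::clamp(int((p.y - origin.y) / cellSize.y), 0, ny - 1);
        int iz = std::clamp(int((p.z - origin.z) / cellSize.z), 0, nz - 1);
        return cells[(iz * ny + iy) * nx + ix];
    }
};

// Simple per-vertex lighting from the grid sample: ambient + N.L * directed.
Vec3 ShadeVertex(const GridCell& c, const Vec3& n) {
    float ndotl = n.x * c.direction.x + n.y * c.direction.y + n.z * c.direction.z;
    if (ndotl < 0.0f) ndotl = 0.0f;
    return { c.ambient.x + ndotl * c.directed.x,
             c.ambient.y + ndotl * c.directed.y,
             c.ambient.z + ndotl * c.directed.z };
}

The real engine stores this data in the BSP and filters between neighbouring samples, but this captures the basic lookup-then-vertex-light flow.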

benjamin1441    128

I also think the bots/opponents are shadowed in the darker areas of the map, although it's harder to tell because they move around very quickly, and the dynamic lights also light them up when they fire their guns at you.

Hodgman    51341

Lots of games from that era would trace a single ray downwards to find the floor under the player's feet, then find the lightmap-UV coordinates at that point, then sample the lightmap at those coordinates and use it as a constant ambient value for the player.
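
In code the idea is roughly this (a minimal sketch; TraceRayDown and SampleLightmap stand in for whatever ray-cast and lightmap-fetch routines the engine has):

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

struct FloorHit {
    bool hit;
    Vec2 lightmapUV;  // lightmap coordinates at the point under the model
};

// Hypothetical hooks: ray vs. world, and a lightmap texel fetch. The real
// functions depend entirely on your BSP/collision and lightmap code.
FloorHit TraceRayDown(const Vec3& from);
Vec3 SampleLightmap(const Vec2& uv);

Vec3 AmbientForModel(const Vec3& modelPos) {
    FloorHit floor = TraceRayDown(modelPos);
    if (!floor.hit)
        return { 0.1f, 0.1f, 0.1f };         // fallback when there is no floor below
    return SampleLightmap(floor.lightmapUV); // constant ambient for the whole model
}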

afraidofdark    278
On 15.07.2017 at 4:39 PM, Hodgman said:

Lots of games from that era would trace a single ray downwards to find the floor under the player's feet, then find the lightmap-UV coordinates at that point, then sample the lightmap at those coordinates and use it as a constant ambient value for the player.

Increasing the sample count and applying distance-based attenuation may produce a good result. I'll definitely give this a shot.
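
Something like this is what I have in mind (just a sketch; TraceAndSample is a hypothetical helper that traces down from a point and returns the lightmap color and hit distance):

struct Vec3 { float x, y, z; };
struct Sample { bool hit; Vec3 color; float distance; };

// Hypothetical helper: trace straight down from 'from', sample the lightmap
// at the hit point, and report the distance to the floor.
Sample TraceAndSample(const Vec3& from);

Vec3 AmbientMultiSample(const Vec3& modelPos, float radius) {
    // center sample plus a small ring of offsets around the model (Z is up)
    const float offs[5][2] = { {0,0}, {1,0}, {-1,0}, {0,1}, {0,-1} };
    Vec3 sum = { 0.0f, 0.0f, 0.0f };
    float wsum = 0.0f;
    for (const float* o : offs) {
        Vec3 p = { modelPos.x + o[0] * radius, modelPos.y + o[1] * radius, modelPos.z };
        Sample s = TraceAndSample(p);
        if (!s.hit) continue;
        float w = 1.0f / (1.0f + s.distance);  // attenuate far-away floor hits
        sum.x += s.color.x * w;
        sum.y += s.color.y * w;
        sum.z += s.color.z * w;
        wsum += w;
    }
    if (wsum <= 0.0f) return { 0.1f, 0.1f, 0.1f };  // fallback when nothing was hit
    return { sum.x / wsum, sum.y / wsum, sum.z / wsum };
}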

One of my questions remains unanswered: how do modern engines do this? Is there a paper, tutorial, or book chapter that explains UE4's indirect lighting cache or Unity's light probes? I like to read up on a technique before implementing anything.

JoeJ    2592
10 hours ago, afraidofdark said:

One of my questions remains unanswered: how do modern engines do this? Is there a paper, tutorial, or book chapter that explains UE4's indirect lighting cache or Unity's light probes? I like to read up on a technique before implementing anything.

Here are some options for storing directional ambient light; any of them can be called a 'probe'. (A minimal ambient-cube evaluation sketch follows the list.)

- One color for each side of a cube, so 6 directions, and you blend the 3 values facing a given normal (known as Valve's ambient cube, used in HL2).
- Spherical harmonics. The number of bands you use defines the detail; 2 or 3 bands (12 / 27 floats) is enough for ambient diffuse.
- Cube maps. Enough detail for reflections (used a lot for IBL / PBR today). The lowest LOD is equivalent to an ambient cube.
- Dual paraboloid maps (or two sphere maps). Same as cube maps, but they need only 2 textures instead of 6.
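
For example, evaluating an ambient cube is just a blend of 3 of the 6 face colors, weighted by the squared components of the normal (a minimal sketch; AmbientCube is an illustrative type, the weighting follows Valve's published approach):

struct Vec3 { float x, y, z; };

struct AmbientCube {
    Vec3 face[6];  // +X, -X, +Y, -Y, +Z, -Z
};

// Blend the 3 face colors facing the normal, weighted by the squared normal
// components; for a unit normal the weights already sum to 1.
Vec3 EvalAmbientCube(const AmbientCube& c, const Vec3& n) {
    Vec3 sq = { n.x * n.x, n.y * n.y, n.z * n.z };
    const Vec3& cx = c.face[n.x >= 0.0f ? 0 : 1];
    const Vec3& cy = c.face[n.y >= 0.0f ? 2 : 3];
    const Vec3& cz = c.face[n.z >= 0.0f ? 4 : 5];
    return { sq.x * cx.x + sq.y * cy.x + sq.z * cz.x,
             sq.x * cx.y + sq.y * cy.y + sq.z * cz.y,
             sq.x * cx.z + sq.y * cy.z + sq.z * cz.z };
}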

 

Independent of the data format you choose, there remains the question of how to apply it at a given sample position. Some options (a trilinear-interpolation sketch follows the list):

- Bounding volumes set manually by an artist, e.g. a sphere or box with a soft border: you find all volumes affecting a sample and accumulate their contributions.
- Uniform grid / multi-resolution grids: you interpolate the 8 closest probes, e.g. UE4's indirect lighting cache.
- Delaunay tetrahedralization: you interpolate the closest 4 probes (similar to interpolating 3 triangle vertices by barycentric coordinates in the 2D case). AFAIK Unity uses this.
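
And a minimal sketch of the uniform-grid case, blending the 8 surrounding probes trilinearly (ProbeGrid is an illustrative type; a real engine would interpolate SH coefficients or similar rather than a single color):

#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

struct ProbeGrid {
    Vec3 origin, cellSize;
    int nx, ny, nz;
    std::vector<Vec3> probes;  // one color per grid point, for brevity

    Vec3 Probe(int x, int y, int z) const {
        x = std::clamp(x, 0, nx - 1);
        y = std::clamp(y, 0, ny - 1);
        z = std::clamp(z, 0, nz - 1);
        return probes[(z * ny + y) * nx + x];
    }
};

static Vec3 Lerp(const Vec3& a, const Vec3& b, float t) {
    return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t, a.z + (b.z - a.z) * t };
}

// Trilinear blend of the 8 probes surrounding the sample position.
Vec3 SampleTrilinear(const ProbeGrid& g, const Vec3& p) {
    float fx = (p.x - g.origin.x) / g.cellSize.x;
    float fy = (p.y - g.origin.y) / g.cellSize.y;
    float fz = (p.z - g.origin.z) / g.cellSize.z;
    int x = int(std::floor(fx)), y = int(std::floor(fy)), z = int(std::floor(fz));
    float tx = std::clamp(fx - x, 0.0f, 1.0f);
    float ty = std::clamp(fy - y, 0.0f, 1.0f);
    float tz = std::clamp(fz - z, 0.0f, 1.0f);

    Vec3 c00 = Lerp(g.Probe(x, y,     z),     g.Probe(x + 1, y,     z),     tx);
    Vec3 c10 = Lerp(g.Probe(x, y + 1, z),     g.Probe(x + 1, y + 1, z),     tx);
    Vec3 c01 = Lerp(g.Probe(x, y,     z + 1), g.Probe(x + 1, y,     z + 1), tx);
    Vec3 c11 = Lerp(g.Probe(x, y + 1, z + 1), g.Probe(x + 1, y + 1, z + 1), tx);
    return Lerp(Lerp(c00, c10, ty), Lerp(c01, c11, ty), tz);
}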

 

Also, the automatic methods often require manual tuning, e.g. a cut-off plane to prevent light leaking through a wall into a neighbouring room.

Note that the directionless ambient light used in Quake does not support bump mapping. The 'get ambient from the floor' trick also only works well for objects near the ground.

There are probably hundreds of papers covering the details.

Styves    1792
On 7/16/2017 at 8:00 PM, afraidofdark said:

Increasing the sample count and applying distance-based attenuation may produce a good result. I'll definitely give this a shot.

One of my questions remains unanswered: how do modern engines do this? Is there a paper, tutorial, or book chapter that explains UE4's indirect lighting cache or Unity's light probes? I like to read up on a technique before implementing anything.

The "get ambient from floor" only applies to Quake 2 (and possibly Quake 1). Quake 3 uses a 3D grid, each cell contained 3 values: Direction, Ambient light color, and Direct light color. This data is filled during lightmap baking.

The grid covers the level's bounding box, divided by a fixed cell size (the default is 64, 64, 128). That is, each cell spans 64 units in X and Y and 128 units in Z (which is up in Quake); a rough sketch of that sizing is below.
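
Sizing-wise that works out to something like this (a rough sketch with illustrative names, not the actual Quake 3 identifiers):

#include <cmath>

struct Bounds { float mins[3], maxs[3]; };

// Derive the grid dimensions from the level bounds and the default spacing.
void ComputeGridDims(const Bounds& level, int dims[3]) {
    const float cell[3] = { 64.0f, 64.0f, 128.0f };  // XY spacing, and Z (up)
    for (int axis = 0; axis < 3; ++axis) {
        float extent = level.maxs[axis] - level.mins[axis];
        dims[axis] = int(std::ceil(extent / cell[axis])) + 1;  // samples at both ends
    }
}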

 

Modern games do largely the same thing, but they store the results in different forms (spherical harmonics, whatever else); JoeJ covers this in the post above.

For ambient diffuse, variations of the approaches JoeJ lists can be seen in modern engines, each with its own tradeoffs. For ambient reflections, cube maps are used, oftentimes hand-placed.

