

I have just noticed that, in Quake 3 and Half-Life, dynamic models are affected by the lightmap. For example, in dark areas the gun the player holds appears darker. How did they achieve this effect? I could use image-based lighting techniques (like placing an environment probe and using it for reflections and ambient lighting), but that kind of tech wasn't used in games back then, so there must be a simpler method.

Here is a link that shows how modern engines do it: Indirect Lighting Cache. It would be nice if you know of a paper that explains this technique. Can I apply it to Quake 3's lightmap generator and BSP format?


Quake 3 used a light grid: a 3D grid of precomputed lighting that it creates at map compile time. At run time it finds the closest grid samples for each dynamic model being rendered and uses simple vertex lighting from that set for the model. This only applies to dynamic models, though; the scene itself is all precomputed lightmaps (although they are toggled/adjusted to achieve more dynamic effects, like the light from a rocket trail). The Quake 3 code is readily available online, so you can look it over yourself to get an idea of how they did it.
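
Roughly, the per-vertex lighting from one grid sample boils down to something like the sketch below. This is a minimal illustration with made-up names (LightGridCell, shadeVertex), not the actual id Software code; the real grid data layout is described in Styves' post further down.

// Hypothetical cell of the baked light grid: ambient colour, directed-light
// colour and the dominant light direction for that region of the map.
struct LightGridCell {
    float ambient[3];    // baked ambient RGB
    float directed[3];   // baked directed-light RGB
    float dir[3];        // dominant light direction (unit length)
};

// Shade one model vertex using the cell sampled at the model's origin.
void shadeVertex(const LightGridCell& cell,
                 const float normal[3],   // world-space vertex normal
                 float outColor[3])
{
    // Lambert term against the grid's dominant light direction.
    float ndotl = normal[0]*cell.dir[0] + normal[1]*cell.dir[1] + normal[2]*cell.dir[2];
    if (ndotl < 0.0f) ndotl = 0.0f;

    for (int i = 0; i < 3; ++i)
        outColor[i] = cell.ambient[i] + cell.directed[i] * ndotl;
}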


I also think the bots/opponents are shadowed in the darker areas of the map, although it's harder to tell because they move around very quickly, and the dynamic lights light them up when they fire their guns at you.


Lots of games from that era would trace a single ray downwards to find the floor under the player's feet, then find the lightmap-UV coordinates at that point, then sample the lightmap at those coordinates and use it as a constant ambient value for the player.
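
In code, the final step of that trick is just a lightmap texel fetch. Here is a minimal sketch, assuming the engine's downward ray cast has already identified the floor surface's lightmap and the hit point's lightmap UV; the struct and function names are made up for illustration, not any particular engine's API.

// A lightmap page, assumed here to be a plain 8-bit RGB array.
struct Lightmap {
    int width, height;
    const unsigned char* rgb;   // width * height * 3 bytes
};

// Fetch the nearest lightmap texel at (u, v) (assumed to be in [0,1])
// and return it as a 0..1 ambient colour for the whole model.
void ambientFromLightmap(const Lightmap& lm, float u, float v, float outAmbient[3])
{
    int x = (int)(u * (lm.width  - 1) + 0.5f);
    int y = (int)(v * (lm.height - 1) + 0.5f);
    const unsigned char* texel = lm.rgb + (y * lm.width + x) * 3;
    for (int i = 0; i < 3; ++i)
        outAmbient[i] = texel[i] / 255.0f;
}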

On 15.07.2017 at 4:39 PM, Hodgman said:

Lots of games from that era would trace a single ray downwards to find the floor under the player's feet, then find the lightmap-UV coordinates at that point, then sample the lightmap at those coordinates and use it as a constant ambient value for the player.

Increasing the sample count and applying distance-based attenuation may produce a good result. I'll definitely give this a shot.

One of my questions remains unanswered: how do modern engines do this? Is there a paper, tutorial, or book chapter that explains UE4's light cache or Unity's light probes? I'd like to learn about it before implementing anything.

10 hours ago, afraidofdark said:

One of my questions remains unanswered: how do modern engines do this? Is there a paper, tutorial, or book chapter that explains UE4's light cache or Unity's light probes? I'd like to learn about it before implementing anything.

Here are some options for how to store directional ambient light; any of them can be called a 'probe':

 

One color for each side of a cube, so 6 directions; you blend the 3 values facing a given normal (known as Valve's ambient cube, used in HL2). See the evaluation sketch after this list.

Spherical harmonics. The number of bands you use defines the detail; 2 or 3 bands (12 / 27 floats) is enough for ambient diffuse.

Cube maps. Enough detail for reflections (used a lot for IBL / PBR today). The lowest mip level is equivalent to an ambient cube.

Dual paraboloid maps (or two sphere maps). Same as cube maps, but need only 2 textures instead of 6.
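
As a concrete example of the first option, here is a minimal sketch of evaluating a Valve-style ambient cube for a given normal: the six face colours are blended with the squared normal components as weights, as described in Valve's 'Shading in Valve's Source Engine' course notes. The function name and data layout are illustrative assumptions.

// Evaluate an ambient cube (face order: +X, -X, +Y, -Y, +Z, -Z) for a
// unit world-space normal. The squared normal components sum to 1 and
// act as the blend weights between the three facing sides.
void evalAmbientCube(const float cube[6][3], const float n[3], float out[3])
{
    float nSq[3]   = { n[0]*n[0], n[1]*n[1], n[2]*n[2] };
    int   isNeg[3] = { n[0] < 0.0f, n[1] < 0.0f, n[2] < 0.0f };

    for (int c = 0; c < 3; ++c)
        out[c] = nSq[0] * cube[0 + isNeg[0]][c]   // +X or -X face
               + nSq[1] * cube[2 + isNeg[1]][c]   // +Y or -Y face
               + nSq[2] * cube[4 + isNeg[2]][c];  // +Z or -Z face
}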

 

Independent of the data format you choose, there remains the question of how to apply it at a given sample position. Some options:

 

Bounding volumes set manually by an artist, e.g. a sphere or box with a soft border: you find all volumes affecting a sample and accumulate their contributions.

Uniform grid / multiresolution grids: you interpolate the 8 closest probes (e.g. UE4's light cache). See the trilinear blend sketch after this list.

Voronoi tetrahedralization: you interpolate the closest 4 probes (similar to interpolating 3 triangle vertices by barycentric coordinates in the 2D case). AFAIK Unity uses this.
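
For the uniform-grid option the lookup is a plain trilinear blend of the 8 probes surrounding the sample position. Below is a minimal sketch that reduces each probe to a single RGB value for brevity; a real engine would blend SH coefficients or an ambient cube per probe the same way. The struct fields and memory layout are assumptions, not any engine's actual format.

#include <algorithm>

// Uniform probe grid: counts per axis (assumed >= 2 each), world origin of
// probe (0,0,0), spacing between probes, and tightly packed RGB data.
struct ProbeGrid {
    int nx, ny, nz;
    float origin[3];
    float spacing;
    const float* rgb;     // nx*ny*nz probes, 3 floats each, x-fastest
};

// Trilinearly blend the 8 probes surrounding 'pos' into 'out'.
void sampleProbeGrid(const ProbeGrid& g, const float pos[3], float out[3])
{
    int counts[3] = { g.nx, g.ny, g.nz };
    int i0[3]; float f[3];
    for (int a = 0; a < 3; ++a) {
        float t = (pos[a] - g.origin[a]) / g.spacing;
        t = std::max(0.0f, std::min(t, (float)(counts[a] - 1)));
        i0[a] = std::min((int)t, counts[a] - 2);   // lower corner of the cell
        f[a]  = t - (float)i0[a];                  // fractional part for the lerp
    }
    out[0] = out[1] = out[2] = 0.0f;
    for (int corner = 0; corner < 8; ++corner) {
        int dx = corner & 1, dy = (corner >> 1) & 1, dz = (corner >> 2) & 1;
        float w = (dx ? f[0] : 1.0f - f[0])
                * (dy ? f[1] : 1.0f - f[1])
                * (dz ? f[2] : 1.0f - f[2]);
        int idx = ((i0[2] + dz) * g.ny + (i0[1] + dy)) * g.nx + (i0[0] + dx);
        for (int c = 0; c < 3; ++c)
            out[c] += w * g.rgb[idx * 3 + c];
    }
}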

 

The automatic methods also often require manual tuning, e.g. a cut-off plane to prevent light leaking through a wall into a neighbouring room.

Note that the directionless ambient light used in Quake does not support bump mapping, and the 'get ambient from floor' trick works well only for objects near the ground.

There are probably hundreds of papers covering the details.

On 7/16/2017 at 8:00 PM, afraidofdark said:

Increasing the sample count and applying distance-based attenuation may produce a good result. I'll definitely give this a shot.

One of my questions remains unanswered: how do modern engines do this? Is there a paper, tutorial, or book chapter that explains UE4's light cache or Unity's light probes? I'd like to learn about it before implementing anything.

The "get ambient from floor" only applies to Quake 2 (and possibly Quake 1). Quake 3 uses a 3D grid, each cell contained 3 values: Direction, Ambient light color, and Direct light color. This data is filled during lightmap baking.

The grid is bound to the level's bounding box and divided by a fixed cell size (the default is 64, 64, 128 in XYZ). That is, each cell spans 64 units in X and Y and 128 units in Z (which is up in Quake).
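
Mapping a world position to its grid cell is then just a divide and clamp per axis. Here is a rough sketch using the default cell size mentioned above; the struct fields and flat-index layout are illustrative, not the exact q3map/ioquake3 data structures.

#include <cmath>

// Light grid extents: origin (snapped level bounding-box minimum) and the
// number of cells along each axis.
struct LightGrid {
    float mins[3];
    int   counts[3];
};

static const float kCellSize[3] = { 64.0f, 64.0f, 128.0f };   // default x, y, z cell size

// Return the flat index of the cell containing worldPos, clamped to the grid.
int gridCellIndex(const LightGrid& grid, const float worldPos[3])
{
    int cell[3];
    for (int a = 0; a < 3; ++a) {
        int i = (int)std::floor((worldPos[a] - grid.mins[a]) / kCellSize[a]);
        if (i < 0) i = 0;
        if (i >= grid.counts[a]) i = grid.counts[a] - 1;
        cell[a] = i;
    }
    // Cells laid out x-fastest here; the real BSP lump layout may differ.
    return (cell[2] * grid.counts[1] + cell[1]) * grid.counts[0] + cell[0];
}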

 

Modern games do largely the same thing; however, they store the results in different forms (spherical harmonics, among others). JoeJ covers this in the post above.

For ambient diffuse, variations of the approaches JoeJ describes can be seen in modern engines, each with its own trade-offs. For ambient reflections, cube maps are used, often hand-placed.



