DX11 Dynamic IBL

joeblack

Hi guys,

I implemented IBL in my engine. Currently I precompute cubemaps offline and use them in game. This works well, but it's only static. I would like to implement dynamic cubemap creation and convolution, and I more or less know how to do it. My current workflow is: render an HDR cubemap in 3ds Max with mental ray (white material on everything), convolve it with IBLBaker, and use it in game. Alternatively, capture a probe in-game (only once), convolve it with IBLBaker, and use it without ever changing it. This is used for every "ambient" light in the game. On top of that I render the "normal" lights (with ambient and specular).

I would like to capture and convolve cubemaps dynamically in game: capture a cubemap in 3ds Max once, use it in game, and then regenerate the cubemaps at runtime from time to time. This sounds easy, but as I said, I first render the ambient lights and then the normal lights on top of them. Then I create a cubemap from that result, use it in the next frame for the ambient light, add the normal lights again... creating infinite feedback. Is there any way around this? I believe games are using realtime-generated IBL cubemaps, or is it done completely differently?
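For reference, a minimal sketch of the capture side in plain D3D11: a six-face HDR cubemap render target with one render-target view per face, plus a cube SRV for sampling it later. The face size, format and single mip level are assumptions rather than anything from a specific engine; rendering each face with a 90-degree-FOV camera and the convolution/prefilter are separate passes.

// A minimal sketch (not a drop-in implementation): create a 6-face HDR cubemap
// render target plus per-face RTVs and a cube SRV.
#include <d3d11.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

static const UINT kFaceSize = 128;  // assumed probe resolution

void CreateProbeTargets(ID3D11Device* device,
                        ComPtr<ID3D11Texture2D>& cubeTex,
                        ComPtr<ID3D11RenderTargetView> faceRTV[6],
                        ComPtr<ID3D11ShaderResourceView>& cubeSRV)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = kFaceSize;
    desc.Height = kFaceSize;
    desc.MipLevels = 1;
    desc.ArraySize = 6;                             // one slice per cube face
    desc.Format = DXGI_FORMAT_R16G16B16A16_FLOAT;   // HDR target
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
    desc.MiscFlags = D3D11_RESOURCE_MISC_TEXTURECUBE;
    device->CreateTexture2D(&desc, nullptr, cubeTex.GetAddressOf());

    // One render-target view per face, so the scene can be drawn into each
    // face with a 90-degree-FOV camera looking down +X, -X, +Y, -Y, +Z, -Z.
    for (UINT face = 0; face < 6; ++face)
    {
        D3D11_RENDER_TARGET_VIEW_DESC rtv = {};
        rtv.Format = desc.Format;
        rtv.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2DARRAY;
        rtv.Texture2DArray.MipSlice = 0;
        rtv.Texture2DArray.FirstArraySlice = face;
        rtv.Texture2DArray.ArraySize = 1;
        device->CreateRenderTargetView(cubeTex.Get(), &rtv, faceRTV[face].GetAddressOf());
    }

    // Cube SRV so the result can be sampled (and later convolved) as a TextureCube.
    D3D11_SHADER_RESOURCE_VIEW_DESC srv = {};
    srv.Format = desc.Format;
    srv.ViewDimension = D3D11_SRV_DIMENSION_TEXTURECUBE;
    srv.TextureCube.MostDetailedMip = 0;
    srv.TextureCube.MipLevels = 1;
    device->CreateShaderResourceView(cubeTex.Get(), &srv, cubeSRV.GetAddressOf());
}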

Hodgman
3 hours ago, joeblack said:

Creating infinite feedback.

The feedback loop is actually just "bounce lighting". As long as your materials obey the physical rule of conservation of energy, it will be fine. Every pass through the loop loses most of the energy, so each iteration contributes exponentially less than the last and the result converges instead of blowing up.
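A tiny, self-contained illustration of that convergence (the 0.5 average albedo is just an assumed number, not anything from a real scene):

// Toy model: each feedback iteration re-emits only the reflected fraction of
// the previous iteration's energy, so the total forms a geometric series.
#include <cstdio>

int main()
{
    const float albedo = 0.5f;  // assumed average reflectance, must be < 1
    float bounce = 1.0f;        // energy injected by the "normal" lights
    float total  = 0.0f;
    for (int i = 0; i < 16; ++i)
    {
        total += bounce;        // what the probes have accumulated so far
        bounce *= albedo;       // the next pass only sees the reflected part
    }
    // total approaches 1 / (1 - albedo) = 2.0 here and never diverges
    std::printf("accumulated energy: %f\n", total);
    return 0;
}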

FreneticPonE
6 hours ago, Hodgman said:

The feedback loop is actually just "bounce lighting". As long as your materials obey the physical rule of conservation of energy, it will be fine. Every pass through the loop loses most of the energy, so each iteration contributes exponentially less than the last and the result converges instead of blowing up.

Not really; this leads into the "local lighting" infinite-bounce trap. Light won't "travel" throughout the level correctly unless you iterate over every single cubemap in it, which you don't really want to do. So you end up with pockets of extreme brightness, where the light keeps bouncing around locally, right next to pockets of extreme darkness. You also get iteration lag: when you start it's very dark, and the longer you hang around the brighter it gets (with diminishing returns) as each iteration bounces more light. It can look very distracting, because there's a literal "lag" to the lighting, as if light were somehow travelling very slowly.

The general idea is doable, however! The only fully shipped version I'm aware of is Call of Duty: Infinite Warfare; see their "Fast Filtering of Reflection Probes" paper and the accompanying rendering presentation. There are several strategies you could choose from, but all of them ditch the idea of taking the previous cubemap lighting results and re-applying them infinitely and recursively.

One is to light each probe at runtime using only local lights and the sun. You'd only get one "bounce", but you could render an ambient term as well. Another is to render the ambient term into the reflection probes and then use only the probes in the final pass, with no separate ambient there. But this can lead to odd color-bleeding results that don't look good.

A hack could go like this: light your cubemap with an ambient term, then take the resulting HDR cubemap and re-light the original, unlit cubemap with it once. This should provide an approximation of multiple light bounces and smooth out any weird color/light-bleeding artifacts that come from doing only one "ambient" bounce. As long as you smoothly blend between cubemaps for both spec/diffuse, I'd suspect there wouldn't be many "boundary" artifacts where inappropriately dramatic lighting changes happen. A toy sketch of this two-pass idea follows below.
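A toy, CPU-side sketch of that relight-once idea, shrinking the cubemap down to six scalar "texels" purely to show the data flow; in a real engine both passes would be GPU lighting and prefilter passes over the captured probe faces, and every number below is made up:

// Pass 1 lights the captured (unlit) probe data with direct light plus a flat
// ambient term; pass 2 relights the same unlit data once, using the pass-1
// result as the ambient term. Nothing is fed back after that, so there is no
// infinite feedback, but surfaces still pick up an approximate second bounce.
#include <cstdio>

int main()
{
    const int   kTexels = 6;  // one value per cube face, purely illustrative
    const float albedo[kTexels] = { 0.5f, 0.5f, 0.3f, 0.7f, 0.4f, 0.6f };
    const float direct[kTexels] = { 1.0f, 0.0f, 0.2f, 0.0f, 0.8f, 0.0f };
    const float flatAmbient = 0.1f;

    float pass1[kTexels], pass2[kTexels];

    // Pass 1: direct light + flat ambient only (no previous probe data).
    for (int i = 0; i < kTexels; ++i)
        pass1[i] = albedo[i] * (direct[i] + flatAmbient);

    // "Convolution": here just an average of the pass-1 result. A real
    // implementation would do a proper diffuse/specular prefilter.
    float ambient2 = 0.0f;
    for (int i = 0; i < kTexels; ++i) ambient2 += pass1[i];
    ambient2 /= kTexels;

    // Pass 2: relight the original, unlit data once with the pass-1 ambient.
    for (int i = 0; i < kTexels; ++i)
        pass2[i] = albedo[i] * (direct[i] + ambient2);

    for (int i = 0; i < kTexels; ++i)
        std::printf("texel %d: pass1 %.3f  pass2 %.3f\n", i, pass1[i], pass2[i]);
    return 0;
}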

That being said, check out the rendering presentation's separate spherical-harmonic ambient-occlusion-like term. The idea is to take a higher-resolution, precomputed sample of the global illumination results and, wherever that differs from the sparser cubemap information, bake the difference into a greyscale spherical harmonic. That way naturally dark areas don't get lit up inappropriately just because the cubemap isn't quite right, and vice versa. It's a hack, but an effective one.
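Roughly what applying such a greyscale SH factor could look like at shading time, written as standalone C++ for readability (in practice this would live in the shader); the nine shOcclusion coefficients are assumed to come from an offline bake, and all names here are placeholders:

// Evaluate the 9 real spherical-harmonic basis functions (bands 0-2) for a
// unit normal and use them to scale the probe's ambient result by a baked
// greyscale correction factor.
#include <algorithm>

struct Float3 { float x, y, z; };

void EvalSH9(const Float3& n, float out[9])
{
    out[0] = 0.282095f;
    out[1] = 0.488603f * n.y;
    out[2] = 0.488603f * n.z;
    out[3] = 0.488603f * n.x;
    out[4] = 1.092548f * n.x * n.y;
    out[5] = 1.092548f * n.y * n.z;
    out[6] = 0.315392f * (3.0f * n.z * n.z - 1.0f);
    out[7] = 1.092548f * n.x * n.z;
    out[8] = 0.546274f * (n.x * n.x - n.y * n.y);
}

Float3 ApplySHOcclusion(const Float3& probeAmbient, const Float3& normal,
                        const float shOcclusion[9])
{
    float basis[9];
    EvalSH9(normal, basis);

    float factor = 0.0f;
    for (int i = 0; i < 9; ++i)
        factor += shOcclusion[i] * basis[i];

    // Clamp: a baked difference term can over- or undershoot slightly.
    factor = std::min(1.0f, std::max(0.0f, factor));
    return { probeAmbient.x * factor, probeAmbient.y * factor, probeAmbient.z * factor };
}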

Edit - The Witcher 3 also does some sort of dynamic cubemap thing, but I'm not entirely sure how it works and I don't think they ever said.



  • Similar Content

    • By gomidas
      I am trying to add a normal map to my project. I have an example of a cube, and I think I have the normal in my shader. Then I set a shader resource view for the texture (NOT the bump map):
      device.ImmediateContext.PixelShader.SetShaderResource(0, textureView);
      device.ImmediateContext.Draw(VerticesCount, 0);
      What should I do to set my normal map, and how is it generally done in DX11? A C++ example would help.
    • By fighting_falcon93
      Imagine that we have a vertex structure that looks like this:
      struct Vertex
      {
          XMFLOAT3 position;
          XMFLOAT4 color;
      };
      The vertex shader looks like this:
      cbuffer MatrixBuffer
      {
          matrix world;
          matrix view;
          matrix projection;
      };

      struct VertexInput
      {
          float4 position : POSITION;
          float4 color : COLOR;
      };

      struct PixelInput
      {
          float4 position : SV_POSITION;
          float4 color : COLOR;
      };

      PixelInput main(VertexInput input)
      {
          PixelInput output;
          input.position.w = 1.0f;
          output.position = mul(input.position, world);
          output.position = mul(output.position, view);
          output.position = mul(output.position, projection);
          output.color = input.color;
          return output;
      }
      And the pixel shader looks like this:
      struct PixelInput
      {
          float4 position : SV_POSITION;
          float4 color : COLOR;
      };

      float4 main(PixelInput input) : SV_TARGET
      {
          return input.color;
      }
      Now let's create a quad consisting of 2 triangles and the vertices A, B, C and D:
      // Vertex A.
      vertices[0].position = XMFLOAT3(-1.0f,  1.0f, 0.0f);
      vertices[0].color    = XMFLOAT4( 0.5f,  0.5f, 0.5f, 1.0f);
      // Vertex B.
      vertices[1].position = XMFLOAT3( 1.0f,  1.0f, 0.0f);
      vertices[1].color    = XMFLOAT4( 0.5f,  0.5f, 0.5f, 1.0f);
      // Vertex C.
      vertices[2].position = XMFLOAT3(-1.0f, -1.0f, 0.0f);
      vertices[2].color    = XMFLOAT4( 0.5f,  0.5f, 0.5f, 1.0f);
      // Vertex D.
      vertices[3].position = XMFLOAT3( 1.0f, -1.0f, 0.0f);
      vertices[3].color    = XMFLOAT4( 0.5f,  0.5f, 0.5f, 1.0f);

      // 1st triangle.
      indices[0] = 0; // Vertex A.
      indices[1] = 3; // Vertex D.
      indices[2] = 2; // Vertex C.
      // 2nd triangle.
      indices[3] = 0; // Vertex A.
      indices[4] = 1; // Vertex B.
      indices[5] = 3; // Vertex D.
      This will result in a grey quad as shown in the image below. I've outlined the edges in red to better illustrate the triangles:

      Now imagine that we’d want our quad to have a different color in vertex A:
      // Vertex A.
      vertices[0].position = XMFLOAT3(-1.0f, 1.0f, 0.0f);
      vertices[0].color    = XMFLOAT4( 0.0f, 0.0f, 0.0f, 1.0f);
      That works as expected, since there's now an interpolation between the black color in vertex A and the grey color in vertices B, C and D. Let's revert the previous changes and instead change the color of vertex C:
      // Vertex C.
      vertices[2].position = XMFLOAT3(-1.0f, -1.0f, 0.0f);
      vertices[2].color    = XMFLOAT4( 0.0f, 0.0f, 0.0f, 1.0f);
      As you can see, the interpolation only covers the first triangle, i.e. half of the quad, rather than the entire quad. This is because there's no edge between vertex C and vertex B.
      Which brings us to my question:
      I want the interpolation to go across the entire quad and not only across the triangle. So regardless of which vertex we decide to change the color of, the color interpolation should always go across the entire quad. Is there any efficient way of achieving this without adding more vertices and triangles?
      An illustration of what I'm trying to achieve is shown in the image below:

       
      Background
      This is just a very brief explanation of the problem's background, in case that makes it easier for you to understand the problem's roots and maybe helps you find a better solution.
      I'm trying to texture a terrain mesh in DirectX 11. It's working, but I'm a bit unsatisfied with the result. When changing the terrain texture of a single vertex, the interpolation with the other vertices results in a hexagonal shape instead of a square shape:

      As the red arrows illustrate, I'd like the texture to be interpolated all the way into the corners of the quads.
    • By -Tau-
      Hello, I'm close to releasing my first game on Steam. However, the game keeps failing the review process because it keeps crashing. The problem is that the game doesn't crash on my computer, on my laptop, on our family computer, or on my father's laptop, and I also gave 3 beta keys to people I know and they said the game hasn't crashed.
      Steam reports that the game doesn't crash on startup, but a few frames after a level has been started.
      What could cause something like this? I have no way of debugging it, as the game works fine on every computer I have access to.
       
      Game is written in C++, using DirectX 11 and DXUT framework.
    • By haiiry
      I'm trying to take, basically, a screenshot (once a second, without saving it) of a Direct3D11 application. The code works fine on my PC (Intel CPU, Radeon GPU), but crashes after a few iterations on 2 others (Intel CPU + Intel integrated GPU, Intel CPU + Nvidia GPU).
      void extractBitmap(void* texture)
      {
          if (texture)
          {
              ID3D11Texture2D* d3dtex = (ID3D11Texture2D*)texture;
              ID3D11Texture2D* pNewTexture = NULL;

              D3D11_TEXTURE2D_DESC desc;
              d3dtex->GetDesc(&desc);
              desc.BindFlags = 0;
              desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ | D3D11_CPU_ACCESS_WRITE;
              desc.Usage = D3D11_USAGE_STAGING;
              desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM_SRGB;

              HRESULT hRes = D3D11Device->CreateTexture2D(&desc, NULL, &pNewTexture);
              if (FAILED(hRes))
              {
                  printCon(std::string("CreateTexture2D FAILED:" + format_error(hRes)).c_str());
                  if (hRes == DXGI_ERROR_DEVICE_REMOVED)
                      printCon(std::string("DXGI_ERROR_DEVICE_REMOVED -- " + format_error(D3D11Device->GetDeviceRemovedReason())).c_str());
              }
              else
              {
                  if (pNewTexture)
                  {
                      D3D11DeviceContext->CopyResource(pNewTexture, d3dtex);
                      // Working with the texture
                      pNewTexture->Release();
                  }
              }
          }
          return;
      }

      D3D11SwapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), reinterpret_cast<void**>(&pBackBuffer));
      extractBitmap(pBackBuffer);
      pBackBuffer->Release();
      Crash log:
      CreateTexture2D FAILED:887a0005
      DXGI_ERROR_DEVICE_REMOVED -- 887a0020
      Once I comment out
      D3D11DeviceContext->CopyResource(pNewTexture, d3dtex); 
      the code works fine on all 3 PCs.
    • By Fluffy10
      Hi, I'm new to this forum and was wondering if there are any good places to start learning DirectX 11. I bought Frank D. Luna's book, but it's really outdated and the projects won't even compile. I was excited to start learning from this book because it gives detailed explanations of the functions being used as well as the mathematics. Are there any tutorials / courses / books that are up to date and go over the 3D math and functions in a detailed manner? Or where did anyone here learn DirectX 11? I've followed some tutorials from this website http://www.directxtutorial.com/LessonList.aspx?listid=11, which did a nice job, but it doesn't explain what's happening with the math, so I feel like I'm not actually learning, and it only goes up until color blending. Rasteriks tutorials don't go over the functions much at all, or the math involved, either. I'd really appreciate it if anyone could point me in the right direction; I feel really lost. Thank you.