DX11 Beginning Radiosity

Sk8ash    100
Hi, I'm currently a games development student about to go into my third and final year, and for my project I'm writing a global illumination renderer in DirectX 10 using HLSL. For the past couple of weeks I've been stuck deciding what to use for indirect illumination. I've read countless articles on the radiosity method as well as the different techniques (PRT, instant radiosity, radiosity normal mapping, irradiance volumes, etc.), and I've read every single forum post on here that mentions radiosity, as well as MJP's DX11 radiosity method.

I have no idea which technique would be best to use, and whenever I decide on one I get very confused about where to start. Would anybody care to offer help and suggestions? Cheers

My renderer needs to be able to run at interactive speeds (for games)

MJP    19786
Well, the first thing you'll need to decide is whether you're looking to use precomputed global illumination, or something that works in real time. If you want the latter, there are techniques that work with static geometry but dynamic lighting, as well as techniques that allow both to be fully dynamic. Deciding on this will narrow down the field considerably.

Sk8ash    100
I'm hoping to do static geometry with dynamic lights, and then if I get that done, maybe check out dynamic geometry. I've been leaning more towards the instant radiosity method with VPLs.

glaeken    294
If you happen to go with instant radiosity with VPLs, I've got an example of that using Nvidia's ray-tracing library OptiX and DirectX: http://graphicsrunner.blogspot.com/2011/03/instant-radiosity-using-optix-and.html
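The core of instant radiosity is simple enough to sketch in a few lines of CPU code: trace rays from the primary light and spawn a virtual point light (VPL) at each surface hit, carrying a share of the bounced power. Everything below is illustrative, not from the linked article: the "scene" is just a hypothetical floor plane at y = 0, and the albedo and equal power split are made-up assumptions.

```cpp
#include <cassert>
#include <cmath>
#include <random>
#include <vector>

struct Vec3 { float x, y, z; };
struct VPL  { Vec3 position; Vec3 normal; float intensity; };

// Shoot random rays downward from a point light; at each hit on the floor
// plane y = 0, spawn a VPL whose intensity is an equal share of the light's
// power scaled by the surface albedo.
std::vector<VPL> generateVPLs(Vec3 lightPos, float lightPower,
                              float floorAlbedo, int count, unsigned seed)
{
    std::mt19937 rng(seed);
    std::uniform_real_distribution<float> u(-1.0f, 1.0f);
    std::vector<VPL> vpls;
    while (static_cast<int>(vpls.size()) < count)
    {
        // Random direction that always points downward (d.y < 0).
        Vec3 d = { u(rng), -std::fabs(u(rng)) - 0.01f, u(rng) };
        float len = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
        d = { d.x / len, d.y / len, d.z / len };
        // Intersect the ray lightPos + t*d with the plane y = 0.
        float t = -lightPos.y / d.y;
        Vec3 hit = { lightPos.x + t * d.x, 0.0f, lightPos.z + t * d.z };
        vpls.push_back({ hit, { 0.0f, 1.0f, 0.0f },
                         lightPower * floorAlbedo / count });
    }
    return vpls;
}
```

At runtime each VPL is then shaded like an ordinary point light, which is why the technique pairs so naturally with deferred shading.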

Sk8ash    100
Cheers. Obviously this technique works well with deferred shading, but I'm not looking to build a deferred renderer, so does anyone recommend a better technique to use with forward rendering?

MJP    19786
You could do some research into what Geomerics does for Enlighten. As far as I know, they precompute form factors for static geometry at regularly-sampled points on a mesh surface (basically like a lightmap), then at runtime they compute the lighting at those points and solve the system of equations using Gauss-Seidel (or something similar). They do that on the CPU, but it's possible to do it on the GPU as well.

Another interesting approach is approximating surfaces as discs, which was pioneered by Michael Bunnell. There's an older GPU Gems article about it, and there are some descriptions of his updated algorithm from his SIGGRAPH talk last year. It's intended for use with dynamic geometry, but I did some experiments with precomputing visibility for static geometry and the results were promising.

There's also the paper from SIGGRAPH this year about cone-tracing through a voxelized scene. That definitely looked pretty neat, and with static geometry you could precompute the voxelization.
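The disc approach boils down to a cheap analytic form factor between surface elements treated as discs. Here's a minimal sketch using the common point-to-disc approximation F ≈ A·cosθE·cosθR / (πd² + A); Bunnell's GPU Gems chapter builds its element-to-element transfer on a closely related expression, so treat this as illustrative rather than his exact formula.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Approximate form factor from an emitter disc (center pE, unit normal nE,
// area 'area') to a receiver point pR with unit normal nR.
float discFormFactor(Vec3 pE, Vec3 nE, float area, Vec3 pR, Vec3 nR)
{
    Vec3 v = { pR.x - pE.x, pR.y - pE.y, pR.z - pE.z };
    float d2 = dot(v, v);
    float d  = std::sqrt(d2);
    Vec3 dir = { v.x / d, v.y / d, v.z / d };     // emitter -> receiver
    float cosE = std::max(0.0f,  dot(nE, dir));   // emitter faces the receiver
    float cosR = std::max(0.0f, -dot(nR, dir));   // receiver faces the emitter
    return area * cosE * cosR / (3.14159265f * d2 + area);
}
```

The appeal is that this avoids any integration over the disc: one evaluation per element pair, which is what makes the hierarchical gathering in the GPU Gems article feasible on the GPU.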

Locater16    100
My personal favorite is the dynamic lightmap generation from Lionhead (for a cancelled game called Milo) and Battlefield 3. Lionhead's approach used spherical harmonics, and their presentation was at GDC earlier this year. Geomerics and DICE use a not-too-dissimilar approach (at least in some respects) for Battlefield 3. It gets you dynamic objects lit by partially dynamic geometry (you can remove or add the light-bouncing geometry, but a bunch of extra stuff has to be recalculated if you do).

Battlefield 3 has presentations... everywhere. Just go to DICE's site and you'll find stuff.
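The spherical-harmonics part of those lightmap approaches amounts to projecting incoming radiance at each sample point onto a few SH coefficients and then evaluating them in any direction at runtime. A minimal sketch with a 4-coefficient (band 0 and band 1) basis; the 6-direction cubature here is a deliberately crude stand-in for a real sampling scheme, not what Lionhead or DICE actually ship:

```cpp
#include <array>
#include <cmath>

struct Vec3 { float x, y, z; };

// First four real SH basis functions evaluated in unit direction d.
std::array<float, 4> shBasis(Vec3 d)
{
    return { 0.282095f,               // Y_0^0
             0.488603f * d.y,         // Y_1^-1
             0.488603f * d.z,         // Y_1^0
             0.488603f * d.x };       // Y_1^1
}

// Project a directional radiance function onto 4 SH coefficients using a
// crude 6-direction cubature (the axis directions), each weighted 4*pi/6.
template <typename F>
std::array<float, 4> shProject(F radiance)
{
    const Vec3 dirs[6] = { {1,0,0}, {-1,0,0}, {0,1,0},
                           {0,-1,0}, {0,0,1}, {0,0,-1} };
    const float w = 4.0f * 3.14159265f / 6.0f;
    std::array<float, 4> c = { 0.0f, 0.0f, 0.0f, 0.0f };
    for (Vec3 d : dirs)
    {
        auto y = shBasis(d);
        float L = radiance(d);
        for (int i = 0; i < 4; ++i) c[i] += w * L * y[i];
    }
    return c;
}

// Reconstruct radiance in direction d from the coefficients.
float shEval(const std::array<float, 4>& c, Vec3 d)
{
    auto y = shBasis(d);
    return c[0]*y[0] + c[1]*y[1] + c[2]*y[2] + c[3]*y[3];
}
```

The runtime cost per lightmap texel is just a dot product of the stored coefficients with the basis evaluated at the shading normal, which is why this representation is so popular for baked-but-relightable GI.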

Sk8ash    100
[quote name='MJP' timestamp='1314040725' post='4852460']
You could do some research into what Geomerics does for Enlighten. As far as I know, they precompute form factors for static geometry at regularly-sampled points on a mesh surface (basically like a lightmap), then at runtime they compute the lighting at those points and solve the system of equations using Gauss-Seidel (or something similar). They do that on the CPU, but it's possible to do it on the GPU as well.

Another interesting approach is approximating surfaces as discs, which was pioneered by Michael Bunnell. There's an older GPU Gems article about it, and there are some descriptions of his updated algorithm from his SIGGRAPH talk last year. It's intended for use with dynamic geometry, but I did some experiments with precomputing visibility for static geometry and the results were promising.

There's also the paper from SIGGRAPH this year about cone-tracing through a voxelized scene. That definitely looked pretty neat, and with static geometry you could precompute the voxelization.
[/quote]

I took quite a detailed look into the voxel cone tracing method and was extremely impressed and interested by it, but I'm afraid it's much too complicated for me. I don't quite understand the Geomerics approach; the only info I could find on it was their talk with Crytek, and it's quite brief. I also don't know what the system of equations is, or Gauss-Seidel. I took a look at the Bunnell approach and understood it, but couldn't find much documentation on it beyond the GPU Gems article.

It would probably help if I had a really good book on GI. I have one small book, but it's pretty rubbish. Do you know of any really good books?

Cheers

MJP    19786
Well, if you haven't already, you definitely want to download the [url="http://people.cs.kuleuven.be/%7Ephilip.dutre/GI/TotalCompendium.pdf"]GI Total Compendium[/url]. The basic idea behind using a system of equations is that for each sample point, you can compute the lighting at that point as the sum of the lighting at all other points, each multiplied by the corresponding form factor. Together these form a large system of equations that looks like this:

A = B * FFab + C * FFac + D * FFad ...
B = A * FFba + C * FFbc + D * FFbd ...
C = A * FFca + B * FFcb + D * FFcd ...
etc.

That forms a matrix, and solving that system of equations gives you the lighting at each sample point (you probably learned how to do that in algebra class). Gauss-Seidel is just an iterative method for solving such a system. Altogether the matrix can be very large for a complex scene, and Geomerics deals with that by breaking the scene into different "zones", where a sample point in one zone is only assumed to be affected by sample points within the same zone. They can also compress the matrices, because they end up being sparse (lots of zeros where one sample point has no visibility to another).
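A Gauss-Seidel solve for this kind of system is short enough to show in full. The sketch below sweeps over the radiosity equations B_i = E_i + ρ_i · Σ_j FF_ij · B_j, updating each patch in place so later patches in the same sweep already see the newest values; the 3-patch scene, reflectances, and form factors are made up purely for illustration.

```cpp
#include <array>
#include <cmath>

constexpr int N = 3; // toy scene: three patches

// Iteratively solve B_i = E_i + rho_i * sum_j(FF_ij * B_j) with Gauss-Seidel.
std::array<double, N> solveRadiosity(const std::array<double, N>& emission,
                                     const std::array<double, N>& reflectance,
                                     const double (&formFactor)[N][N],
                                     int sweeps)
{
    std::array<double, N> B = emission; // initial guess: emitted light only
    for (int s = 0; s < sweeps; ++s)
        for (int i = 0; i < N; ++i)
        {
            double gathered = 0.0;
            for (int j = 0; j < N; ++j)
                if (j != i)
                    gathered += formFactor[i][j] * B[j]; // uses freshest B[j]
            B[i] = emission[i] + reflectance[i] * gathered;
        }
    return B;
}
```

Because each updated B[i] is used immediately within the same sweep, Gauss-Seidel typically converges in noticeably fewer iterations than Jacobi for these diagonally-dominant radiosity matrices, and each extra sweep effectively adds another bounce of light.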

Bunnell gave a talk about his GI tech at SIGGRAPH last year, and you can get the PDF here: [url="http://cgg.mff.cuni.cz/%7Ejaroslav/gicourse2010/index.htm"]http://cgg.mff.cuni.cz/~jaroslav/gicourse2010/index.htm[/url]
