Sk8ash

DX11 Beginning Radiosity


Hi, I'm a games development student about to go into my third and final year, and for my project I'm writing a global illumination renderer in DirectX 10 using HLSL. For the past couple of weeks I've been stuck deciding what to use for indirect illumination. I've read countless articles on the radiosity method as well as the different techniques (PRT, instant radiosity, radiosity normal mapping, irradiance volumes, etc.), and I've read every forum post on here that mentions radiosity, plus MJP's DX11 radiosity method.

I have no idea which technique would be best to use, and whenever I settle on one I get very confused about where to start. Would anybody care to offer help and suggestions? Cheers.

My renderer needs to be able to run at interactive speeds (it's for games).

Well, the first thing you'll need to decide is whether you're looking to use precomputed global illumination or something that works in real time. If you want the latter, there are techniques that work with static geometry but dynamic lighting, as well as techniques that allow both to be fully dynamic. Deciding on this will narrow down the field considerably.

I'm hoping to do static geometry with dynamic lights, and then if I get that done maybe look into dynamic geometry. I've been leaning towards the instant radiosity method with VPLs.

If you happen to go with instant radiosity with VPLs, I've got an example of that using NVIDIA's ray-tracing library OptiX and DirectX: http://graphicsrunner.blogspot.com/2011/03/instant-radiosity-using-optix-and.html
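
For a feel of the core idea, here's a minimal CPU-side sketch, independent of the linked sample: rays are traced from the primary light, and each first-bounce hit spawns a virtual point light (VPL) that the renderer later treats as an ordinary point light. The Ray/Hit types and the SampleRayFromLight/TraceScene functions are hypothetical placeholders for whatever ray tracer you use.

[code]
// Sketch of VPL generation for instant radiosity (illustrative names only).
#include <vector>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, dir; };
struct Hit  { Vec3 position, normal, albedo; };

struct PointLight { Vec3 position, power; };
struct VPL        { Vec3 position, normal, flux; };

// Placeholders supplied by the application's ray tracer:
Ray  SampleRayFromLight(const PointLight& light);  // random direction from the light
bool TraceScene(const Ray& ray, Hit& outHit);      // nearest-hit query

std::vector<VPL> GenerateVPLs(const PointLight& light, int count)
{
    std::vector<VPL> vpls;
    float share = 1.0f / count;  // each VPL carries an equal share of the light's power
    for (int i = 0; i < count; ++i)
    {
        Hit hit;
        if (TraceScene(SampleRayFromLight(light), hit))
        {
            // The share is tinted by the albedo of the surface it bounced off.
            VPL v;
            v.position = hit.position;
            v.normal   = hit.normal;
            v.flux     = { light.power.x * hit.albedo.x * share,
                           light.power.y * hit.albedo.y * share,
                           light.power.z * hit.albedo.z * share };
            vpls.push_back(v);
        }
    }
    return vpls;
}
[/code]

At shading time each VPL is accumulated like any other point light, usually with the geometric term clamped to keep the characteristic VPL hot spots under control.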

Cheers. Obviously this technique works well with deferred shading, but I'm not looking to build a deferred renderer, so does anyone recommend a better technique to use with forward rendering?

You could do some research into what Geomerics does for Enlighten. As far as I know they precompute form factors for static geometry at regularly-sampled points on a mesh surface (basically like a lightmap), then at runtime they compute the lighting at those points and solve the system of equations using Gauss-Seidel (or something similar). They do that on the CPU, but it's possible to do it on the GPU as well.

Another interesting approach is approximating surfaces as discs, which was pioneered by Michael Bunnell. There's an older GPU Gems article about it, and there are some descriptions of his updated algorithm from his SIGGRAPH talk last year. It's intended for use with dynamic geometry, but I did some experiments with precomputing visibility for static geometry and the results were promising.
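
To give a rough flavor of the disc formulation (a generic sketch, not Bunnell's exact code): each surface element becomes a small oriented disc, and the transfer between two discs is estimated with a standard point-to-disc form factor approximation, A * cos(thetaE) * cos(thetaR) / (pi * r^2 + A), which stays finite as the distance goes to zero.

[code]
// Point-to-disc form factor between two oriented discs (illustrative types).
#include <cmath>
#include <algorithm>

struct Disc { float pos[3]; float normal[3]; float area; };

float FormFactor(const Disc& emitter, const Disc& receiver)
{
    const float PI = 3.14159265f;
    float v[3] = { receiver.pos[0] - emitter.pos[0],
                   receiver.pos[1] - emitter.pos[1],
                   receiver.pos[2] - emitter.pos[2] };
    float d2 = v[0]*v[0] + v[1]*v[1] + v[2]*v[2];
    float d  = std::sqrt(d2);
    if (d <= 0.0f) return 0.0f;
    // Cosines between the connecting direction and each normal, clamped so
    // back-facing discs exchange no energy.
    float cosE = std::max(0.0f,  (emitter.normal[0]*v[0] + emitter.normal[1]*v[1] + emitter.normal[2]*v[2]) / d);
    float cosR = std::max(0.0f, -(receiver.normal[0]*v[0] + receiver.normal[1]*v[1] + receiver.normal[2]*v[2]) / d);
    return emitter.area * cosE * cosR / (PI * d2 + emitter.area);
}
[/code]

Since the discs move with the geometry, the same transfer evaluation works for dynamic scenes; for static scenes you can bake which disc pairs can see each other, as mentioned above.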

There's also the paper from SIGGRAPH this year about cone-tracing through a voxelized scene. That definitely looked pretty neat, and with static geo you could pre-compute the voxelization.

My personal favorite is the dynamic lightmap generation from Lionhead (for Milo, a cancelled game) and Battlefield 3. Lionhead's version used spherical harmonics, and their presentation was given at GDC earlier this year. Geomerics and DICE use a not-too-dissimilar approach (at least in some respects) for Battlefield 3. It gets you dynamic objects lit by "partially" dynamic geometry (you can remove or add the light-bouncing geometry, but a bunch of extra stuff has to be recalculated if you do).
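
As a point of reference for what "used spherical harmonics" means at the data level, here's a minimal sketch (names are illustrative) of projecting a single directional light sample into the first two SH bands, i.e. the four coefficients such a lightmap might store per texel:

[code]
// Project one directional light sample into 4 SH coefficients (bands 0 and 1).
// The constants are the standard real spherical harmonic basis values.
struct SH4 { float c[4]; };

SH4 ProjectSH4(float dirX, float dirY, float dirZ, float intensity)
{
    SH4 sh;
    sh.c[0] = 0.282095f * intensity;         // band 0 (constant)
    sh.c[1] = 0.488603f * dirY * intensity;  // band 1, y
    sh.c[2] = 0.488603f * dirZ * intensity;  // band 1, z
    sh.c[3] = 0.488603f * dirX * intensity;  // band 1, x
    return sh;
}
[/code]

Accumulating such samples over all incoming bounce directions gives a compact, directional representation of the indirect light at each texel, which is what lets normal-mapped and dynamic receivers be shaded from it.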

Battlefield 3 has presentations... everywhere. Just go to DICE's site and you'll find stuff.

[quote name='MJP' timestamp='1314040725' post='4852460']
You could do some research into what Geomerics does for Enlighten. As far as I know they precompute form factors for static geometry at regularly-sampled points on a mesh surface (basically like a lightmap), then at runtime they compute the lighting at those points and solve the system of equations using Gauss-Seidel (or something similar). They do that on the CPU, but it's possible to do it on the GPU as well.

Another interesting approach is approximating surfaces as discs, which was pioneered by Michael Bunnell. There's an older GPU Gems article about it, and there are some descriptions of his updated algorithm from his SIGGRAPH talk last year. It's intended for use with dynamic geometry, but I did some experiments with precomputing visibility for static geometry and the results were promising.

There's also the paper from SIGGRAPH this year about cone-tracing through a voxelized scene. That definitely looked pretty neat, and with static geo you could pre-compute the voxelization.
[/quote]

I took quite a detailed look into the voxel cone tracing method and was extremely impressed and interested by it, but I'm afraid it's much too complicated for me. I don't quite understand the Geomerics approach; the only info I could find on it was their talk with Crytek, and it's quite brief. I also don't know what the system of equations is, or Gauss-Seidel. I took a look at the Bunnell approach and understood it, but couldn't find much documentation on it beyond the GPU Gems article.

It would probably help if I had a really good book on GI. I have one small book but it's pretty rubbish. Do you know of any really good books?

Cheers

Well if you haven't already, you definitely want to download the [url="http://people.cs.kuleuven.be/%7Ephilip.dutre/GI/TotalCompendium.pdf"]GI Total Compendium[/url]. The basic idea behind using a system of equations is that for each sample point, you can compute the lighting at that point as the sum of the lighting at all other points, each multiplied by the corresponding form factor. Together these form a large system of equations that looks like this:

A = B * FFab + C * FFac + D * FFad ...
B = A * FFba + C * FFbc + D * FFbd ...
C = A * FFca + B * FFcb + D * FFcd ...
etc.

That forms a matrix, which you can use to solve the system of equations and get the lighting at each sample point (you probably learned how to do that in algebra class). Gauss-Seidel is just one method for solving such a system. For a complex scene that matrix can be very large, and Geomerics deals with that by breaking the scene into "zones", where a sample point in one zone is only assumed to be affected by sample points within the same zone. They can also compress the matrices, because the matrices end up being sparse (lots of zeros wherever one sample point can't see another).
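
As a concrete illustration of what a Gauss-Seidel radiosity solve looks like, here's a minimal sketch with illustrative names, with the reflectance term of the full radiosity equation folded in:

[code]
// Solve B_i = E_i + rho_i * sum_j(FF_ij * B_j) with Gauss-Seidel sweeps.
#include <vector>

struct RadiositySolver
{
    int n;                           // number of sample points
    std::vector<float> formFactor;   // n*n precomputed form factors, row-major
    std::vector<float> reflectance;  // diffuse albedo per sample point
    std::vector<float> emission;     // direct lighting, recomputed when lights move
    std::vector<float> radiosity;    // the unknowns being solved for

    // One sweep: each point is updated in place using the latest values of
    // the others, which is what distinguishes Gauss-Seidel from Jacobi and
    // makes it converge faster. A few sweeps usually get visually close.
    void Iterate()
    {
        for (int i = 0; i < n; ++i)
        {
            float gathered = 0.0f;
            for (int j = 0; j < n; ++j)
                if (j != i)
                    gathered += formFactor[i * n + j] * radiosity[j];
            radiosity[i] = emission[i] + reflectance[i] * gathered;
        }
    }
};
[/code]

The zone and sparsity tricks mentioned above amount to skipping the inner-loop entries whose form factor is zero, which for most scenes is the vast majority of them.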

Bunnell gave a talk about his GI tech last year at SIGGRAPH, and you can get the PDF here: [url="http://cgg.mff.cuni.cz/%7Ejaroslav/gicourse2010/index.htm"]http://cgg.mff.cuni.cz/~jaroslav/gicourse2010/index.htm[/url]

