
DX11 deferred rendering and tessellation

Recommended Posts

rouncED    103
So, I plan on using deferred rendering, but I also want DX11 tessellation.

The maps I need are depth maps, normal maps, diffuse maps, and coordinate maps.

I figure I could do 4 passes over the actual geometry to get the front faces and back faces of a world-space coordinate render and of a diffuse render, and then I could build the normal and depth maps from the world-space coordinate render in image space.

I would then have all the maps I need... the only problem is, it takes 4 whole renders to get there, which means tessellating the whole scene 4 times...

Is there some way to do the tessellation once and get the colour and coordinate maps at the same time?

Even maybe front faces and back faces?

Hodgman    51326
Usually, depth (position), normal and diffuse are all rendered at the same time using MRT (multiple render-targets). This gives you a back-buffer with many layers, so you can write diffuse to one layer, normal to another, etc...
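As a rough HLSL sketch of this (the struct, resource, and semantic names here are illustrative, not from the thread), a D3D11 pixel shader can fill several G-buffer layers in one pass:

```hlsl
// Illustrative input from the vertex (or domain) shader.
struct PSInput
{
    float4 posH       : SV_Position;
    float3 positionWS : TEXCOORD0;
    float3 normalWS   : TEXCOORD1;
    float2 uv         : TEXCOORD2;
};

// One output per render target bound with OMSetRenderTargets.
struct PSOutput
{
    float4 diffuse  : SV_Target0; // diffuse/albedo map
    float4 normal   : SV_Target1; // world-space normal map
    float4 position : SV_Target2; // world-space coordinate map
};

Texture2D    gDiffuseMap : register(t0);
SamplerState gSampler    : register(s0);

PSOutput PSMain(PSInput input)
{
    PSOutput o;
    o.diffuse  = gDiffuseMap.Sample(gSampler, input.uv);
    o.normal   = float4(normalize(input.normalWS), 0.0f);
    o.position = float4(input.positionWS, 1.0f);
    return o;
}
```

The geometry (including any tessellation) runs once; only the pixel shader's output stage fans out to the multiple targets.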

Why do you need to render out both the front and the back faces?

DarkChris    110
If you're doing it right, you only need the depth and the normals. And both of these can be rendered in a single pass by using multiple render targets...

rouncED    103
Ah yes! So there is a way!
Thanks a lot for telling me this... so you set each render target to its own pixel shader, and you tessellate "once" and you get multiple effects out of the single render.

Hodgman, I need back faces to compute my ambient occlusion algorithm (which is more realistic than SSAO, but requires more work) and subsurface scattering; they both need back faces.

I have another question though, and I'm still pretty far from understanding this properly...
Can even the vertex shader be different between render targets while still tessellating only once?

Because I'd still like to twist the idea to somehow get back faces out of it, and also, if you could do that, you could get a shadow map render out of it as well, which is also very important.

DarkChris    110
You don't need backfaces for subsurface scattering... You need translucent shadow maps... And if you swear you won't tell anyone about it, I could give you my new ambient occlusion technique that improves the efficiency of both SSAO and baked AO in a way that makes them both superior to PRT without any noticeable additional cost (fps stays the same).
Also take a look at the Light Pre-Pass technique... It requires way less RAM than deferred rendering... And you won't even need a buffer for albedo (textures)...

The multiple render targets are only affected by the pixel shader... You just need to calculate multiple colors in the pixel shader and define a PS_Out struct with multiple color outputs (COLOR0, COLOR1, ... - or SV_Target0, SV_Target1, ... in shader model 5).

[Edited by - DarkChris on October 16, 2010 7:05:51 AM]

rouncED    103
Oh that's a bummer, only different pixel shader methods are allowed with MRT...

Even if you used translucent shadow maps, you'd still have to render more than once.

I'm interested, I'd like to check out your AO technique, got any screenshots or movies?

If it's less quality than this I'm sort of not interested though, because I want quality over speed.


Since I'd like back faces I'm sort of stuck, because even front-facing polys would tessellate to create back-facing triangles, so I don't know what to do - is back face culling done before or after tessellation?

DarkChris    110
Quote:
Original post by rouncED
Oh that's a bummer, only different pixel shader methods are allowed with MRT...


Since when? It works with shader models 1, 2 and 3. I can't think of a reason why they would've changed it for shader model 5?!

Quote:
Original post by rouncED
Even if you used translucent shadow maps, you'd still have to render more than once.


But using the backfaces is not how it's meant to work. You have to use a ray that goes from your surface point through the object in the direction of the light, not in the view direction.
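One common way to measure that light-direction distance (a hedged sketch of the general idea, not necessarily DarkChris's exact method; all names here are made up) is to render the scene's depth from the light's point of view, then while shading subtract the stored entry depth from the shaded point's own light-space depth:

```hlsl
// Depth of the nearest surface as seen from the light.
Texture2D    gLightDepthMap : register(t1);
SamplerState gLinearClamp   : register(s1);

// Estimate the distance the light travels inside the object
// before reaching this surface point (for subsurface scattering).
float Thickness(float3 positionWS, float4x4 lightViewProj)
{
    float4 posL = mul(float4(positionWS, 1.0f), lightViewProj);
    posL.xyz /= posL.w;
    float2 uv = posL.xy * float2(0.5f, -0.5f) + 0.5f;
    float entryDepth = gLightDepthMap.Sample(gLinearClamp, uv).r;
    // Assumes depth is stored linearly in light space; the difference
    // approximates the path length through the object along the light ray.
    return max(posL.z - entryDepth, 0.0f);
}
```

This stays entirely in the light's direction, as described above, rather than relying on view-space back faces.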

Quote:
Original post by rouncED
I'm interested, I'd like to check out your AO technique, got any screenshots or movies?


The technique is not meant to be used instead of yours. It's meant to improve yours even more. It can easily be implemented in any engine and provides more realistic lighting / shadowing rather than better ambient occlusion. I'm currently trying to improve it even more by adding a global illumination effect, but that drastically decreases the fps (compared to the almost-zero fps cost of the standard implementation).

Here are a few pics:



And a vid:
Youtube - Unlimited Engine - Shadow Casting Ambient Occlusion

I don't know if I want to release the implementation yet, since I don't know if and how I can take any advantage from it. I definitely will release it at some point in the future.

rouncED    103
I see it goes really fast, but the ambient occlusion isn't as good as mine; yours looks harder than raytraced, my image is a lot softer. But probably yours is more useful, because who wants to play a game at 12fps at 640x480... I know... I know...

Hodgman    51326
Quote:
Original post by rouncED
so you set each render target to its own pixel shader, and you tessellate "once" and you get multiple effects out of the single render.
You can't set a render target to a pixel shader...
When you render geometry, it uses a pixel shader.
Normally, the pixel shader outputs a single (float4) value, which is stored in the render-target.
With MRT, your pixel shader can output more than one value (e.g. 4 x float4's), which are stored in the many layers of the render-target.

Matias Goldberg    9580
You can always use Stream Out to reuse the tessellated vertices with different pixel shaders.
Although that may defeat the point of using the tessellator, because its main purpose is to increase geometry detail without the memory bandwidth and cache inefficiency problems caused by large vertex buffers.
On the bright side, using stream out you're only running the vertex shader once. The net benefit can only be found through experimentation.
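A minimal sketch of the shader side of this (illustrative names; the actual stream-output declaration is supplied host-side at geometry shader creation, e.g. via CreateGeometryShaderWithStreamOutput): a pass-through geometry shader after the domain shader captures the tessellated triangles so later passes can re-draw them without re-tessellating:

```hlsl
// Output of the domain shader, i.e. the post-tessellation vertices.
struct DSOutput
{
    float4 posH       : SV_Position;
    float3 positionWS : TEXCOORD0;
};

// Pass-through geometry shader used only so the runtime can stream
// the tessellated triangles out to a bound stream-output buffer.
[maxvertexcount(3)]
void GSStreamOut(triangle DSOutput input[3],
                 inout TriangleStream<DSOutput> stream)
{
    // Emit the triangle unchanged; the stream-output stage writes it
    // to the SO buffer for a later cheap re-draw (no HS/DS needed).
    for (int i = 0; i < 3; ++i)
        stream.Append(input[i]);
    stream.RestartStrip();
}
```

The captured buffer can then be drawn again with a trivial vertex shader and whatever pixel shader each pass needs.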

Cheers and good luck
Dark Sylinc

Share on other sites
DarkChris    110
Quote:
Original post by rouncED
I see it goes really fast, but the ambient occlusion isn't as good as mine; yours looks harder than raytraced, my image is a lot softer. But probably yours is more useful, because who wants to play a game at 12fps at 640x480... I know... I know...


It's nothing that would replace your ambient occlusion. But it improves the way the ambient occlusion gets applied to the lighting. Mine uses a pretty shitty baked ambient occlusion map (Blender is so bad) and absolutely no SSAO or shadow mapping. The standard way would be to multiply the ambient occlusion with the lighting, which is referred to in my pictures as "standard ambient occlusion". This causes some areas to be impossible to light even if a light shines directly onto them. And my SCAO not only solves this, it also adds dynamic soft shadows for an infinite number of lights without the need for shadow maps. It in no way tries to outshine your AO technique, it would just make it more efficient in combination with real lights.

macnihilist    377
On a side note:
Quote:

The standard way would be to multiply the ambient occlusion with the lighting


In general, only the _ambient_ light is attenuated by _ambient_ occlusion (although I know there can be reasons to do it otherwise). What you are doing sounds like _directional_ occlusion for direct lighting. Here is a relatively recent screen-space approach. Maybe it can help you improve your method.

DarkChris    110
Quote:
Original post by macnihilist
On a side note:
Quote:

The standard way would be to multiply the ambient occlusion with the lighting


In general, only the _ambient_ light is attenuated by _ambient_ occlusion (although I know there can be reasons to do it otherwise). What you are doing sounds like _directional_ occlusion for direct lighting. Here is a relatively recent screen-space approach. Maybe it can help you improve your method.


I know that ambient occlusion should only be applied to the ambient term. But that just doesn't make any sense from a physical standpoint. It gets even worse when the graphics rely heavily on directional lights and mostly have no ambient term. This is what bothered me, and so I came up with my technique that is able to apply it to all lights without having these problems. By the way, thanks for the paper. Since I'm currently looking into improving it by adding some kind of global / self illumination, the paper comes in handy :)

