DX11 deferred rendering and tessellation


So, I plan on using deferred rendering, but I also want DX11 tessellation.

The maps I need are depth maps, normal maps, diffuse maps, and coordinate maps.

I figure I could do four passes of the actual geometry to get the front faces and back faces of a world-space coordinate render and a diffuse render; then I could build the normal and depth maps from the world-space coordinate render in image space.

I would then have all the maps I needed... the only problem is that it took four whole renders to get there, which means I had to tessellate the whole scene four times.

Is there some way to do the tessellation once and get the colour and coordinate maps at the same time?

Maybe even front faces and back faces?

Usually, depth (position), normals, and diffuse are all rendered at the same time using MRT (multiple render targets). This gives you a back-buffer with several layers, so you can write diffuse to one layer, normals to another, etc...
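As a minimal sketch (assuming a typical D3D11 G-buffer layout; the structs, registers, and names here are illustrative, not from this thread), the pixel shader simply declares one output per render target:

struct PSInput
{
    float4 pos      : SV_Position;
    float3 normalWS : NORMAL;      // world-space normal from the VS/DS
    float3 posWS    : TEXCOORD0;   // world-space position
    float2 uv       : TEXCOORD1;
};

struct PSOutput
{
    float4 diffuse  : SV_Target0;  // albedo layer
    float4 normal   : SV_Target1;  // normal layer
    float4 position : SV_Target2;  // world-space coordinate layer
};

Texture2D    DiffuseMap : register(t0);
SamplerState LinearSamp : register(s0);

// One geometry pass fills every bound render target at once.
PSOutput PSGBuffer(PSInput input)
{
    PSOutput o;
    o.diffuse  = DiffuseMap.Sample(LinearSamp, input.uv);
    o.normal   = float4(normalize(input.normalWS) * 0.5f + 0.5f, 0); // packed into [0,1]
    o.position = float4(input.posWS, 1);
    return o;
}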

Why do you need to render out both the front and the back faces?

If you're doing it right, you only need the depth and the normals, and both of those can be rendered in a single pass by using multiple render targets...

Ah yes! So there is a way!
Thanks a lot for telling me this... so you set each render target to its own pixel shader, and you tessellate "once" and you get multiple effects out of the single render.

Hodgman, I need back faces to compute my ambient occlusion algorithm (which is more realistic than SSAO, but requires more work) and subsurface scattering; they both need back faces.

I have another question though, and I'm still pretty far from understanding this properly...
Can even the vertex shader be different between render targets while still only tessellating once?

Because I'd still like to twist the idea to somehow get back faces out of it, and if you could do that, you could get a shadow map render out of it as well, which is also very important.

You don't need back faces for subsurface scattering... you need translucent shadow maps... And if you swear that you won't tell anyone about it, I could give you my new ambient occlusion technique, which improves the efficiency of both SSAO and baked AO to the point where both are superior to PRT, without any noticeable additional cost (fps stays the same).
Also take a look at the Light Pre-Pass technique... it requires far less RAM than deferred rendering... and you won't even need a buffer for albedo (textures)...

The multiple render targets are only affected by the pixel shader... You just need to calculate multiple colors in the pixel shader and define a PS_Out struct with multiple color outputs (COLOR0, COLOR1, ...).
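A sketch of what that struct looks like (the member names are illustrative; in D3D10/11 HLSL the COLORn semantics become SV_Target0...n):

struct PS_Out
{
    float4 Diffuse : COLOR0;  // written to render target 0
    float4 Normal  : COLOR1;  // written to render target 1
    float4 Depth   : COLOR2;  // written to render target 2
};

The pixel shader then returns a PS_Out instead of a single float4, and each member lands in its own render target.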


Oh, that's a bummer; only different pixel shader methods are allowed with MRT...

Even if you used translucent shadow maps, you'd still have to render more than once.

I'm interested; I'd like to check out your AO technique. Got any screenshots or movies?

If it's lower quality than this I'm sort of not interested though, because I want quality over speed.


Since I'd like back faces I'm sort of stuck, because even front-facing polys would tessellate to create back-facing triangles, so I don't know what to do. Is back-face culling done before or after tessellation?

@DarkChris: What is this technique you speak of and why the secrecy? ;)

Quote:
Original post by rouncED
Oh, that's a bummer; only different pixel shader methods are allowed with MRT...


Since when? It works with shader models 1, 2, and 3. I can't think of a reason why they would have changed it for shader model 5?!

Quote:
Original post by rouncED
Even if you used translucent shadow maps, you'd still have to render more than once.


But using the back faces is not how it's meant to work. You have to use a ray that goes from your surface point through the object in the direction of the light, not in the view direction.
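A rough HLSL sketch of that idea (all names and the depth encoding here are assumptions for illustration, not code from this thread): render depth from the light's point of view, then estimate how far light travels inside the object as the gap between the light-facing surface and the shaded point:

Texture2D    LightDepthMap : register(t0);  // depth rendered from the light's view
SamplerState PointSamp     : register(s0);

// Estimate object thickness along the light ray for one surface point.
float EstimateThickness(float3 posWS, float4x4 lightViewProj)
{
    // Project the point into the light's clip space.
    float4 posLS = mul(float4(posWS, 1), lightViewProj);
    posLS.xyz /= posLS.w;

    // Map clip-space xy to texture coordinates.
    float2 uv = posLS.xy * float2(0.5f, -0.5f) + 0.5f;

    // Depth of the first surface the light hits, and depth of this point.
    float entryDepth = LightDepthMap.SampleLevel(PointSamp, uv, 0).r;
    float exitDepth  = posLS.z;

    // The difference approximates the distance travelled inside the object
    // (storing linear depth in the map makes this more accurate).
    return max(exitDepth - entryDepth, 0.0f);
}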

Quote:
Original post by rouncED
I'm interested; I'd like to check out your AO technique. Got any screenshots or movies?


The technique is not meant to be used instead of yours; it's meant to improve yours even more. It can easily be implemented in any engine, and it provides more realistic lighting / shadowing rather than better ambient occlusion. I'm currently trying to improve it further by adding a global illumination effect, but that drastically decreases the fps (compared to the almost-zero fps cost of the standard implementation).

Here are a few pics:



And a vid:
YouTube - Unlimited Engine - Shadow Casting Ambient Occlusion

I don't know if I want to release the implementation yet, since I don't know whether and how I can take advantage of it. I will definitely release it at some point in the future.

I see it goes really fast, but the ambient occlusion isn't as good as mine; yours looks harder than raytraced, while my image is a lot softer. But yours is probably more useful, because who wants to play a game at 12fps at 640x480... I know... I know...

Quote:
Original post by rouncED
so you set each render target to its own pixel shader, and you tessellate "once" and you get multiple effects out of the single render.
You can't set a render target to a pixel shader...
When you render geometry, it uses a pixel shader.
Normally, the pixel shader outputs a single (float4) value, which is stored in the render target.
With MRT, your pixel shader can output more than one value (e.g. 4 x float4s), which are stored in the many layers of the render target.

You can always use Stream Out to reuse the tessellated vertices with different pixel shaders.
Although that may defeat the point of using the tessellator, because its main purpose is to increase geometry detail without the memory-bandwidth and cache-inefficiency problems caused by large vertex buffers.
On the bright side, using stream out you're only running the VS (and the tessellation stages) once. The net benefit can only be found through experimentation.
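As a rough sketch of the HLSL side (the output struct is an assumed layout, not from this thread; the stream-output declaration itself is supplied on the CPU side via ID3D11Device::CreateGeometryShaderWithStreamOutput, and the capture buffer is bound with SOSetTargets), a pass-through geometry shader that emits the tessellated triangles for capture could look like:

struct SOVertex
{
    float4 pos    : SV_Position;
    float3 normal : NORMAL;
    float2 uv     : TEXCOORD0;
};

// Pass-through GS: emits the post-tessellation triangles unchanged so they
// can be captured to a buffer and re-drawn with different pixel shaders.
[maxvertexcount(3)]
void GSStreamOut(triangle SOVertex input[3],
                 inout TriangleStream<SOVertex> stream)
{
    for (int i = 0; i < 3; ++i)
        stream.Append(input[i]);
    stream.RestartStrip();
}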

Cheers and good luck
Dark Sylinc

Quote:
Original post by rouncED
I see it goes really fast, but the ambient occlusion isn't as good as mine; yours looks harder than raytraced, while my image is a lot softer. But yours is probably more useful, because who wants to play a game at 12fps at 640x480... I know... I know...


It's nothing that would replace your ambient occlusion, but it improves the way the ambient occlusion gets applied to the lighting. Mine uses a pretty shitty baked ambient occlusion map (Blender is so bad) and absolutely no SSAO or shadow mapping. The standard way would be to multiply the ambient occlusion with the lighting, which is referred to in my pictures as "standard ambient occlusion". This causes some areas to be impossible to light even when a light shines directly onto them. My SCAO not only solves this, it also adds dynamic soft shadows for an unlimited number of lights without the need for shadow maps. It in no way tries to outshine your AO technique; it would just make it more efficient in combination with real lights.

On a side note:
Quote:

The standard way would be to multiply the ambient occlusion with the lighting


In general, only the _ambient_ light is attenuated by _ambient_ occlusion (although I know there can be reasons to do it otherwise). What you are doing sounds like _directional_ occlusion for direct lighting. Here is a relatively recent screen-space approach. Maybe it can help you improve your method.
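For reference, a sketch of the convention being described (an assumed shading routine, not code from this thread): AO scales only the ambient term, while direct light is gated by shadowing instead:

// ao: baked or screen-space ambient occlusion in [0, 1]
// shadow: visibility of the light (e.g. from a shadow map), in [0, 1]
float3 ShadePoint(float3 albedo, float3 n, float3 l,
                  float3 lightColor, float3 ambientColor,
                  float ao, float shadow)
{
    // Ambient occlusion attenuates only the ambient term...
    float3 ambient = ambientColor * albedo * ao;

    // ...while direct lighting is attenuated by shadowing, not by AO.
    float3 direct = lightColor * albedo * saturate(dot(n, l)) * shadow;

    return ambient + direct;
}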

Quote:
Original post by macnihilist
On a side note:
Quote:

The standard way would be to multiply the ambient occlusion with the lighting


In general, only the _ambient_ light is attenuated by _ambient_ occlusion (although I know there can be reasons to do it otherwise). What you are doing sounds like _directional_ occlusion for direct lighting. Here is a relatively recent screen-space approach. Maybe it can help you improve your method.


I know that ambient occlusion should only be applied to the ambient term, but that just doesn't make any sense from a physical standpoint. It gets even worse when the graphics rely heavily on directional lights and mostly have no ambient term. This is what bothered me, so I came up with my technique, which is able to apply it to all lights without these problems. By the way, thanks for the paper; since I'm currently looking into improving the technique by adding some kind of global / self illumination, it comes in handy :)
