rouncED

DX11 deferred rendering and tessellation


So, I plan on using deferred rendering, but I also want DX11 tessellation.

The maps I need are depth maps, normal maps, diffuse maps, and world-space coordinate maps.

I figure I could do four passes over the actual geometry to get front faces and back faces of both a world-space coordinate render and a diffuse render; then I could build the normal and depth maps from the world-space coordinate render in image space.
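For example, given a world-space position map, the normal and a depth value can be reconstructed from neighbouring texels. A minimal HLSL sketch of that image-space derivation (all names here are placeholders, not anything from an actual engine):

// Minimal sketch (placeholder names): reconstruct a normal and a camera
// distance in image space from an already-rendered world-space position map.
Texture2D    gPositionMap : register(t0); // world-space positions
SamplerState gSampler     : register(s0);

float3 gCameraPos;  // world-space camera position
float2 gTexelSize;  // 1.0 / render-target resolution

float4 DeriveNormalDepthPS(float4 pos : SV_Position,
                           float2 uv  : TEXCOORD0) : SV_Target
{
    float3 p      = gPositionMap.Sample(gSampler, uv).xyz;
    float3 pRight = gPositionMap.Sample(gSampler, uv + float2(gTexelSize.x, 0)).xyz;
    float3 pDown  = gPositionMap.Sample(gSampler, uv + float2(0, gTexelSize.y)).xyz;

    // Normal from the cross product of two tangent vectors; flip the sign
    // if your handedness differs.
    float3 n = normalize(cross(pRight - p, pDown - p));

    // Pack the normal in .xyz and the camera distance in .w.
    return float4(n, length(p - gCameraPos));
}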

I would then have all the maps I needed... the only problem is that it takes four whole renders to get there, which means tessellating the whole scene four times...

Is there some way to do the tessellation once and get the colour and coordinate maps at the same time?

Maybe even front faces and back faces?

Usually, depth (position), normal and diffuse are all rendered at the same time using MRT (multiple render-targets). This gives you a back-buffer with many layers, so you can write diffuse to one layer, normal to another, etc...
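For example (a minimal HLSL sketch; the struct and texture names are just placeholders), one geometry pass can fill every map you listed at once; on the C++ side the targets are all bound together with OMSetRenderTargets before the pass:

// Minimal MRT sketch (placeholder names): one geometry pass writes the
// diffuse, normal and world-space coordinate maps at the same time.
Texture2D    gDiffuseTex : register(t0);
SamplerState gSampler    : register(s0);

struct PSIn
{
    float4 pos      : SV_Position;
    float3 worldPos : TEXCOORD0;
    float3 normal   : NORMAL;
    float2 uv       : TEXCOORD1;
};

struct PSOut
{
    float4 diffuse  : SV_Target0; // diffuse map
    float4 normal   : SV_Target1; // normal map
    float4 worldPos : SV_Target2; // world-space coordinate map
};

PSOut GBufferPS(PSIn i)
{
    PSOut o;
    o.diffuse  = gDiffuseTex.Sample(gSampler, i.uv);
    o.normal   = float4(normalize(i.normal), 0);
    o.worldPos = float4(i.worldPos, 1);
    return o;
}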

Why do you need to render out both the front and the back faces?

Ah yes! So there is a way!
Thanks a lot for telling me this... so you set each render target to its own pixel shader, and you tessellate "once" and you get multiple effects out of the single render.

Hodgman, I need back faces to compute my ambient occlusion algorithm (which is more realistic than SSAO, but requires more work) and subsurface scattering; they both need back faces.

I have another question though, and I'm still pretty far from understanding this properly...
Can even the vertex shader be different between render targets, and you would still only tessellate once?

Because I'd still like to twist the idea to somehow get back faces out of it. And if you could do that, you could get a shadow map render out of it as well, which is also very important.

You don't need backfaces for subsurface scattering... You need translucent shadow maps... And if you swear that you won't tell anyone about it, I could give you my new ambient occlusion technique that improves the efficiency of both SSAO and baked AO in a way that makes them both superior to PRT without any noticeable additional cost (the fps stays the same).
Also take a look at the Light Pre-Pass technique... It requires way less RAM than deferred rendering... And you won't even need a buffer for albedo (textures)...

The multiple render targets are only affected by the pixel shader... You just need to calculate multiple colors in the pixel shader and define a PS_Out struct with multiple colors (COLOR0, COLOR1, ...)

[Edited by - DarkChris on October 16, 2010 7:05:51 AM]

Oh, that's a bummer, only different pixel shader methods are allowed with MRT...

Even if you used translucent shadow maps, you'd still have to render more than once.

I'm interested, I'd like to check out your AO technique. Got any screenshots or movies?

If it's lower quality than this, I'm sorta not interested though, because I want quality over speed.


Since I'd like back faces I'm sorta stuck, because even front-facing polys would tessellate into back-facing triangles, so I don't know what to do. Is back-face culling done before or after tessellation?

Quote:
Original post by rouncED
Oh, that's a bummer, only different pixel shader methods are allowed with MRT...


Since when? It works with shader models 1, 2 and 3. I can't think of a reason why they would have changed it for shader model 5?!

Quote:
Original post by rouncED
Even if you used translucent shadow maps, you'd still have to render more than once.


But using the backfaces is not how it's meant to work. You have to use a ray that goes from your surface point through the object in the direction of the light, not in the view direction.
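For example, a common depth-based variant of that idea (a rough HLSL sketch with placeholder names, not necessarily the exact technique being described): render a depth map from the light, then estimate how far the light travelled through the object to reach the shaded point.

// Rough sketch (placeholder names): light-direction thickness for
// subsurface scattering, using a depth map rendered from the light.
Texture2D    gLightDepthMap : register(t0); // linear depth seen from the light
SamplerState gSampler       : register(s0);

float gSigmaT; // extinction coefficient: how quickly the material absorbs light

float Transmittance(float2 lightUV, float lightDepth)
{
    // Depth at which the light ray entered the object.
    float entryDepth = gLightDepthMap.Sample(gSampler, lightUV).r;

    // Approximate distance the ray travelled through the object to reach
    // this surface point.
    float thickness = max(lightDepth - entryDepth, 0);

    // Beer-Lambert style attenuation along the light direction.
    return exp(-gSigmaT * thickness);
}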

Quote:
Original post by rouncED
I'm interested, I'd like to check out your AO technique. Got any screenshots or movies?


The technique is not meant to be used instead of yours. It's meant to improve yours even more. It can easily be implemented in any engine and provides more realistic lighting / shadowing, not better ambient occlusion. I'm currently trying to improve it even more by adding a global illumination effect, but that drastically decreases the fps (compared to the almost-zero fps cost of the standard implementation).

Here are a few pics: [images not preserved]

And a vid:
Youtube - Unlimited Engine - Shadow Casting Ambient Occlusion

I don't know if I want to release the implementation yet, since I don't know how, and if, I can take any advantage of it. I definitely will release it at some point in the future.

I see it goes really fast, but the ambient occlusion isn't as good as mine; yours looks harder than raytraced, my image is a lot softer. But probably yours is more useful, because who wants to play a game at 12 fps at 640x480... I know... I know...

Quote:
Original post by rouncED
so you set each render target to its own pixel shader, and you tessellate "once" and you get multiple effects out of the single render.
You can't set a render target to a pixel shader...
When you render geometry, it uses a pixel shader.
Normally, the pixel shader outputs a single (float4) value, which is stored in the render-target.
With MRT, your pixel shader can output more than one value (e.g. four float4's), which are stored in the many layers of the render-target.

You can always use Stream Out to reuse the tessellated vertices with different pixel shaders.
Although that may defeat the point of using the tessellator, because its main purpose is to increase geometry detail without the memory bandwidth and cache inefficiency problems caused by large vertex buffers.
On the bright side, using stream out you're only running the vertex shader once. The net benefit can only be found through experimentation.
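Roughly, the HLSL side is just a pass-through geometry shader (a sketch with placeholder names; on the C++ side you create it with ID3D11Device::CreateGeometryShaderWithStreamOutput and redraw the captured buffer with ID3D11DeviceContext::DrawAuto):

// Sketch: pass-through geometry shader used for stream-out. The layout of
// SOVertex must match the D3D11_SO_DECLARATION_ENTRY array passed to
// CreateGeometryShaderWithStreamOutput on the C++ side.
struct SOVertex
{
    float4 pos    : SV_Position;
    float3 normal : NORMAL;
    float2 uv     : TEXCOORD0;
};

[maxvertexcount(3)]
void StreamOutGS(triangle SOVertex input[3], inout TriangleStream<SOVertex> stream)
{
    // Emit the tessellated triangle so it is captured into the stream-out
    // buffer; later passes bind that buffer as a vertex buffer and draw it,
    // skipping the hull/domain work entirely.
    [unroll]
    for (int v = 0; v < 3; ++v)
        stream.Append(input[v]);
    stream.RestartStrip();
}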

Cheers and good luck
Dark Sylinc

Quote:
Original post by rouncED
I see it goes really fast, but the ambient occlusion isn't as good as mine; yours looks harder than raytraced, my image is a lot softer. But probably yours is more useful, because who wants to play a game at 12 fps at 640x480... I know... I know...


It's nothing that would replace your ambient occlusion. But it improves the way the ambient occlusion gets applied to the lighting. Mine uses a pretty shitty baked ambient occlusion map (Blender is so bad) and absolutely no SSAO or shadow mapping. The standard way would be to multiply the ambient occlusion with the lighting, which is referred to in my pictures as "standard ambient occlusion". This causes some areas to be unlightable even if a light directly shines onto them. My SCAO not only solves this, it also adds dynamic soft shadows for an infinite number of lights without the need for shadow maps. It in no way tries to outshine your AO technique, it would just make it more efficient in combination with real lights.

On a side note:
Quote:

The standard way would be to multiply the ambient occlusion with the lighting


In general only the _ambient_ light is attenuated by _ambient_ occlusion (although I know there can be reasons to do it otherwise). What you are doing sounds like _directional_ occlusion for direct lighting. Here is a relatively recent screen space approach. Maybe it can help you to improve your method.
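In shader terms, the difference is roughly this (a minimal sketch with placeholder names):

// Sketch (placeholder names): the two ways of applying AO discussed above.
float3 ApplyAO(float ao, float3 ambient, float3 direct)
{
    // Conventional: ambient occlusion attenuates only the ambient term,
    // so a light shining directly on a surface still lights it fully.
    float3 ambientOnly = ao * ambient + direct;

    // Multiplying AO over all lighting also darkens directly lit areas,
    // which produces the "unlightable" spots mentioned earlier.
    float3 everything = ao * (ambient + direct);

    return ambientOnly;
}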

Quote:
Original post by macnihilist
On a side note:
Quote:

The standard way would be to multiply the ambient occlusion with the lighting


In general only the _ambient_ light is attenuated by _ambient_ occlusion (although I know there can be reasons to do it otherwise). What you are doing sounds like _directional_ occlusion for direct lighting. Here is a relatively recent screen space approach. Maybe it can help you to improve your method.


I know that ambient occlusion should only be applied to the ambient term. But that just doesn't make any sense from a physical standpoint. It gets even worse when the graphics heavily rely on directional lights and mostly have no ambient term. This is what bothered me, and so I came up with my technique, which is able to apply it to all lights without these problems. By the way, thanks for the paper. Since I'm currently looking into improving it by adding some kind of global / self illumination, the paper comes in handy :)

