Search the Community

Showing results for tags 'DX11' in content posted in Graphics and GPU Programming.



Found 1000 results

  1. Hello everybody! I decided to write a graphics engine, a killer of Unity and Unreal. If anyone is interested and has free time, join in. The high-level renderer sits on top of low-level OpenGL 4.5 and DirectX 11. Ideally there will be PBR, TAA, SSR, SSAO, some variation of an indirect lighting algorithm, and support for multiple viewports and multiple cameras. The key feature is a COM-based design (binary compatibility is needed). Physics, ray tracing, AI, and VR will not be included. I borrowed the basic architecture from the DGLE engine. The editor is built on Qt (https://github.com/fra-zz-mer/RenderMasterEditor) and there is already a buildable editor. The main goals of the engine are maximum transparency of the architecture and high-quality rendering. For shaders there will be no new language; everything is handled through defines.
  2. Depending on how far back my camera is, I get an artifact from my point light. Here is what I mean before going into details: I have a flat rectangle and a standing rectangle of 20x20 dimensions with a point light of radius 22. My camera currently sits at -76, but if I move the camera to -75 then even the most extreme case, a point light at z = 3.0 from above, no longer shows the artifact. I am not sure what could be causing it, but I have tried several ideas to no avail. The one with the biggest benefit seemed to be scaling the world matrix used to render the point light's sphere by a small modifier like 1.1 in addition to the radius, but that only masked the issue for a little while. Looking at the render targets in RenderDoc going into the lighting pass, they seem correct, so my guess is that it is my shading code. I am not sure what other details would help describe the problem, but if something is missing I will post it. Shader code:

    Texture2D NormalMap : register( t0 );
    Texture2D DiffuseAlbedoMap : register( t1 );
    Texture2D SpecularAlbedoMap : register( t2 );
    Texture2D PositionMap : register( t3 );

    cbuffer WorldViewProjCB : register( b0 )
    {
        matrix WorldViewProjMatrix;
        matrix WorldViewMatrix;
    }

    cbuffer CameraPosition : register( b2 )
    {
        float3 CameraPosition;
    }

    cbuffer LightInfo : register( b3 )
    {
        float3 LightPosition;
        float3 LightColor;
        float3 LightDirection;
        float2 SpotLightAngles;
        float4 LightRange;
    };

    struct VertexShaderInput
    {
        float4 Position : POSITION;
    };

    struct VertexShaderOutput
    {
        float4 PositionCS : SV_Position;
        float3 ViewRay : VIEWRAY;
    };

    VertexShaderOutput VertexShaderFunction(in VertexShaderInput input)
    {
        VertexShaderOutput output;
        output.PositionCS = mul( input.Position, WorldViewProjMatrix );
        float3 positionVS = mul( input.Position, WorldViewMatrix ).xyz;
        output.ViewRay = positionVS;
        return output;
    }

    void GetGBufferAttributes(in float2 screenPos, out float3 normal, out float3 position,
                              out float3 diffuseAlbedo, out float3 specularAlbedo, out float specularPower)
    {
        int3 sampleIndices = int3(screenPos.xy, 0);
        normal = NormalMap.Load(sampleIndices).xyz;
        position = PositionMap.Load(sampleIndices).xyz;
        diffuseAlbedo = DiffuseAlbedoMap.Load(sampleIndices).xyz;
        float4 spec = SpecularAlbedoMap.Load(sampleIndices);
        specularAlbedo = spec.xyz;
        specularPower = spec.w;
    }

    float3 CalcLighting(in float3 normal, in float3 position, in float3 diffuseAlbedo,
                        in float3 specularAlbedo, in float specularPower)
    {
        float3 L = 0;
        float attenuation = 1.0f;
        L = LightPosition - position;
        float dist = length(L);
        attenuation = max(0, 1.0f - (dist / LightRange.x));
        L /= dist;
        float nDotL = saturate(dot(normal, L));
        float3 diffuse = nDotL * LightColor * diffuseAlbedo;
        float3 V = CameraPosition - position;
        float3 H = normalize( L + V );
        float3 specular = pow(saturate(dot(normal, H)), specularPower) * LightColor * specularAlbedo.xyz * nDotL;
        return (diffuse + specular) * attenuation;
    }

    float4 PixelShaderFunction( in float4 screenPos : SV_Position ) : SV_Target0
    {
        float3 normal;
        float3 position;
        float3 diffuseAlbedo;
        float3 specularAlbedo;
        float specularPower;
        GetGBufferAttributes(screenPos.xy, normal, position, diffuseAlbedo, specularAlbedo, specularPower);
        float3 lighting = CalcLighting(normal, position, diffuseAlbedo, specularAlbedo, specularPower);
        return float4(lighting, 1.0f);
    }
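One hedged guess, not a confirmed diagnosis: if the light volume is a low-poly sphere mesh scaled exactly to the light radius, its flat faces dip inside the true radius, so pixels near the edge of the attenuation range fall outside the rasterized volume and get cut off; that would also explain why scaling by 1.1 masks it. A minimal sketch of computing a scale factor that makes the sphere mesh fully enclose the radius (the tessellation counts are assumptions about how the sphere mesh was built):

    #include <cmath>

    // Roughly how much a unit sphere mesh must be scaled up so its flat faces
    // still enclose the unit sphere. 'slices' is the segment count around the
    // equator, 'stacks' the count from pole to pole (both are assumptions).
    float CircumscribedSphereScale(int slices, int stacks)
    {
        const float pi = 3.14159265f;
        float stepAround = 2.0f * pi / static_cast<float>(slices);
        float stepUpDown = pi / static_cast<float>(stacks);
        float largestStep = (stepAround > stepUpDown) ? stepAround : stepUpDown;
        // A flat span covering angle 'a' dips to roughly cos(a/2) of the true radius.
        return 1.0f / std::cos(largestStep * 0.5f);
    }

    // Usage sketch: sphereWorldScale = lightRadius * CircumscribedSphereScale(16, 16);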
  3. Hi all, after seeing some missing pixels along the edges of meshes that sit beside each other / are connected, I first thought it was mesh specific. But after testing the following, I still see it occurring: create a flat grid/plane without textures, and output only black on a white background (buffer cleared). When I draw this plane twice, e.g. at 0,0 (XZ) and 4,0 (XZ), with the plane being 4x4, I still see those strange 'see-through' pixels (I am aiming for properly connected planes). After quite a bit of researching and testing (disable skybox, change buffer clear color, disable depth buffer, etc.), I still keep getting this. I've been reading up on T-junction issues, but I don't think that is the case here (the verts all line up nicely). As a workaround for now, I've made the black foundation planes (below the scene) into boxes with minimal height and the bottom triangles removed; that way the 'holes' are not visible because the boxes have sides. Here is a screenshot of what I'm getting. I wonder if someone has thoughts on this: is this 'normal', do studios work around it, etc.? There might be a rounding issue somewhere, but I wouldn't expect that with these whole numbers (everything at .0).
  4. I'm trying to offset the depth value of all pixels written by an HLSL pixel shader by a constant view-space value (it's used in fighting games like Guilty Gear and Street Fighter V to simulate 2D layering effects; I wish to do something similar). The projection matrix is generated in SharpDX using a standard perspective projection (PerspectiveFovLH, which makes a matrix similar to the one described at the bottom there). My pixel shader looks like this:

    struct PSoutput
    {
        float4 color : SV_TARGET;
        float depth : SV_DEPTH;
    };

    PSoutput PShaderNormalDepth(VOutColorNormalView input)
    {
        PSoutput output;
        output.color = BlinnPhong(input.color, input.normal, input.viewDirection);
        output.depth = input.position.z; // input.position is just the standard SV_POSITION
        return output;
    }

This gives me exactly the same results as before I included the depth output. Given a view-space offset value passed in a constant buffer, how do I compute the correct offset to apply from there? EDIT: I've been stuck on this for weeks, but of course a bit after posting I figured it out, after reading this. With a standard projection, the clip-space position.z really contains D = a * (1/z) + b, where b and a are elements _33 and _43 of the projection matrix and z is the view-space depth. This means the view-space depth can be recovered as z = a / (D - b). So to add a given view-space depth offset in the pixel shader, you do this:

    float trueZ = projectionMatrix._43 / (input.position.z - projectionMatrix._33);
    output.depth = projectionMatrix._43 / (trueZ + zOffset) + projectionMatrix._33;
  5. Hi, I wrote my animation importer for Direct3D 11 using assimp and an FBX file exported from Blender, and everything is working after I flipped the axes such that Y=Z and Z=-Y. I basically multiplied BoneOffsetMatrix = BoneOffsetMatrix * FlipMatrix and GlobalInverseMatrix = Inverse(FlipMatrix) * GlobalInverseMatrix, where FlipMatrix = ( 1,0,0,0, 0,0,1,0, 0,-1,0,0, 0,0,0,1 ) (matrices in row-major format). But why do I have to? There are tutorials (for OpenGL, but still) where this worked fine without this step. Is it a setting in Blender that is wrong? I am applying those transformations to my own vertices, so I'm not using the ones provided in the FBX file. But even if I did, those would be wrong without flipping the axes after the global inverse transformation. Even though it is working, I want to give my editor to modders of my game, so I can't be sure it will work on their end, since I had to add a step that should not be required according to the documentation. Cheers, Magogan
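For reference, a minimal DirectXMath sketch of the workaround described above (row-major, row-vector convention as in the post; the function and matrix names just mirror the post and are otherwise illustrative):

    #include <DirectXMath.h>
    using namespace DirectX;

    // Maps Blender's Z-up axes into Y-up: Y = Z, Z = -Y.
    const XMMATRIX kFlip = XMMatrixSet(
        1.0f,  0.0f, 0.0f, 0.0f,
        0.0f,  0.0f, 1.0f, 0.0f,
        0.0f, -1.0f, 0.0f, 0.0f,
        0.0f,  0.0f, 0.0f, 1.0f);

    void FlipImportedMatrices(XMMATRIX& boneOffset, XMMATRIX& globalInverse)
    {
        // BoneOffsetMatrix = BoneOffsetMatrix * FlipMatrix
        boneOffset = XMMatrixMultiply(boneOffset, kFlip);
        // GlobalInverseMatrix = Inverse(FlipMatrix) * GlobalInverseMatrix
        globalInverse = XMMatrixMultiply(XMMatrixInverse(nullptr, kFlip), globalInverse);
    }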
  6. Hey, I'm working on an engine. Whenever we run dispatch calls, they are much more expensive than draw calls in the driver on NVIDIA cards (checked with Nsight): a few hundred dispatches can cost about the same as 5000 draw calls. Is this normal?
  7. Hi, sorry for my English. My computer specs are: Windows 8.1, DirectX 11.2, GeForce GTX 750 Ti with the latest drivers. In my project I must use the 'max' color blend mode via SDL_ComposeCustomBlendMode, which in SDL 2.0.9 is supported by the direct3d11 renderer only. Changing defines in SDL_config.h or SDL_config_windows.h (SDL_VIDEO_RENDER_D3D11 to 1 and SDL_VIDEO_RENDER_D3D to 0) doesn't help. SDL says my system supports the direct3d, opengl, opengles2 and software renderers. What should I do to activate the direct3d11 renderer so I can use blend mode max?
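Not an authoritative answer, just a hedged sketch: if the SDL library being linked was actually built with the D3D11 backend, it shows up when enumerating render drivers and can be requested with the SDL_HINT_RENDER_DRIVER hint before creating the renderer. If "direct3d11" is not in that list, the SDL build itself lacks the backend, and changing SDL_config.h only matters when rebuilding SDL, not the application.

    #include <SDL.h>
    #include <cstdio>
    #include <cstring>

    // List the render drivers compiled into this SDL build and ask for D3D11.
    SDL_Renderer* CreateD3D11Renderer(SDL_Window* window)
    {
        bool available = false;
        int n = SDL_GetNumRenderDrivers();
        for (int i = 0; i < n; ++i) {
            SDL_RendererInfo info;
            if (SDL_GetRenderDriverInfo(i, &info) == 0) {
                std::printf("render driver %d: %s\n", i, info.name);
                if (std::strcmp(info.name, "direct3d11") == 0)
                    available = true;
            }
        }
        if (!available)
            return nullptr; // this SDL build was compiled without the D3D11 renderer

        SDL_SetHint(SDL_HINT_RENDER_DRIVER, "direct3d11"); // must be set before SDL_CreateRenderer
        return SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED);
    }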
  8. Hello, I'm doing tessellation, and while I have the positions set up correctly using quads, I am at a loss how to generate smooth normals from them. I suppose this should be done in the domain shader, but what would that process look like? I am using a heightmap for tessellation, but I would rather generate the normals from the geometry than use a normal map, if possible. Cheers
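Not from the post, just a hedged illustration of the usual approach: take finite differences of the same heightmap the tessellation displaces with and build the normal from the two slopes. The math is shown CPU-side here for clarity; in a domain shader the neighbouring heights would come from SampleLevel on the heightmap, and the texel size and height scale below are assumptions.

    #include <DirectXMath.h>
    #include <vector>
    using namespace DirectX;

    struct Heightmap
    {
        std::vector<float> samples; // row-major height values
        int width = 0;
        int height = 0;

        float At(int x, int y) const
        {
            x = x < 0 ? 0 : (x >= width ? width - 1 : x);   // clamp addressing
            y = y < 0 ? 0 : (y >= height ? height - 1 : y);
            return samples[y * width + x];
        }
    };

    // Central-difference normal at sample (x, y). 'cellSize' is the world-space
    // spacing between samples, 'heightScale' the world-space height of a 1.0 sample.
    XMFLOAT3 NormalFromHeightmap(const Heightmap& map, int x, int y,
                                 float cellSize, float heightScale)
    {
        float dx = (map.At(x + 1, y) - map.At(x - 1, y)) * heightScale / (2.0f * cellSize);
        float dz = (map.At(x, y + 1) - map.At(x, y - 1)) * heightScale / (2.0f * cellSize);
        XMVECTOR n = XMVector3Normalize(XMVectorSet(-dx, 1.0f, -dz, 0.0f));
        XMFLOAT3 out;
        XMStoreFloat3(&out, n);
        return out;
    }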
  9. Hello all, I have made a simple shadow map shader with a small problem in my implementation: I know something is missing, because it darkens polygons that are not facing the light when it should not. Since my knowledge of the available shader functions is limited, I cannot spot the problem. I put this under the DX11 HLSL tag, though GLSL and any other hints and tips are appreciated and welcome ^_^y IMAGE: CODE:

    // Shadow color applying
    //------------------------------------------------------------------------------------------------------------------

    //*>> Pixel position in light space
    //
    float4 m_LightingPos = mul(IN.WorldPos3D, __SM_LightViewProj);

    //*>> Shadow texture coordinates
    //
    float2 m_ShadowTexCoord = 0.5 * m_LightingPos.xy / m_LightingPos.w + float2( 0.5, 0.5 );
    m_ShadowTexCoord.y = 1.0f - m_ShadowTexCoord.y;

    //*>> Shadow map depth
    //
    float m_ShadowDepth = tex2D( ShadowMapSampler, m_ShadowTexCoord ).r;

    //*>> Pixel depth
    //
    float m_PixelDepth = (m_LightingPos.z / m_LightingPos.w) - 0.001f;

    //*>> Pixel depth in front of the shadow map depth then apply shadow color
    //
    if ( m_PixelDepth > m_ShadowDepth )
    {
        m_ColorView *= float4(0.5,0.5,0.5,0);
    }

    // Final color
    //------------------------------------------------------------------------------------------------------------------
    return m_ColorView;
  10. I was wondering if anyone knows of any tools to help design procedural textures. More specifically, I need something that will output the actual procedure rather than just the texture. It could output HLSL or some pseudocode that I can port to HLSL. The important thing is that I need the algorithm, not just the texture, so I can put it into a pixel shader myself. I posted the question on the Allegorithmic forum, but someone answered that while Substance Designer uses procedures internally, it doesn't support outputting code, so I guess that one is out.
  11. Hello! I would like to introduce Diligent Engine, a project that I've recently been working on. Diligent Engine is a light-weight cross-platform abstraction layer between the application and the platform-specific graphics API. Its main goal is to take advantage of next-generation APIs such as Direct3D12 and Vulkan, while at the same time providing support for older platforms via Direct3D11, OpenGL and OpenGLES. Diligent Engine exposes a common front-end for all supported platforms and provides interoperability with the underlying native API. A shader source code converter allows shaders authored in HLSL to be translated to GLSL and used on all platforms. Diligent Engine supports integration with Unity and is designed to be used as a graphics subsystem in a standalone game engine, a Unity native plugin or any other 3D application. It is distributed under the Apache 2.0 license and is free to use. Full source code is available for download on GitHub.

Features:
  • True cross-platform
    • Exact same client code for all supported platforms and rendering backends
    • No #if defined(_WIN32) ... #elif defined(LINUX) ... #elif defined(ANDROID) ...
    • No #if defined(D3D11) ... #elif defined(D3D12) ... #elif defined(OPENGL) ...
    • Exact same HLSL shaders run on all platforms and all backends
  • Modular design
    • Components are clearly separated logically and physically and can be used as needed
    • Only take what you need for your project (do not want to keep samples and tutorials in your codebase? Simply remove the Samples submodule. Only need core functionality? Use only the Core submodule)
    • No 15000 lines-of-code files
  • Clear object-based interface
  • No global states
  • Key graphics features:
    • Automatic shader resource binding designed to leverage the next-generation rendering APIs
    • Multithreaded command buffer generation
    • 50,000 draw calls at 300 fps with the D3D12 backend
    • Descriptor, memory and resource state management
  • Modern C++ features to make the code fast and reliable

The following platforms and low-level APIs are currently supported:
  • Windows Desktop: Direct3D11, Direct3D12, OpenGL
  • Universal Windows: Direct3D11, Direct3D12
  • Linux: OpenGL
  • Android: OpenGLES
  • MacOS: OpenGL
  • iOS: OpenGLES

API Basics

Initialization

The engine can perform initialization of the API or attach to an already existing D3D11/D3D12 device or OpenGL/GLES context. For instance, the following code shows how the engine can be initialized in D3D12 mode:

    #include "RenderDeviceFactoryD3D12.h"
    using namespace Diligent;

    // ...
    GetEngineFactoryD3D12Type GetEngineFactoryD3D12 = nullptr;
    // Load the dll and import the GetEngineFactoryD3D12() function
    LoadGraphicsEngineD3D12(GetEngineFactoryD3D12);
    auto *pFactoryD3D11 = GetEngineFactoryD3D12();

    EngineD3D12Attribs EngD3D12Attribs;
    EngD3D12Attribs.CPUDescriptorHeapAllocationSize[0] = 1024;
    EngD3D12Attribs.CPUDescriptorHeapAllocationSize[1] = 32;
    EngD3D12Attribs.CPUDescriptorHeapAllocationSize[2] = 16;
    EngD3D12Attribs.CPUDescriptorHeapAllocationSize[3] = 16;
    EngD3D12Attribs.NumCommandsToFlushCmdList = 64;

    RefCntAutoPtr<IRenderDevice> pRenderDevice;
    RefCntAutoPtr<IDeviceContext> pImmediateContext;
    SwapChainDesc SwapChainDesc;
    RefCntAutoPtr<ISwapChain> pSwapChain;
    pFactoryD3D11->CreateDeviceAndContextsD3D12( EngD3D12Attribs, &pRenderDevice, &pImmediateContext, 0 );
    pFactoryD3D11->CreateSwapChainD3D12( pRenderDevice, pImmediateContext, SwapChainDesc, hWnd, &pSwapChain );

Creating Resources

Device resources are created by the render device.
The two main resource types are buffers, which represent linear memory, and textures, which use memory layouts optimized for fast filtering. To create a buffer, you need to populate the BufferDesc structure and call IRenderDevice::CreateBuffer(). The following code creates a uniform (constant) buffer:

    BufferDesc BuffDesc;
    BuffDesc.Name = "Uniform buffer";
    BuffDesc.BindFlags = BIND_UNIFORM_BUFFER;
    BuffDesc.Usage = USAGE_DYNAMIC;
    BuffDesc.uiSizeInBytes = sizeof(ShaderConstants);
    BuffDesc.CPUAccessFlags = CPU_ACCESS_WRITE;
    m_pDevice->CreateBuffer( BuffDesc, BufferData(), &m_pConstantBuffer );

Similarly, to create a texture, populate the TextureDesc structure and call IRenderDevice::CreateTexture() as in the following example:

    TextureDesc TexDesc;
    TexDesc.Name = "My texture 2D";
    TexDesc.Type = TEXTURE_TYPE_2D;
    TexDesc.Width = 1024;
    TexDesc.Height = 1024;
    TexDesc.Format = TEX_FORMAT_RGBA8_UNORM;
    TexDesc.Usage = USAGE_DEFAULT;
    TexDesc.BindFlags = BIND_SHADER_RESOURCE | BIND_RENDER_TARGET | BIND_UNORDERED_ACCESS;
    m_pRenderDevice->CreateTexture( TexDesc, TextureData(), &m_pTestTex );

Initializing Pipeline State

Diligent Engine follows the Direct3D12 style of configuring the graphics/compute pipeline: one big Pipeline State Object (PSO) encompasses all required states (all shader stages, input layout description, depth-stencil, rasterizer and blend state descriptions, etc.).

Creating Shaders

To create a shader, populate the ShaderCreationAttribs structure. An important member is ShaderCreationAttribs::SourceLanguage. The following are valid values for this member:
  • SHADER_SOURCE_LANGUAGE_DEFAULT - The shader source format matches the underlying graphics API: HLSL for D3D11 or D3D12 mode, and GLSL for OpenGL and OpenGLES modes.
  • SHADER_SOURCE_LANGUAGE_HLSL - The shader source is in HLSL. For OpenGL and OpenGLES modes, the source code will be converted to GLSL. See the shader converter for details.
  • SHADER_SOURCE_LANGUAGE_GLSL - The shader source is in GLSL. There is currently no GLSL to HLSL converter.

To allow grouping of resources based on the frequency of expected change, Diligent Engine introduces a classification of shader variables:
  • Static variables (SHADER_VARIABLE_TYPE_STATIC) are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera attributes or global light attributes constant buffers.
  • Mutable variables (SHADER_VARIABLE_TYPE_MUTABLE) define resources that are expected to change at per-material frequency. Examples may include diffuse textures, normal maps, etc.
  • Dynamic variables (SHADER_VARIABLE_TYPE_DYNAMIC) are expected to change frequently and randomly.

This post describes the resource binding model in Diligent Engine.
The following is an example of shader initialization:

    ShaderCreationAttribs Attrs;
    Attrs.Desc.Name = "MyPixelShader";
    Attrs.FilePath = "MyShaderFile.fx";
    Attrs.SearchDirectories = "shaders;shaders\\inc;";
    Attrs.EntryPoint = "MyPixelShader";
    Attrs.Desc.ShaderType = SHADER_TYPE_PIXEL;
    Attrs.SourceLanguage = SHADER_SOURCE_LANGUAGE_HLSL;

    BasicShaderSourceStreamFactory BasicSSSFactory(Attrs.SearchDirectories);
    Attrs.pShaderSourceStreamFactory = &BasicSSSFactory;

    ShaderVariableDesc ShaderVars[] =
    {
        {"g_StaticTexture", SHADER_VARIABLE_TYPE_STATIC},
        {"g_MutableTexture", SHADER_VARIABLE_TYPE_MUTABLE},
        {"g_DynamicTexture", SHADER_VARIABLE_TYPE_DYNAMIC}
    };
    Attrs.Desc.VariableDesc = ShaderVars;
    Attrs.Desc.NumVariables = _countof(ShaderVars);
    Attrs.Desc.DefaultVariableType = SHADER_VARIABLE_TYPE_STATIC;

    StaticSamplerDesc StaticSampler;
    StaticSampler.Desc.MinFilter = FILTER_TYPE_LINEAR;
    StaticSampler.Desc.MagFilter = FILTER_TYPE_LINEAR;
    StaticSampler.Desc.MipFilter = FILTER_TYPE_LINEAR;
    StaticSampler.TextureName = "g_MutableTexture";
    Attrs.Desc.NumStaticSamplers = 1;
    Attrs.Desc.StaticSamplers = &StaticSampler;

    ShaderMacroHelper Macros;
    Macros.AddShaderMacro("USE_SHADOWS", 1);
    Macros.AddShaderMacro("NUM_SHADOW_SAMPLES", 4);
    Macros.Finalize();
    Attrs.Macros = Macros;

    RefCntAutoPtr<IShader> pShader;
    m_pDevice->CreateShader( Attrs, &pShader );

Creating the Pipeline State Object

To create a pipeline state object, define an instance of the PipelineStateDesc structure. The structure defines the pipeline specifics such as whether the pipeline is a compute pipeline, the number and format of render targets, and the depth-stencil format:

    // This is a graphics pipeline
    PSODesc.IsComputePipeline = false;
    PSODesc.GraphicsPipeline.NumRenderTargets = 1;
    PSODesc.GraphicsPipeline.RTVFormats[0] = TEX_FORMAT_RGBA8_UNORM_SRGB;
    PSODesc.GraphicsPipeline.DSVFormat = TEX_FORMAT_D32_FLOAT;

The structure also defines depth-stencil, rasterizer, blend state, input layout and other parameters. For instance, the rasterizer state can be defined as in the code snippet below:

    // Init rasterizer state
    RasterizerStateDesc &RasterizerDesc = PSODesc.GraphicsPipeline.RasterizerDesc;
    RasterizerDesc.FillMode = FILL_MODE_SOLID;
    RasterizerDesc.CullMode = CULL_MODE_NONE;
    RasterizerDesc.FrontCounterClockwise = True;
    RasterizerDesc.ScissorEnable = True;
    //RSDesc.MultisampleEnable = false; // do not allow msaa (fonts would be degraded)
    RasterizerDesc.AntialiasedLineEnable = False;

When all fields are populated, call IRenderDevice::CreatePipelineState() to create the PSO:

    m_pDev->CreatePipelineState(PSODesc, &m_pPSO);

Binding Shader Resources

Shader resource binding in Diligent Engine is based on grouping variables into three groups (static, mutable and dynamic). Static variables are variables that are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera attributes or global light attributes constant buffers.
They are bound directly to the shader object:

    PixelShader->GetShaderVariable( "g_tex2DShadowMap" )->Set( pShadowMapSRV );

Mutable and dynamic variables are bound via a new object called a Shader Resource Binding (SRB), which is created by the pipeline state:

    m_pPSO->CreateShaderResourceBinding(&m_pSRB);

Dynamic and mutable resources are then bound through the SRB object:

    m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "tex2DDiffuse")->Set(pDiffuseTexSRV);
    m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "cbRandomAttribs")->Set(pRandomAttrsCB);

The difference between mutable and dynamic resources is that mutable ones can only be set once for every instance of a shader resource binding, while dynamic resources can be set multiple times. It is important to set the variable type properly, as this may affect performance: static variables are generally the most efficient, followed by mutable, while dynamic variables are the most expensive from a performance point of view. This post explains shader resource binding in more detail.

Setting the Pipeline State and Invoking a Draw Command

Before any draw command can be invoked, all required vertex and index buffers as well as the pipeline state should be bound to the device context:

    // Clear render target
    const float zero[4] = {0, 0, 0, 0};
    m_pContext->ClearRenderTarget(nullptr, zero);

    // Set vertex and index buffers
    IBuffer *buffer[] = {m_pVertexBuffer};
    Uint32 offsets[] = {0};
    Uint32 strides[] = {sizeof(MyVertex)};
    m_pContext->SetVertexBuffers(0, 1, buffer, strides, offsets, SET_VERTEX_BUFFERS_FLAG_RESET);
    m_pContext->SetIndexBuffer(m_pIndexBuffer, 0);
    m_pContext->SetPipelineState(m_pPSO);

Also, all shader resources must be committed to the device context:

    m_pContext->CommitShaderResources(m_pSRB, COMMIT_SHADER_RESOURCES_FLAG_TRANSITION_RESOURCES);

When all required states and resources are bound, IDeviceContext::Draw() can be used to execute a draw command, or IDeviceContext::DispatchCompute() to execute a compute command. Note that for a draw command a graphics pipeline must be bound, and for a dispatch command a compute pipeline must be bound. Draw() takes a DrawAttribs structure as an argument. The structure members define all attributes required to perform the command (primitive topology, number of vertices or indices, whether the draw call is indexed, instanced or indirect, etc.). For example:

    DrawAttribs attrs;
    attrs.IsIndexed = true;
    attrs.IndexType = VT_UINT16;
    attrs.NumIndices = 36;
    attrs.Topology = PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;
    pContext->Draw(attrs);

Tutorials and Samples

The GitHub repository contains a number of tutorials and sample applications that demonstrate the API usage.
  • Tutorial 01 - Hello Triangle: shows how to render a simple triangle using the Diligent Engine API.
  • Tutorial 02 - Cube: demonstrates how to render an actual 3D object, a cube. It shows how to load shaders from files and create and use vertex, index and uniform buffers.
  • Tutorial 03 - Texturing: demonstrates how to apply a texture to a 3D object. It shows how to load a texture from a file, create a shader resource binding object and sample a texture in the shader.
  • Tutorial 04 - Instancing: demonstrates how to use instancing to render multiple copies of one object using a unique transformation matrix for every copy.
  • Tutorial 05 - Texture Array: demonstrates how to combine instancing with texture arrays to use a unique texture for every instance.
  • Tutorial 06 - Multithreading: shows how to generate command lists in parallel from multiple threads.
  • Tutorial 07 - Geometry Shader: shows how to use a geometry shader to render a smooth wireframe.
  • Tutorial 08 - Tessellation: shows how to use hardware tessellation to implement a simple adaptive terrain rendering algorithm.
  • Tutorial 09 - Quads: shows how to render multiple 2D quads, frequently switching textures and blend modes.

The AntTweakBar sample demonstrates how to use the AntTweakBar library to create a simple user interface. The atmospheric scattering sample is a more advanced example; it demonstrates how Diligent Engine can be used to implement various rendering tasks: loading textures from files, using complex shaders, rendering to textures, using compute shaders and unordered access views, etc. The repository also includes an Asteroids performance benchmark based on this demo developed by Intel. It renders 50,000 unique textured asteroids and lets you compare the performance of the D3D11 and D3D12 implementations; every asteroid is a combination of one of 1000 unique meshes and one of 10 unique textures.

Integration with Unity

Diligent Engine supports integration with Unity through the Unity low-level native plugin interface. The engine relies on Native API Interoperability to attach to the graphics API initialized by Unity. After the Diligent Engine device and context are created, they can be used as usual to create resources and issue rendering commands. GhostCubePlugin shows an example of how Diligent Engine can be used to render a ghost cube that is only visible as a reflection in a mirror.
  12. Hi, I am doing a hobby project where I am trying to make a game work with Oculus. I do not have the game's source code, but I believe it supplies the OpenVR API with a DX10 texture (the game itself is DX9; I believe internally they convert the DX9 texture to DX10 for submission to SteamVR/OpenVR). I originally tried to do everything in DX10 on my side, but I don't think Oculus supports DX10. So now I'd like to try converting the DX10 texture to DX11 and use that in Oculus' texture swap chain. I have a couple of questions:
  • Could someone suggest a way to convert a DX10 texture to DX11? I am going to try the techniques described at https://docs.microsoft.com/en-us/windows/desktop/direct3darticles/surface-sharing-between-windows-graphics-apis, but I am not a DX guru (in fact I am not a graphics programmer and the last time I touched D3D was 15 years ago), so if someone could provide a simpler approach, that'd be very helpful. Or, at least, could someone clarify whether the article I referenced is indeed what I need?
  • Given a texture pointer, how can I figure out if it is indeed a DX10 texture? I was able to get the description, and it seems to be filled reasonably: ID3D10Texture2D *src = (ID3D10Texture2D*)texture->handle; D3D10_TEXTURE2D_DESC srcDesc; src->GetDesc(&srcDesc); But how could I, for example, tell if it is a DX10 or a DX10.1 texture?
  • It looks like I will have to instantiate a DX11 device myself. Is there any harm in having multiple D3D11 devices instantiated (one per swap chain), or do I need to share a single device? Thanks for your help.
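A hedged sketch of the cross-API sharing path the linked article describes, assuming the DX10 texture was created with the shared misc flag (D3D10_RESOURCE_MISC_SHARED); if it was not, GetSharedHandle fails and the data would first have to be copied into a shareable intermediate texture on the DX10 side:

    #include <d3d10.h>
    #include <d3d11.h>
    #include <dxgi.h>

    // Open an existing shareable D3D10 texture on a D3D11 device, no CPU copy involved.
    HRESULT OpenOnD3D11(ID3D10Texture2D* src10, ID3D11Device* device11, ID3D11Texture2D** out11)
    {
        IDXGIResource* dxgiRes = nullptr;
        HRESULT hr = src10->QueryInterface(__uuidof(IDXGIResource), (void**)&dxgiRes);
        if (FAILED(hr)) return hr;

        HANDLE sharedHandle = nullptr;
        hr = dxgiRes->GetSharedHandle(&sharedHandle); // only succeeds for resources created as shared
        dxgiRes->Release();
        if (FAILED(hr)) return hr;

        return device11->OpenSharedResource(sharedHandle, __uuidof(ID3D11Texture2D), (void**)out11);
    }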
  13. Hello, I want to upgrade my program from D3D9 to D3D11, since I need the power of the newer API. The unfortunate thing is that D3D11 no longer supports the ID3DXMesh interface, and creating one requires an FVF and a D3D9 device. The D3D9 device is the thing I don't have when I start the application as a D3D11 application, of course, and I don't have the FVF handy either. However, I can't live without the ID3DXMesh interface, because my original program is totally driven by it. How do I get it to work again? Thanks, Jack
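There is no drop-in ID3DXMesh in D3D11, so for context, a hedged sketch of the minimal replacement most ports end up with: a plain vertex/index buffer pair plus a draw helper. The vertex layout and the 32-bit index format are illustrative assumptions standing in for the old FVF.

    #include <d3d11.h>
    #include <DirectXMath.h>
    #include <wrl/client.h>

    // Illustrative vertex layout replacing the old FVF.
    struct Vertex
    {
        DirectX::XMFLOAT3 position;
        DirectX::XMFLOAT3 normal;
        DirectX::XMFLOAT2 uv;
    };

    // Minimal stand-in for what ID3DXMesh used to own.
    struct Mesh
    {
        Microsoft::WRL::ComPtr<ID3D11Buffer> vertexBuffer;
        Microsoft::WRL::ComPtr<ID3D11Buffer> indexBuffer;
        UINT indexCount = 0;

        void Draw(ID3D11DeviceContext* ctx) const
        {
            UINT stride = sizeof(Vertex), offset = 0;
            ctx->IASetVertexBuffers(0, 1, vertexBuffer.GetAddressOf(), &stride, &offset);
            ctx->IASetIndexBuffer(indexBuffer.Get(), DXGI_FORMAT_R32_UINT, 0); // assumes 32-bit indices
            ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
            ctx->DrawIndexed(indexCount, 0, 0);
        }
    };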
  14. I call CopyResource to get some data from the GPU to the CPU. If I try to Map immediately, I obviously suffer a performance hit. So I have a ring buffer and I call Map delayed by, say, 3-4 frames. This works much better. Now my question is: how do I know after how many frames I can safely Map without incurring an additional performance hit? I have noticed, for instance, that each extra frame of delay gives linearly better performance, but after about 5-6 frames, waiting longer no longer changes anything. Is there a way to "query" DX11 to know when it's "safe" to Map a resource?
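One hedged way to answer the "is it safe yet" question directly: Map with the D3D11_MAP_FLAG_DO_NOT_WAIT flag, which returns DXGI_ERROR_WAS_STILL_DRAWING instead of stalling while the GPU still owns the staging copy. A minimal sketch (the buffer/size handling is illustrative):

    #include <d3d11.h>
    #include <cstring>

    // Try to read back a staging resource without blocking. Returns true only when
    // the GPU is done with it and the data was copied out.
    bool TryReadback(ID3D11DeviceContext* ctx, ID3D11Resource* staging, void* dst, size_t size)
    {
        D3D11_MAPPED_SUBRESOURCE mapped = {};
        HRESULT hr = ctx->Map(staging, 0, D3D11_MAP_READ, D3D11_MAP_FLAG_DO_NOT_WAIT, &mapped);
        if (hr == DXGI_ERROR_WAS_STILL_DRAWING)
            return false;          // GPU not finished yet, try again next frame
        if (FAILED(hr))
            return false;          // some other error
        std::memcpy(dst, mapped.pData, size);
        ctx->Unmap(staging, 0);
        return true;
    }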
  15. Hi, I am trying to initialize a skybox/cubemap in an Oculus app. Oculus's sample shows how to initialize the texture swap chain texture, but the difference in my use case is that I do not initialize from .dds bytes read from disk; I already have an ID3D11Texture2D. I see samples online that read back the texture bytes, but that involves a CPU copy of the memory, and I wonder if it can be avoided. Here's roughly what I am doing:

    int numFaces = 6;
    for (int i = 0; i < numFaces; ++i)
    {
        ID3D11Texture2D *faceSrc = textures->handle;
        ++textures;
        context->UpdateSubresource(tex, i, nullptr, (const void*)faceSrc, srcDesc.Width * 4, srcDesc.Width * srcDesc.Height * 4);
    }

However, that crashes with an access violation in the NVIDIA driver. Any suggestions? Thanks!
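A hedged observation: UpdateSubresource expects a pointer to CPU memory, not an ID3D11Texture2D*, which would explain the access violation; GPU-to-GPU copies between textures go through CopyResource/CopySubresourceRegion instead. A minimal sketch copying six face textures into mip 0 of a cube texture (matching sizes/formats and the mip count are assumptions):

    #include <d3d11.h>

    // Copy six single-face textures into the six array slices of a cube texture,
    // entirely on the GPU. Assumes 'cube' was created with ArraySize = 6 and
    // MiscFlags = D3D11_RESOURCE_MISC_TEXTURECUBE, with formats/sizes matching the faces.
    void FillCubemap(ID3D11DeviceContext* ctx, ID3D11Texture2D* cube,
                     ID3D11Texture2D* const faces[6], UINT cubeMipLevels)
    {
        for (UINT face = 0; face < 6; ++face)
        {
            UINT dstSubresource = D3D11CalcSubresource(0 /*mip*/, face /*array slice*/, cubeMipLevels);
            ctx->CopySubresourceRegion(cube, dstSubresource, 0, 0, 0,
                                       faces[face], 0 /*src subresource*/, nullptr /*whole face*/);
        }
    }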
  16. Hi guys, should I use MSAA surfaces (and corresponding depth buffers) when drawing data like positions, normals, and depth for use in effects like SSAO? And what about when drawing the SSAO itself? I'm doing forward rendering for the scene and using MSAA for that.
  17. I'm getting an odd problem with DX11. I kind of solved it, but I don't understand why it didn't work the first way. What I'm trying to do is create a bunch of meshes (index and vertex buffers) in one thread but render them in a second thread. I don't render the same meshes I'm currently creating: I build a whole set of new meshes, and when everything is ready, the build thread tells the render thread to swap to the new set. This worked most of the time, except once in a while one of the meshes would be corrupted. It was definitely the mesh generation or copy, and not the rendering, because a corrupted mesh would stick around until the next mesh update, then it would disappear. At first I thought it might be in my CPU-side mesh generation code. I build meshes in my own mesh format and then translate and copy straight to DX11 using ID3D11DeviceContext::Map. I am aware that the device context is not thread safe, so I guard it with a mutex to make sure I'm not trying to use it in two threads at the same time. Before I did this the program would simply crash; afterwards I would only get occasional mesh corruption. Finally, just to try something else, I put a mutex around the whole scene render code and used that same mutex in the other thread around the CPU-to-DX11 mesh copy section. This solved the problem. However, I don't understand why I should be forced to do this since I was protecting the device context before. Is there something I'm missing here? Should I even be calling DX11 from more than one thread? Supposedly it's thread safe except for the device context.
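For context, a hedged sketch of the pattern D3D11 itself provides for this split: the device (resource creation) is thread-safe, the immediate context is not, so the loader thread records its uploads into its own deferred context and the render thread replays the finished command list. Whether this cures the corruption above isn't certain, but it removes any sharing of the immediate context between threads.

    #include <d3d11.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // Loader thread: record upload work into a deferred context.
    // (In practice the deferred context would be created once per loader thread, not per call.)
    ComPtr<ID3D11CommandList> RecordUploads(ID3D11Device* device /*, mesh data ... */)
    {
        ComPtr<ID3D11DeviceContext> deferred;
        device->CreateDeferredContext(0, &deferred);

        // ... deferred->Map(...) / deferred->UpdateSubresource(...) for the new meshes goes here ...

        ComPtr<ID3D11CommandList> cmdList;
        deferred->FinishCommandList(FALSE, &cmdList); // FALSE: deferred context state is reset afterwards
        return cmdList;
    }

    // Render thread: replay the recorded work on the immediate context.
    void SubmitUploads(ID3D11DeviceContext* immediate, ID3D11CommandList* cmdList)
    {
        immediate->ExecuteCommandList(cmdList, FALSE); // FALSE: immediate context state is cleared afterwards
    }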
  18. I have a process that creates a D3D11 shared texture. According to https://docs.microsoft.com/en-us/window ... phics-apis, we can open in Direct3D9Ex shared textures previously created by non-DX9 APIs. The texture has the DXGI_FORMAT_B8G8R8A8_UNORM format. I'm trying to open it like this: D3DDevice->CreateTexture(width, height, 1, 0, D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &texture, &shared_handle) (usage=D3DUSAGE_RENDERTARGET makes no difference), but it says: "Direct3D9: (ERROR) :Opened and created resources don't match, unable to open the shared resource." Any thoughts?
  19. Some computer configurations have multiple GPUs, e.g. a gaming laptop with an Intel HD Graphics chip and a GeForce or Radeon chip. When enumerating the available adapters on a computer, the Intel chip is the first one in most cases. However, I want to use the best adapter for my game, so I wrote the code below to create my swap chain:

    protected void CreateDevice(Size clientSize, IntPtr outputHandle)
    {
        ProcessLogger.Instance.StartFunction(this, "CreateDevice");

        // Set swap chain flags, DXGI format and default refresh rate.
        _swapChainFlags = SharpDX.DXGI.SwapChainFlags.None;
        _dxgiFormat = SharpDX.DXGI.Format.R8G8B8A8_UNorm;
        SharpDX.DXGI.Rational refreshRate = new SharpDX.DXGI.Rational(60, 1);

        // Get proper video adapter and create device and swap chain.
        using (var factory = new SharpDX.DXGI.Factory1())
        {
            SharpDX.DXGI.Adapter adapter = GetAdapter(factory);
            if (adapter != null)
            {
                ProcessLogger.Instance.Write(String.Format("Selected adapter: {0}", adapter.Description.Description));

                // Get refresh rate.
                refreshRate = GetRefreshRate(adapter, _dxgiFormat, refreshRate);
                ProcessLogger.Instance.Write(String.Format("Selected refresh rate = {0}/{1} ({2})", refreshRate.Numerator, refreshRate.Denominator, refreshRate.Numerator / refreshRate.Denominator));

                // Create Device and SwapChain
                ProcessLogger.Instance.Write("Create device.");
                _device = new SharpDX.Direct3D11.Device(adapter, SharpDX.Direct3D11.DeviceCreationFlags.BgraSupport, new SharpDX.Direct3D.FeatureLevel[] { SharpDX.Direct3D.FeatureLevel.Level_10_1 });
                ProcessLogger.Instance.Write("Create swap chain.");
                _swapChain = new SharpDX.DXGI.SwapChain(factory, _device, GetSwapChainDescription(clientSize, outputHandle, refreshRate));
                ProcessLogger.Instance.Write("Store device context.");
                _deviceContext = _device.ImmediateContext;
            }
        }
        ProcessLogger.Instance.EndFunction(this, "CreateDevice");
    }

For this function to work properly, I have to select the proper adapter in GetAdapter. Here is what it looks like:

    private SharpDX.DXGI.Adapter GetAdapter(SharpDX.DXGI.Factory1 factory)
    {
        List<SharpDX.DXGI.Adapter> adapters = new List<SharpDX.DXGI.Adapter>();
        for (int i = 0; i < factory.GetAdapterCount(); i++)
        {
            SharpDX.DXGI.Adapter adapter = factory.GetAdapter(i);
            if (SharpDX.Direct3D11.Device.IsSupportedFeatureLevel(adapter, SharpDX.Direct3D.FeatureLevel.Level_10_1))
                adapters.Add(adapter);
        }
        try
        {
            foreach (var adapter in adapters)
                if (adapter.Description.Description != null && (adapter.Description.Description.Contains("GeForce") || adapter.Description.Description.Contains("Radeon")))
                {
                    return adapter;
                }
        }
        catch { }
        return adapters.First();
    }

So all I am doing is asking my Factory1 for a list of all adapters that support feature level 10.1 and searching for a "GeForce" or "Radeon" one; very simple, and I use that one. This works like a charm. HOWEVER, there is one big problem: when I use this code in the Release build, the game crashes when going to fullscreen using the code below.

    public void SetFullscreenState(bool isFullscreen)
    {
        if (isFullscreen != _swapChain.IsFullScreen)
            _swapChain.SetFullscreenState(isFullscreen, null);
    }

The error code is DXGI_ERROR_UNSUPPORTED, and after doing some research I found out that this problem only happens for the Release build but not for the Debug build. The Debug build works like a charm. It also only crashes if the selected adapter is not the first one in the list. So, if I use the first adapter (factory.GetAdapter(0)), it works!
If I change my computer settings so that my GeForce is used as the primary adapter for my game, it works. It only fails in the Release build when the selected adapter is not the first adapter in the list, and I can't figure out why... The problem is independent of which screen is used and of other running applications in a multi-windowed setup; I already tested that.
  20. Hello, I recently found out about the "#line" directive in HLSL. Because I handle #includes manually, the line numbers in shader compilation errors are incorrect, and dynamically adding these "#line" directives while loading the shader solves this problem, which saves me a lot of time as I know precisely where to look when I make an error. However, I noticed that when I enable this, I can no longer debug shaders using the Visual Studio Graphics Debugger. If I want to debug e.g. the vertex shader, it asks me for the source file (while the right source file is there in the background! See the image). Is there some kind of bug in Visual Studio, or is this just an annoying side effect of using "#line"? Does anyone have experience with this? Cheers!
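For reference only, a hedged sketch of the kind of injection being described: when expanding an #include manually, emitting a #line directive before and after the pasted text keeps compiler error line numbers pointing at the original files. Whether the Visual Studio Graphics Debugger then resolves those file names seems to depend on the paths it can locate; that part is an assumption, not documented behaviour.

    #include <string>

    // Expand one include while keeping error locations tied to the original files.
    // 'parentFile'/'parentLine' describe where the #include directive appeared.
    std::string ExpandInclude(const std::string& includedSource, const std::string& includedFile,
                              const std::string& parentFile, int parentLine)
    {
        std::string out;
        out += "#line 1 \"" + includedFile + "\"\n";   // following lines belong to the included file
        out += includedSource;
        out += "\n#line " + std::to_string(parentLine + 1) + " \"" + parentFile + "\"\n"; // back to the parent
        return out;
    }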
  21. Is it reasonable to use Direct2D for some small 2D games? I never did much Direct2D work; mostly I used it for displaying text and 2D GUI for a Direct3D engine, but I never tried writing a game in it. Is it better to use Direct2D and sprites, or would you prefer to go with D3D and 2D shaders? Or is D2D simply not meant for games, no matter how big or small?
  22. Hello, I have had regular matrix-based skinning on the GPU working for quite a while now, and I stumbled upon an implementation of dual quaternion skinning. I've had a go at implementing it in a shader, and after spending a lot of time making small changes to the formulas, I sort of got it working, but there seems to be an issue when blending bones. I found a pretty old topic on GameDev.net which, I think, describes my problem pretty well, but I haven't been able to find the cause. Like in that post, if the blend weight of a vertex is 1 there is no problem; once there is blending, I get artifacts. Just for the sake of focussing on the shader side of things first, I upload dual quaternions to the GPU that are converted from regular matrices (because I know those work). Below is an image comparison between matrix skinning (left) and dual quaternion skinning (right). As you can see, especially on the shoulders, there are some serious issues. It might be a silly typo, but I'm surprised some parts of the mesh look perfectly fine. Below are some snippets:

    //Blend bones
    float2x4 BlendBoneTransformsToDualQuaternion(float4 boneIndices, float4 boneWeights)
    {
        float2x4 dual = (float2x4)0;
        float4 dq0 = cSkinDualQuaternions[boneIndices.x][0];
        for(int i = 0; i < MAX_BONES_PER_VERTEX; ++i)
        {
            if(boneIndices[i] == -1)
            {
                break;
            }
            if(dot(dq0, cSkinDualQuaternions[boneIndices[i]][0]) < 0)
            {
                boneWeights[i] *= -1;
            }
            dual += boneWeights[i] * cSkinDualQuaternions[boneIndices[i]];
        }
        return dual / length(dual[0]);
    }

    //Used to transform the normal/tangent
    float3 QuaternionRotateVector(float3 v, float4 quatReal)
    {
        return v + 2.0f * cross(quatReal.xyz, quatReal.w * v + cross(quatReal.xyz, v));
    }

    //Used to transform the position
    float3 DualQuatTransformPoint(float3 p, float4 quatReal, float4 quatDual)
    {
        float3 t = 2 * (quatReal.w * quatDual.xyz - quatDual.w * quatReal.xyz + cross(quatDual.xyz, quatReal.xyz));
        return QuaternionRotateVector(p, quatReal) + t;
    }

I've been staring at this for quite a while now, so the solution might be obvious, yet I fail to see it. Help would be hugely appreciated. Cheers
  23. So I'm getting to the point in my engine design where I'm trying to implement some dynamic menus. It's going well, until I hit a recent conceptual stumbling block that I'm not sure how to get around. It has to do with drawing resizable, internally scrollable windows inside each monitor. Some background first: I'm creating a multi-monitor-capable game, so I needed to roll my own GUI. The window and menu system works for fixed-size windows. My menus are all made of many sprites, drawn with DirectXTK SpriteBatch, with text via SpriteFont; I load the images using WICTextureLoader. For the case where activated elements in a window exceed the size of the window, that's okay: they need to be visible, and there's only one active element at a time. The problem comes with hiding passive elements as they are scrolled out of their parent window, so they are no longer drawn outside of it. Also of note: I have a *single* application drawing to all of the monitors, not multiple applications drawing to multiple monitors, so solutions built around the singleton pattern are not workable (this, sadly, rules out most GUI frameworks). I've looked at using viewports, but I can't seem to get them to work properly when dealing with multi-monitor displays, especially when the monitors have different resolutions and are not aligned perfectly. I'm not even sure where to begin if I have overlapping viewports, i.e. overlapping windows. Should I keep struggling with viewports, or should I look into a way of dynamically clipping the images/text/sprites as they are edged out of the windows? If so, what is such a technique called, and where can I read up more on it? Or am I missing the forest for the trees, and there's an easy solution to my problem that I've overlooked?
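The clipping described here is usually done with scissor rectangles: a per-draw pixel rectangle outside of which the rasterizer discards everything, independent of viewports. A hedged sketch of how that could look with DirectXTK SpriteBatch; the rasterizer state is assumed to have been created with ScissorEnable = TRUE, and the names are illustrative:

    #include <d3d11.h>
    #include <SpriteBatch.h> // DirectXTK

    // Draw one window's sprites clipped to its client rectangle via a scissor rect.
    void DrawClippedWindow(ID3D11DeviceContext* ctx,
                           DirectX::SpriteBatch& batch,
                           ID3D11RasterizerState* scissorEnabledState,
                           const RECT& windowRectInPixels)
    {
        batch.Begin(DirectX::SpriteSortMode_Deferred,
                    nullptr, nullptr, nullptr,
                    scissorEnabledState,
                    [&]()
                    {
                        // Runs just before the batch issues its draw calls.
                        ctx->RSSetScissorRects(1, &windowRectInPixels);
                    });

        // ... batch.Draw(...) / spriteFont.DrawString(...) for this window's contents ...

        batch.End();
    }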
  24. Hey, I've come across an odd problem. I am using DirectX 11 and I've tested my results on two GPUs: a GeForce GTX 660M and a GTX 1060. The strange behaviour occurs, surprisingly, on the newer GPU, the GTX 1060. I am loading an HDR texture into DirectX and creating its shader resource view with the DXGI_FORMAT_R32G32B32_FLOAT format:

    D3D11_SUBRESOURCE_DATA texData;
    texData.pSysMem = data; //hdr data as a float array with rgb channels
    texData.SysMemPitch = width * (4 * 3); //size of texture row in bytes (4 bytes per each channel rgb)

    DXGI_FORMAT format = DXGI_FORMAT_R32G32B32_FLOAT;
    //the remaining (not set below) attributes have default DirectX values
    Texture2dConfigDX11 conf;
    conf.SetFormat(format);
    conf.SetWidth(width);
    conf.SetHeight(height);
    conf.SetBindFlags(D3D11_BIND_SHADER_RESOURCE);
    conf.SetCPUAccessFlags(0);
    conf.SetUsage(D3D11_USAGE_DEFAULT);

    D3D11_TEX2D_SRV srv;
    srv.MipLevels = 1;
    srv.MostDetailedMip = 0;
    ShaderResourceViewConfigDX11 srvConf;
    srvConf.SetFormat(format);
    srvConf.SetTexture2D(srv);

I'm sampling this texture using a linear sampler with D3D11_FILTER_MIN_MAG_MIP_LINEAR and addressing mode D3D11_TEXTURE_ADDRESS_CLAMP. This is how I sample the texture in the pixel shader:

    SamplerState linearSampler : register(s0);
    Texture2D tex;
    ...
    float4 psMain(in PS_INPUT input) : SV_TARGET
    {
        float3 color = tex.Sample(linearSampler, input.uv).rgb;
        return float4(color, 1);
    }

First of all, I'm not getting any errors during runtime in release, and my shader using this texture gives the correct result on both GPUs. In debug mode I'm also getting correct results on both GPUs, but I'm also getting the following DX error (in the output log in Visual Studio) when debugging the app, and only on the GTX 1060:

    D3D11 ERROR: ID3D11DeviceContext::DrawIndexed: The Shader Resource View in slot 0 of the Pixel Shader unit is using the Format (R32G32B32_FLOAT). This format does not support 'Sample', 'SampleLevel', 'SampleBias' or 'SampleGrad', at least one of which may being used on the Resource by the shader. The exception is if the corresponding Sampler object is configured for point filtering (in which case this error can be ignored). This also only applies if the shader actually uses the view (e.g. it is not skipped due to shader code branching). [ EXECUTION ERROR #371: DEVICE_DRAW_RESOURCE_FORMAT_SAMPLE_UNSUPPORTED]

Despite this error, the result of the shader is correct... This doesn't seem to make any sense. Is it possible that my graphics driver (I updated to the newest version) on the GTX 1060 doesn't support sampling R32G32B32 textures in the pixel shader? This sounds like pretty basic functionality to support... The R32G32B32A32 format works flawlessly in debug/release on both GPUs.
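For what it's worth, linear filtering of R32G32B32_FLOAT is an optional capability in D3D11, so differences between GPUs/drivers like the one above are allowed; a hedged sketch of checking it at runtime so a fallback can be chosen deliberately:

    #include <d3d11.h>

    // True if the device supports Sample (i.e. filtering) on 96-bit RGB float textures.
    bool CanSampleR32G32B32(ID3D11Device* device)
    {
        UINT support = 0;
        if (FAILED(device->CheckFormatSupport(DXGI_FORMAT_R32G32B32_FLOAT, &support)))
            return false;
        return (support & D3D11_FORMAT_SUPPORT_SHADER_SAMPLE) != 0;
    }

    // Usage sketch: if this returns false, either pad the data to DXGI_FORMAT_R32G32B32A32_FLOAT
    // (which the post reports works on both GPUs) or switch the sampler to point filtering.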
  25. Hello guys, I'm using SharpDX 4, and I wonder why the Matrix class uses row-major matrices, while HLSL packs matrices column_major by default. Can anyone tell me if there is a specific reason why the Matrix class uses exactly the opposite matrix order?
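Not an authoritative answer to the "why", but for context, a hedged illustration of how the mismatch is typically handled: either transpose on the CPU before writing the constant buffer, or declare the matrix row_major in HLSL. Shown here with DirectXMath, which like SharpDX stores matrices row-major on the CPU side; the struct and names are illustrative.

    #include <DirectXMath.h>
    using namespace DirectX;

    struct PerObjectConstants
    {
        XMFLOAT4X4 worldViewProj;
    };

    void FillConstants(PerObjectConstants& dst, FXMMATRIX world, CXMMATRIX viewProj)
    {
        XMMATRIX wvp = XMMatrixMultiply(world, viewProj);
        // HLSL packs cbuffer matrices column_major by default, so transpose before uploading...
        XMStoreFloat4x4(&dst.worldViewProj, XMMatrixTranspose(wvp));
    }

    // ...or declare the matrix row_major in HLSL and skip the transpose:
    //   cbuffer PerObject { row_major float4x4 worldViewProj; };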