Search the Community

Showing results for tags 'DX11'.

Found 1000 results

  1. Hi, sorry for my English. My computer specs are: Win 8.1, DirectX 11.2, GeForce GTX 750 Ti with the latest drivers. In my project I must use the "max" color blend mode via SDL_ComposeCustomBlendMode, which in SDL 2.0.9 is supported by the direct3d11 renderer only. Changing defines in SDL_config.h or SDL_config_windows.h (SDL_VIDEO_RENDER_D3D11 to 1 and SDL_VIDEO_RENDER_D3D to 0) doesn't help. SDL says my system supports the direct3d, opengl, opengles2 and software renderers. What should I do to activate the direct3d11 renderer so I can use blend mode max?
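     A minimal sketch of one thing worth trying, assuming the SDL build actually has the D3D11 backend compiled in: request the driver by name through SDL_HINT_RENDER_DRIVER before creating the renderer, then verify which driver was actually picked.

     #include <SDL.h>

     SDL_Renderer *CreateD3D11Renderer(SDL_Window *window)
     {
         // Prefer the Direct3D 11 render driver (only honored if compiled in).
         SDL_SetHint(SDL_HINT_RENDER_DRIVER, "direct3d11");

         SDL_Renderer *renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED);

         // Log which backend was actually selected.
         SDL_RendererInfo info;
         if (renderer && SDL_GetRendererInfo(renderer, &info) == 0)
             SDL_Log("Active renderer: %s", info.name);  // expect "direct3d11"
         return renderer;
     }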
  2. Hello, I'm doing tessellation, and while I have the positions set up correctly using quads, I'm at a loss as to how to generate smooth normals from them. I suppose this should be done in the domain shader, but what would that process look like? I am using a heightmap for tessellation, but I would rather generate the normals from the geometry than use a normal map, if possible. Cheers
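     A hedged sketch (hypothetical cbuffer names g_TexelSize / g_HeightScale): a domain shader cannot see neighboring tessellated vertices, so a common workaround is to derive the normal from the same heightmap used for displacement, via central differences. SampleLevel is required because implicit-gradient Sample is not available outside the pixel shader.

     cbuffer TerrainParams : register(b0)
     {
         float2 g_TexelSize;   // 1.0 / heightmap dimensions
         float  g_HeightScale; // world-space height per heightmap unit
     };

     Texture2D    g_HeightMap     : register(t0);
     SamplerState g_LinearSampler : register(s0);

     // Called from the domain shader once the interpolated UV is known.
     // Assumes the world-space spacing of one texel is folded into the constants.
     float3 ComputeHeightmapNormal(float2 uv)
     {
         float hL = g_HeightMap.SampleLevel(g_LinearSampler, uv - float2(g_TexelSize.x, 0), 0).r;
         float hR = g_HeightMap.SampleLevel(g_LinearSampler, uv + float2(g_TexelSize.x, 0), 0).r;
         float hD = g_HeightMap.SampleLevel(g_LinearSampler, uv - float2(0, g_TexelSize.y), 0).r;
         float hU = g_HeightMap.SampleLevel(g_LinearSampler, uv + float2(0, g_TexelSize.y), 0).r;

         // Normal of the height field y = h(x, z): proportional to (-dh/dx, 1, -dh/dz).
         return normalize(float3((hL - hR) * g_HeightScale,
                                 2.0 * g_TexelSize.x,
                                 (hD - hU) * g_HeightScale));
     }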
  3. Hello all, I have made a simple shadow map shader, but there's a small problem with my implementation: it draws shadow on polygons not facing the light when it should not, so I know something is missing. Since my knowledge of the available shader functions is limited, I cannot spot the problem. I put this under the DX11 HLSL tag, though GLSL answers and any hints or tips are appreciated and welcome ^_^y
     IMAGE:
     CODE:
     // Shadow color applying
     //------------------------------------------------------------------------------
     //*>> Pixel position in light space
     float4 m_LightingPos = mul(IN.WorldPos3D, __SM_LightViewProj);

     //*>> Shadow texture coordinates
     float2 m_ShadowTexCoord = 0.5 * m_LightingPos.xy / m_LightingPos.w + float2(0.5, 0.5);
     m_ShadowTexCoord.y = 1.0f - m_ShadowTexCoord.y;

     //*>> Shadow map depth
     float m_ShadowDepth = tex2D(ShadowMapSampler, m_ShadowTexCoord).r;

     //*>> Pixel depth
     float m_PixelDepth = (m_LightingPos.z / m_LightingPos.w) - 0.001f;

     //*>> Pixel depth in front of the shadow map depth, then apply shadow color
     if (m_PixelDepth > m_ShadowDepth)
     {
         m_ColorView *= float4(0.5, 0.5, 0.5, 0);
     }

     // Final color
     //------------------------------------------------------------------------------
     return m_ColorView;
  4. Well, we're back with a new entry. As usual we have made a lot of bug fixes. The primary new feature, though, is the separation of graphics and planet generation into different threads. Here's the somewhat familiar code, modified from our first entry....

     void CDLClient::InitTest2()
     {
         this->CreateConsole();
         printf("Starting Test2\n");
         fflush(stdout);

         // Create virtual heap
         m_pHeap = new(MDL_VHEAP_MAX, MDL_VHEAP_INIT, MDL_VHEAP_HASH_MAX) CDLVHeap();
         CDLVHeap *pHeap = m_pHeap.Heap();

         // Create the universe
         m_pUniverse = new(pHeap) CDLUniverseObject(pHeap);

         // Create the graphics interface
         CDLDXWorldInterface *pInterface = new(pHeap) CDLDXWorldInterface(this);

         // Camera control
         double fMinDist = 0.0;
         double fMaxDist = 3200000.0;
         double fSrtDist = 1600000.0;

         // World size
         double fRad = 400000.0;

         // Fractal function for world
         CDLValuatorRidgedMultiFractal *pNV = new(pHeap) CDLValuatorRidgedMultiFractal(pHeap, fRad, fRad/20, 2.0, 23423098);
         //CDLValuatorSimplex3D *pNV = new(pHeap) CDLValuatorSimplex3D(fRad, fRad/20, 2.0, 23423098);

         // Create world
         CDLSphereObjectView *pSO = new(pHeap) CDLSphereObjectView(pHeap, fRad, 1.0, 0.25, 6, pNV);
         pSO->SetGraphicsInterface(pInterface);

         // Create an astral reference from the universe to the world and attach it to the universe
         CDLReferenceAstral *pRef = new(pHeap) CDLReferenceAstral(m_pUniverse(), pSO);
         m_pUniverse->PushReference(pRef);

         // Create the camera
         m_pCamera = new(pHeap) CDLCameraObject(pHeap, FDL_PI/4.0, this->GetWidth(), this->GetHeight());
         m_pCamera->SetGraphicsInterface(pInterface);

         // Create a world-tracking reference from the universe to the camera
         m_pBoom = new(pHeap) CDLReferenceFollow(m_pUniverse(), m_pCamera(), pSO, fSrtDist, fMinDist, fMaxDist);
         m_pUniverse->PushReference(m_pBoom());

         // Set zoom speed in the client
         this->SetZoom(fMinDist, fMaxDist, 3.0);

         // Create the god object (build point for LOD calculations)
         m_pGod = new(pHeap) CDLGodObject(pHeap);

         // Create a reference for the god object and attach it to the camera
         CDLReference *pGodRef = new(pHeap) CDLReference(m_pUniverse(), m_pGod());
         m_pCamera->PushReference(pGodRef);

         // Set the main camera and god object for the universe
         m_pUniverse->SetMainCamera(m_pCamera());
         m_pUniverse->SetMainGod(m_pGod());

         // Load and compile the vertex shader
         CDLUString clVShaderName = L"VS_DLDX_Test.hlsl";
         m_pVertexShader = new(pHeap) CDLDXShaderVertexPC(this, clVShaderName, false, 0, 1);

         // Attach the camera to the vertex shader
         m_pVertexShader->UseConstantBuffer(0, static_cast<CDLDXConstantBuffer *>(m_pCamera->GetViewData()));

         // Create the pixel shader
         CDLUString clPShaderName = L"PS_DLDX_Test.hlsl";
         m_pPixelShader = new(pHeap) CDLDXShaderPixelGeneral(this, clPShaderName, false, 0, 0);

         // Create a rasterizer state and set to wireframe
         m_pRasterizeState = new(pHeap) CDLDXRasterizerState(this);
         m_pRasterizeState->ModifyState().FillMode = D3D11_FILL_WIREFRAME;

         // Initialize the universe
         m_pUniverse()->InitFromMainCamera();

         // Run the universe!
         m_pUniverse->Run();
     }

     Right at the end we call m_pUniverse->Run(). This actually starts the build thread, which continuously looks at the position of the god object we attached to the camera above, and builds the planet with the appropriate LOD based on its proximity to the various terrain chunks......... Let's not bore you with more text or boring pictures. Instead we will bore you with a boring video: As you can see, it generates terrain reasonably fast. But there is still a lot more we can do.
     First off, we should eliminate the backside of the planet. Note that as we descend towards the planet, the backside becomes bigger and bigger as the horizon comes closer and closer to the camera; this is one advantage of a spherical world. Second, we can add a lot more threads. In general we try to cache as much data as possible. What we can still do is pre-generate our octree one level down using a fractal function pipeline. Most of the CPU time is spent in the fractal data generation, so it makes sense to add more threading there; fortunately this is one of the easier places to use threading (see the sketch below). For our next entry we hope to go all the way down to the surface and include some nominal shading.
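     A hypothetical sketch (not the blog's actual code) of the kind of loop Run() is described as starting: a worker thread that polls the god object's position, published by the render thread, and rebuilds terrain chunks at the LOD that position implies.

     #include <atomic>
     #include <chrono>
     #include <thread>

     struct Vec3 { double x, y, z; };

     class PlanetBuilder {
     public:
         void Start() { m_thread = std::thread(&PlanetBuilder::BuildLoop, this); }
         void Stop()  { m_quit = true; m_thread.join(); }

         // Called from the render/camera thread each frame.
         void SetBuildPoint(const Vec3 &p) { m_x = p.x; m_y = p.y; m_z = p.z; }

     private:
         void BuildLoop() {
             while (!m_quit) {
                 Vec3 eye{m_x, m_y, m_z};
                 RebuildChunksAround(eye);  // split/merge chunks by distance to eye
                 std::this_thread::sleep_for(std::chrono::milliseconds(1));
             }
         }
         void RebuildChunksAround(const Vec3 &) { /* LOD selection + meshing */ }

         std::atomic<bool>   m_quit{false};
         std::atomic<double> m_x{0}, m_y{0}, m_z{0};
         std::thread         m_thread;
     };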
  5. How do I calculate the angle between two points, as seen from a third point, with the help of the D3DXMath library?
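     A minimal sketch using the (legacy) D3DX math helpers: the angle at point C between points A and B is the angle between the normalized direction vectors from C to A and from C to B.

     #include <d3dx9math.h>
     #include <math.h>

     FLOAT AngleAtPoint(const D3DXVECTOR3 &a, const D3DXVECTOR3 &b, const D3DXVECTOR3 &c)
     {
         D3DXVECTOR3 ca = a - c;  // direction from C to A
         D3DXVECTOR3 cb = b - c;  // direction from C to B
         D3DXVec3Normalize(&ca, &ca);
         D3DXVec3Normalize(&cb, &cb);

         FLOAT cosAngle = D3DXVec3Dot(&ca, &cb);
         // Clamp against floating-point drift outside [-1, 1] before acos.
         if (cosAngle >  1.0f) cosAngle =  1.0f;
         if (cosAngle < -1.0f) cosAngle = -1.0f;
         return acosf(cosAngle);  // angle in radians
     }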
  6. I was wondering if anyone knows of any tools to help design procedural textures. More specifically, I need something that will output the actual procedure rather than just the texture. It could output HLSL or some pseudocode that I can port to HLSL. The important thing is that I need the algorithm, not just the texture, so I can put it into a pixel shader myself. I posted the question on the Allegorithmic forum, but someone answered that while Substance Designer uses procedures internally, it doesn't support output of code, so I guess that one is out.
  7. Hello! I would like to introduce Diligent Engine, a project that I've been working on recently. Diligent Engine is a light-weight cross-platform abstraction layer between the application and the platform-specific graphics API. Its main goal is to take advantage of next-generation APIs such as Direct3D12 and Vulkan, while at the same time providing support for older platforms via Direct3D11, OpenGL and OpenGLES. Diligent Engine exposes a common front-end for all supported platforms and provides interoperability with the underlying native API. A shader source code converter allows shaders authored in HLSL to be translated to GLSL and used on all platforms. Diligent Engine supports integration with Unity and is designed to be used as a graphics subsystem in a standalone game engine, a Unity native plugin or any other 3D application. It is distributed under the Apache 2.0 license and is free to use. Full source code is available for download on GitHub.

     Features:
       • True cross-platform
         • Exact same client code for all supported platforms and rendering backends
         • No #if defined(_WIN32) ... #elif defined(LINUX) ... #elif defined(ANDROID) ...
         • No #if defined(D3D11) ... #elif defined(D3D12) ... #elif defined(OPENGL) ...
         • Exact same HLSL shaders run on all platforms and all backends
       • Modular design
         • Components are clearly separated logically and physically and can be used as needed
         • Only take what you need for your project (don't want to keep samples and tutorials in your codebase? Simply remove the Samples submodule. Only need core functionality? Use only the Core submodule.)
         • No 15000-lines-of-code files
       • Clear object-based interface
       • No global states
       • Key graphics features:
         • Automatic shader resource binding designed to leverage next-generation rendering APIs
         • Multithreaded command buffer generation
         • 50,000 draw calls at 300 fps with the D3D12 backend
         • Descriptor, memory and resource state management
       • Modern C++ features to make code fast and reliable

     The following platforms and low-level APIs are currently supported:
       • Windows Desktop: Direct3D11, Direct3D12, OpenGL
       • Universal Windows: Direct3D11, Direct3D12
       • Linux: OpenGL
       • Android: OpenGLES
       • MacOS: OpenGL
       • iOS: OpenGLES

     API Basics

     Initialization

     The engine can perform initialization of the API or attach to an already existing D3D11/D3D12 device or OpenGL/GLES context. For instance, the following code shows how the engine can be initialized in D3D12 mode:

     #include "RenderDeviceFactoryD3D12.h"
     using namespace Diligent;

     // ...
     GetEngineFactoryD3D12Type GetEngineFactoryD3D12 = nullptr;
     // Load the dll and import the GetEngineFactoryD3D12() function
     LoadGraphicsEngineD3D12(GetEngineFactoryD3D12);
     auto *pFactoryD3D12 = GetEngineFactoryD3D12();

     EngineD3D12Attribs EngD3D12Attribs;
     EngD3D12Attribs.CPUDescriptorHeapAllocationSize[0] = 1024;
     EngD3D12Attribs.CPUDescriptorHeapAllocationSize[1] = 32;
     EngD3D12Attribs.CPUDescriptorHeapAllocationSize[2] = 16;
     EngD3D12Attribs.CPUDescriptorHeapAllocationSize[3] = 16;
     EngD3D12Attribs.NumCommandsToFlushCmdList = 64;

     RefCntAutoPtr<IRenderDevice> pRenderDevice;
     RefCntAutoPtr<IDeviceContext> pImmediateContext;
     SwapChainDesc SwapChainDesc;
     RefCntAutoPtr<ISwapChain> pSwapChain;
     pFactoryD3D12->CreateDeviceAndContextsD3D12(EngD3D12Attribs, &pRenderDevice, &pImmediateContext, 0);
     pFactoryD3D12->CreateSwapChainD3D12(pRenderDevice, pImmediateContext, SwapChainDesc, hWnd, &pSwapChain);

     Creating Resources

     Device resources are created by the render device. The two main resource types are buffers, which represent linear memory, and textures, which use memory layouts optimized for fast filtering. To create a buffer, you need to populate the BufferDesc structure and call IRenderDevice::CreateBuffer(). The following code creates a uniform (constant) buffer:

     BufferDesc BuffDesc;
     BuffDesc.Name = "Uniform buffer";
     BuffDesc.BindFlags = BIND_UNIFORM_BUFFER;
     BuffDesc.Usage = USAGE_DYNAMIC;
     BuffDesc.uiSizeInBytes = sizeof(ShaderConstants);
     BuffDesc.CPUAccessFlags = CPU_ACCESS_WRITE;
     m_pDevice->CreateBuffer(BuffDesc, BufferData(), &m_pConstantBuffer);

     Similarly, to create a texture, populate the TextureDesc structure and call IRenderDevice::CreateTexture(), as in the following example:

     TextureDesc TexDesc;
     TexDesc.Name = "Sample 2D Texture";
     TexDesc.Type = TEXTURE_TYPE_2D;
     TexDesc.Width = 1024;
     TexDesc.Height = 1024;
     TexDesc.Format = TEX_FORMAT_RGBA8_UNORM;
     TexDesc.Usage = USAGE_DEFAULT;
     TexDesc.BindFlags = BIND_SHADER_RESOURCE | BIND_RENDER_TARGET | BIND_UNORDERED_ACCESS;
     m_pRenderDevice->CreateTexture(TexDesc, TextureData(), &m_pTestTex);

     Initializing Pipeline State

     Diligent Engine follows the Direct3D12 style of configuring the graphics/compute pipeline. One big Pipeline State Object (PSO) encompasses all required states (all shader stages, input layout description, depth-stencil, rasterizer and blend state descriptions, etc.).

     Creating Shaders

     To create a shader, populate the ShaderCreationAttribs structure. An important member is ShaderCreationAttribs::SourceLanguage. The following are valid values for this member:
       • SHADER_SOURCE_LANGUAGE_DEFAULT - The shader source format matches the underlying graphics API: HLSL for D3D11 or D3D12 mode, and GLSL for OpenGL and OpenGLES modes.
       • SHADER_SOURCE_LANGUAGE_HLSL - The shader source is in HLSL. For OpenGL and OpenGLES modes, the source code will be converted to GLSL. See the shader converter for details.
       • SHADER_SOURCE_LANGUAGE_GLSL - The shader source is in GLSL. There is currently no GLSL-to-HLSL converter.

     To allow grouping of resources based on the expected frequency of change, Diligent Engine introduces a classification of shader variables:
       • Static variables (SHADER_VARIABLE_TYPE_STATIC) are variables that are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera attributes or global light attributes constant buffers.
       • Mutable variables (SHADER_VARIABLE_TYPE_MUTABLE) define resources that are expected to change at per-material frequency. Examples may include diffuse textures, normal maps, etc.
       • Dynamic variables (SHADER_VARIABLE_TYPE_DYNAMIC) are expected to change frequently and randomly.

     This post describes the resource binding model in Diligent Engine. The following is an example of shader initialization:

     ShaderCreationAttribs Attrs;
     Attrs.Desc.Name = "MyPixelShader";
     Attrs.FilePath = "MyShaderFile.fx";
     Attrs.SearchDirectories = "shaders;shaders\\inc;";
     Attrs.EntryPoint = "MyPixelShader";
     Attrs.Desc.ShaderType = SHADER_TYPE_PIXEL;
     Attrs.SourceLanguage = SHADER_SOURCE_LANGUAGE_HLSL;

     BasicShaderSourceStreamFactory BasicSSSFactory(Attrs.SearchDirectories);
     Attrs.pShaderSourceStreamFactory = &BasicSSSFactory;

     ShaderVariableDesc ShaderVars[] =
     {
         {"g_StaticTexture",  SHADER_VARIABLE_TYPE_STATIC},
         {"g_MutableTexture", SHADER_VARIABLE_TYPE_MUTABLE},
         {"g_DynamicTexture", SHADER_VARIABLE_TYPE_DYNAMIC}
     };
     Attrs.Desc.VariableDesc = ShaderVars;
     Attrs.Desc.NumVariables = _countof(ShaderVars);
     Attrs.Desc.DefaultVariableType = SHADER_VARIABLE_TYPE_STATIC;

     StaticSamplerDesc StaticSampler;
     StaticSampler.Desc.MinFilter = FILTER_TYPE_LINEAR;
     StaticSampler.Desc.MagFilter = FILTER_TYPE_LINEAR;
     StaticSampler.Desc.MipFilter = FILTER_TYPE_LINEAR;
     StaticSampler.TextureName = "g_MutableTexture";
     Attrs.Desc.NumStaticSamplers = 1;
     Attrs.Desc.StaticSamplers = &StaticSampler;

     ShaderMacroHelper Macros;
     Macros.AddShaderMacro("USE_SHADOWS", 1);
     Macros.AddShaderMacro("NUM_SHADOW_SAMPLES", 4);
     Macros.Finalize();
     Attrs.Macros = Macros;

     RefCntAutoPtr<IShader> pShader;
     m_pDevice->CreateShader(Attrs, &pShader);

     Creating the Pipeline State Object

     To create a pipeline state object, define an instance of the PipelineStateDesc structure. The structure defines the pipeline specifics such as whether the pipeline is a compute pipeline, the number and format of render targets, as well as the depth-stencil format:

     // This is a graphics pipeline
     PSODesc.IsComputePipeline = false;
     PSODesc.GraphicsPipeline.NumRenderTargets = 1;
     PSODesc.GraphicsPipeline.RTVFormats[0] = TEX_FORMAT_RGBA8_UNORM_SRGB;
     PSODesc.GraphicsPipeline.DSVFormat = TEX_FORMAT_D32_FLOAT;

     The structure also defines depth-stencil, rasterizer, blend state, input layout and other parameters. For instance, the rasterizer state can be defined as in the code snippet below:

     // Init rasterizer state
     RasterizerStateDesc &RasterizerDesc = PSODesc.GraphicsPipeline.RasterizerDesc;
     RasterizerDesc.FillMode = FILL_MODE_SOLID;
     RasterizerDesc.CullMode = CULL_MODE_NONE;
     RasterizerDesc.FrontCounterClockwise = True;
     RasterizerDesc.ScissorEnable = True;
     //RSDesc.MultisampleEnable = false; // do not allow msaa (fonts would be degraded)
     RasterizerDesc.AntialiasedLineEnable = False;

     When all fields are populated, call IRenderDevice::CreatePipelineState() to create the PSO:

     m_pDev->CreatePipelineState(PSODesc, &m_pPSO);

     Binding Shader Resources

     Shader resource binding in Diligent Engine is based on grouping variables into 3 different groups (static, mutable and dynamic). Static variables are variables that are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera attributes or global light attributes constant buffers. They are bound directly to the shader object:

     PixelShader->GetShaderVariable("g_tex2DShadowMap")->Set(pShadowMapSRV);

     Mutable and dynamic variables are bound via a new object called a Shader Resource Binding (SRB), which is created by the pipeline state:

     m_pPSO->CreateShaderResourceBinding(&m_pSRB);

     Dynamic and mutable resources are then bound through the SRB object:

     m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "tex2DDiffuse")->Set(pDiffuseTexSRV);
     m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "cbRandomAttribs")->Set(pRandomAttrsCB);

     The difference between mutable and dynamic resources is that mutable ones can only be set once per instance of a shader resource binding, while dynamic resources can be set multiple times. It is important to set the variable type properly, as this may affect performance: static variables are generally the most efficient, followed by mutable; dynamic variables are the most expensive from a performance point of view. This post explains shader resource binding in more detail.

     Setting the Pipeline State and Invoking a Draw Command

     Before any draw command can be invoked, all required vertex and index buffers, as well as the pipeline state, should be bound to the device context:

     // Clear render target
     const float zero[4] = {0, 0, 0, 0};
     m_pContext->ClearRenderTarget(nullptr, zero);

     // Set vertex and index buffers
     IBuffer *buffer[] = {m_pVertexBuffer};
     Uint32 offsets[] = {0};
     Uint32 strides[] = {sizeof(MyVertex)};
     m_pContext->SetVertexBuffers(0, 1, buffer, strides, offsets, SET_VERTEX_BUFFERS_FLAG_RESET);
     m_pContext->SetIndexBuffer(m_pIndexBuffer, 0);
     m_pContext->SetPipelineState(m_pPSO);

     Also, all shader resources must be committed to the device context:

     m_pContext->CommitShaderResources(m_pSRB, COMMIT_SHADER_RESOURCES_FLAG_TRANSITION_RESOURCES);

     When all required states and resources are bound, IDeviceContext::Draw() can be used to execute a draw command, or IDeviceContext::DispatchCompute() to execute a compute command. Note that for a draw command a graphics pipeline must be bound, and for a dispatch command a compute pipeline must be bound. Draw() takes a DrawAttribs structure as an argument. The structure members define all attributes required to perform the command (primitive topology, number of vertices or indices, whether the draw call is indexed, instanced or indirect, etc.). For example:

     DrawAttribs attrs;
     attrs.IsIndexed = true;
     attrs.IndexType = VT_UINT16;
     attrs.NumIndices = 36;
     attrs.Topology = PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;
     pContext->Draw(attrs);

     Tutorials and Samples

     The GitHub repository contains a number of tutorials and sample applications that demonstrate the API usage.
       • Tutorial 01 - Hello Triangle: shows how to render a simple triangle using the Diligent Engine API.
       • Tutorial 02 - Cube: demonstrates how to render an actual 3D object, a cube. It shows how to load shaders from files and create and use vertex, index and uniform buffers.
       • Tutorial 03 - Texturing: demonstrates how to apply a texture to a 3D object. It shows how to load a texture from a file, create a shader resource binding object and sample a texture in the shader.
       • Tutorial 04 - Instancing: demonstrates how to use instancing to render multiple copies of one object using a unique transformation matrix for every copy.
       • Tutorial 05 - Texture Array: demonstrates how to combine instancing with texture arrays to use a unique texture for every instance.
       • Tutorial 06 - Multithreading: shows how to generate command lists in parallel from multiple threads.
       • Tutorial 07 - Geometry Shader: shows how to use a geometry shader to render a smooth wireframe.
       • Tutorial 08 - Tessellation: shows how to use hardware tessellation to implement a simple adaptive terrain rendering algorithm.
       • Tutorial 09 - Quads: shows how to render multiple 2D quads, frequently switching textures and blend modes.

     The AntTweakBar sample demonstrates how to use the AntTweakBar library to create a simple user interface. The Atmospheric Scattering sample is a more advanced example: it demonstrates how Diligent Engine can be used to implement various rendering tasks, such as loading textures from files, using complex shaders, rendering to textures, and using compute shaders and unordered access views. The repository also includes an Asteroids performance benchmark based on the demo developed by Intel. It renders 50,000 unique textured asteroids and lets you compare the performance of the D3D11 and D3D12 implementations. Every asteroid is a combination of one of 1000 unique meshes and one of 10 unique textures.

     Integration with Unity

     Diligent Engine supports integration with Unity through the Unity low-level native plugin interface. The engine relies on Native API Interoperability to attach to the graphics API initialized by Unity. After the Diligent Engine device and context are created, they can be used as usual to create resources and issue rendering commands. The GhostCubePlugin sample shows how Diligent Engine can be used to render a ghost cube that is only visible as a reflection in a mirror.
  8. Hi, I am doing a hobby project where I am trying to make a game work with Oculus. I do not have the game's source code, but I believe it supplies the OpenVR API with a DX10 texture (the game itself is DX9; I believe internally they convert the DX9 texture to DX10 for submission to SteamVR/OpenVR). I originally tried to do everything in DX10 on my side, but I don't think Oculus supports DX10. So now I'd like to try converting the DX10 texture to DX11 and using that in Oculus' Texture Swap Chain. I've got a couple of questions:
     * Could someone suggest a way to convert a DX10 texture to DX11? I am going to try the techniques described at https://docs.microsoft.com/en-us/windows/desktop/direct3darticles/surface-sharing-between-windows-graphics-apis but I am not a DX guru; in fact I am not a graphics programmer, and the last time I touched D3D was 15 years ago. If someone could provide me with a simpler approach, that'd be very helpful. Or, at least, could someone clarify whether the article I referenced is indeed what I need to do?
     * Given a texture pointer, how can I figure out if it is indeed a DX10 texture? I was able to get its description, and it seems to be filled reasonably:

     ID3D10Texture2D *src = (ID3D10Texture2D*)texture->handle;
     D3D10_TEXTURE2D_DESC srcDesc;
     src->GetDesc(&srcDesc);

     But how could I, for example, tell if it is a DX10 or a DX10.1 texture?
     * It looks like I will have to instantiate a DX11 device myself. Is there any harm in having multiple D3D11 devices instantiated (per swap chain), or do I need to share a single device? Thanks for your help.
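     A hedged sketch of the surface-sharing route from that article (assuming the source process created the texture with the shared-resource flag): query the DX10 texture for its DXGI interface, fetch the shared handle, and open the same surface on the D3D11 device. No pixel data is copied.

     #include <d3d10.h>
     #include <d3d11.h>
     #include <dxgi.h>

     HRESULT OpenDX10TextureOnD3D11(ID3D10Texture2D *src, ID3D11Device *dev11,
                                    ID3D11Texture2D **out)
     {
         // Every D3D10/11 resource also implements IDXGIResource.
         IDXGIResource *dxgiRes = nullptr;
         HRESULT hr = src->QueryInterface(__uuidof(IDXGIResource), (void **)&dxgiRes);
         if (FAILED(hr)) return hr;

         HANDLE shared = nullptr;
         hr = dxgiRes->GetSharedHandle(&shared);  // valid only for shared resources
         dxgiRes->Release();
         if (FAILED(hr) || !shared) return FAILED(hr) ? hr : E_FAIL;

         // Open the same GPU surface as an ID3D11Texture2D on the other device.
         return dev11->OpenSharedResource(shared, __uuidof(ID3D11Texture2D), (void **)out);
     }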
  9. Hello, I want to upgrade my program from D3D9 to D3D11, since I need more power from the newer API. The unfortunate thing is that D3D11 no longer supports the ID3DXMesh interface, and creating one requires an FVF and a D3D9 device, which of course I don't have when I start the application as a D3D11 application. I don't have the FVF handy either. However, I can't live without the ID3DXMesh interface, because my original program is totally driven by it. How do I get it to work again? Thanks, Jack
  10. I call CopyResource to get some data from the GPU to the CPU. If I try to Map immediately, I obviously suffer a performance hit, so I keep a ring buffer of staging resources and call Map delayed by, say, 3-4 frames. This works way better. Now my question is: how do I know after how many frames I can safely Map without incurring an additional performance hit? I have noticed, for instance, that each extra frame of delay gives linearly better performance, but after about 5-6 frames the performance stays the same no matter how much longer I wait. Is there a way to "query" DX11 to know when it's safe to Map a resource?
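     A minimal sketch (hypothetical variable wiring) of asking D3D11 directly with an event query instead of guessing a fixed frame delay: issue the query right after the copy, then poll it before mapping.

     // At startup:
     D3D11_QUERY_DESC qd = {};
     qd.Query = D3D11_QUERY_EVENT;
     ID3D11Query *pQuery = nullptr;
     device->CreateQuery(&qd, &pQuery);

     // When copying:
     context->CopyResource(pStagingTex, pGpuTex);
     context->End(pQuery);  // marks the point right after the copy

     // On subsequent frames: S_OK means everything before End() has completed.
     BOOL done = FALSE;
     if (context->GetData(pQuery, &done, sizeof(done),
                          D3D11_ASYNC_GETDATA_DONOTFLUSH) == S_OK && done)
     {
         D3D11_MAPPED_SUBRESOURCE mapped;
         if (SUCCEEDED(context->Map(pStagingTex, 0, D3D11_MAP_READ, 0, &mapped)))
         {
             // ... read mapped.pData, then ...
             context->Unmap(pStagingTex, 0);
         }
     }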
  11. Hi, I am trying to initialize a SkyBox/cubemap in an Oculus app. Oculus's sample shows how to initialize the texture swap chain texture. The difference in my use case is that I do not initialize from .dds bytes read from disk; I already have ID3D11Texture2D objects. I see samples online for getting the texture bytes, but that involves CPU copying of memory, and I wonder if it can be avoided. Here's roughly what I am doing:

     int numFaces = 6;
     for (int i = 0; i < numFaces; ++i)
     {
         ID3D11Texture2D *faceSrc = textures->handle;
         ++textures;
         context->UpdateSubresource(tex, i, nullptr, (const void*)faceSrc,
                                    srcDesc.Width * 4, srcDesc.Width * srcDesc.Height * 4);
     }

     However, that crashes with an access violation in the Nvidia driver. Any suggestions? Thanks!
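     A hedged guess at the crash: UpdateSubresource expects a pointer to CPU memory, but the loop above passes an ID3D11Texture2D* as the data pointer. If the faces already live on the GPU, CopySubresourceRegion copies texture-to-texture with no CPU round trip (sketch; assumes matching formats and sizes, with mipLevels taken from the destination's description):

     for (int i = 0; i < numFaces; ++i)
     {
         ID3D11Texture2D *faceSrc = textures[i].handle;
         context->CopySubresourceRegion(
             tex,                                    // dest: cubemap texture
             D3D11CalcSubresource(0, i, mipLevels),  // dest subresource: mip 0, array face i
             0, 0, 0,                                // dest x, y, z
             faceSrc,                                // source texture
             0,                                      // source subresource
             nullptr);                               // nullptr = copy the whole face
     }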
  12. Hi guys, should I use MSAA surfaces (and corresponding depth buffers) when drawing data like positions, normals, and depth for use in effects like SSAO? And what about when drawing the SSAO itself? I'm doing forward rendering for the scene and using MSAA for that.
  13. Gnollrunner

    Mountain Ranges

     For this entry we implemented the ubiquitous ridged multifractal function. It's not so interesting in and of itself, but it does highlight a few features that were included in our voxel engine. First, as we mentioned, being a voxel engine it supports full 3D geometry (caves, overhangs and so forth) and not just height-maps. However, if we look at a typical world these features are the exception rather than the rule, so it makes sense to optimize the height-map portion of our terrain functions. This is especially true since our voxels are vertically aligned, which means there will be many places where the same height calculation is repeated. Even if we look at a single voxel, nearly the same calculation is used for a lower corner and its corresponding upper corner, the only difference being the subtraction from the voxel vertex position. ......

     Enter the unit sphere! In our last entry we talked about explicit voxels, with edges, faces and vertexes. However, all edges and faces are not created equal. Horizontal faces (in our case the triangular faces) and horizontal edges contain a special pointer that references their corresponding parts in a unit sphere. The unit sphere can be thought of as residing in the center of each planet. Like our world octree, it is formed from a subdivided icosahedron, only it is not extruded and is organized into a quadtree instead of an octree, being more 2D in nature. Vertexes in our unit sphere can be used to cache height-map function values to avoid repeated calculations. We also use our unit sphere to help with the horizontal part of our voxel subdivision operation: by referencing the unit sphere we only have to multiply a unit sphere vertex by a height value to generate voxel vertex coordinates. Finally, our unit sphere is also used to provide coordinates during the ghost-walking process we talked about in our first entry. Without it, ghost-walking would be more computationally expensive, as it would have to calculate spherical coordinates on each iteration instead of just calculating heights, which are quite simple to calculate since they are all generated by simply averaging two other heights.

     Ownership of unit sphere faces is a bit complex. Ostensibly they are owned by all voxel faces that reference them (and therefore add to their reference counter). However, this presents a bit of a problem, as they are also used in ghost-walking, which happens every LOD/re-chunking iteration, and in fact they may or may not end up being referenced by voxel faces, depending on whether mesh geometry is found. Even if no geometry is found, we may want to keep them for the next ghost-walk search. To solve this problem, we implemented undead objects. Unit sphere faces can become undead, and can even be created that way if they are built by the ghost-walker. When they are undead they are kept in a special list which keeps them pseudo-alive. They also have an undead-life value associated with them. When they are touched by the ghost-walker that value is renewed; however, if they go untouched for a few iterations, they become truly dead and are destroyed (see the sketch below).

     Picture time again..... So here is our ridged multifractal in wireframe. We'll flip it around to show our level transition........ Here's a place that needs a bit of work. The chunk level transitions are correct, but they are probably a bit more complex than they need to be. We use a very general voxel tessellation algorithm, since we have to handle various combinations of vertical and horizontal transitions.
     We will probably optimize this later, especially for the common cases, but for now it serves its purpose. Next up we are going to try to add threads. We plan to use a separate thread (or threads) for the LOD/re-chunk operations, and another one for the graphics.
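     A hypothetical sketch of the undead-object idea described above (not the engine's actual code): faces that lose their references are parked in a list with a lifetime counter; a touch from the ghost-walker renews the counter, and entries that go untouched for a few iterations are finally destroyed.

     #include <list>
     #include <memory>

     struct UnitSphereFace {
         int undeadLife = 0;  // iterations left before true death
         // ... cached heights, links to quadtree neighbors ...
     };

     class UndeadList {
     public:
         // Park a face that just lost its last voxel-face reference.
         void MakeUndead(std::shared_ptr<UnitSphereFace> f) {
             f->undeadLife = kInitialLife;
             m_undead.push_back(std::move(f));
         }
         // Called by the ghost-walker whenever it touches a face.
         static void Touch(UnitSphereFace &f) { f.undeadLife = kInitialLife; }

         // Called once per LOD/re-chunking iteration.
         void Age() {
             for (auto it = m_undead.begin(); it != m_undead.end();) {
                 if (--(*it)->undeadLife <= 0)
                     it = m_undead.erase(it);  // truly dead: destroyed
                 else
                     ++it;
             }
         }
     private:
         static constexpr int kInitialLife = 3;  // "a few iterations"
         std::list<std::shared_ptr<UnitSphereFace>> m_undead;
     };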
  14. Hello everyone! I want to remake a Direct3D 11 C++ application for Android. At the moment I'm not familiar with any engines or libraries, only with pure Direct3D and OpenGL (including OpenGL ES), but I'm ready to learn one. Which library/engine should I choose? Currently I'm considering LibGDX for this purpose, but I've heard it's not very suitable for 3D. I was also considering OpenGL ES (with Java), but I think it would be tricky to improve the game that way (I'm planning to use an animated character and particles in the game). Performance is one of the main requirements for the game. I would also like to be able to compile the code for iOS, or easily port it to that platform. Thanks in advance!
  15. I'm getting an odd problem with DX11. I kind of solved it, but I don't understand why it didn't work the first way. What I'm trying to do is create a bunch of meshes (index and vertex buffers) in one thread but render them in a second thread. I don't render the same meshes I'm currently creating: I build a whole set of new meshes, and when everything is ready the build thread tells the render thread to swap to the new set. This worked most of the time, except that once in a while one of the meshes would be corrupted. It was definitely the mesh generation or copy, and not the render, because a corrupted mesh would stick around until the next mesh update, then disappear. At first I thought it might be in my CPU-side mesh generation code. I build meshes in my own mesh format and then translate and copy straight to DX11 using ID3D11DeviceContext::Map. I am aware that the device context is not thread safe, so I guard it with a mutex to make sure I'm not trying to use it in two threads at the same time. Before I did this the program would simply crash; afterwards I would only get occasional mesh corruption. Finally, just to try something else, I put a mutex around the whole scene render code and then used that same mutex in the other thread around the CPU-to-DX11 mesh copy section. This solved the problem. However, I don't understand why I should be forced to do this, since I was protecting the device context before. Is there something I'm missing here? Should I even be calling DX11 from more than one thread? Supposedly it's thread safe except for the device context.
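     A hedged sketch of one way to sidestep the shared context entirely: ID3D11Device methods (unlike ID3D11DeviceContext methods) are thread-safe, so a build thread can create each buffer already filled via D3D11_SUBRESOURCE_DATA instead of calling Map on the immediate context (hypothetical Vertex type and variables):

     D3D11_BUFFER_DESC bd = {};
     bd.ByteWidth = UINT(vertexCount * sizeof(Vertex));
     bd.Usage = D3D11_USAGE_IMMUTABLE;          // filled once, at creation
     bd.BindFlags = D3D11_BIND_VERTEX_BUFFER;

     D3D11_SUBRESOURCE_DATA init = {};
     init.pSysMem = cpuVertices;                // mesh data built on the CPU

     ID3D11Buffer *pVB = nullptr;
     HRESULT hr = device->CreateBuffer(&bd, &init, &pVB);  // safe from any thread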
  16. I have a process that creates a D3D11 shared texture. According to https://docs.microsoft.com/en-us/window ... phics-apis, we can open in Direct3D9Ex a shared texture previously created by a non-DX9 API. The texture has the DXGI_FORMAT_B8G8R8A8_UNORM format. I'm trying to open it like this:

     D3DDevice->CreateTexture(width, height, 1, 0, D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &texture, &shared_handle);

     (usage = D3DUSAGE_RENDERTARGET makes no difference), but it says: "Direct3D9: (ERROR) :Opened and created resources don't match, unable to open the shared resource." Any thoughts?
  17. Here's my dilemma..... I would like to use a physics engine, but I'm not sure if it's practical for my project. What I need currently is fairly simple: mesh collision and response with a pill-shaped object (i.e. a character). The thing is, I build my geometry at run time, and it goes straight into an octree. It's actually built after I figure out where the character is going, in kind of a "just in time" fashion. Also, it's my own custom mesh format. I'd rather not take my mesh and put it in some 3rd-party format, because basically everything I need already exists: faces, edges, vertexes, face normals and the octree. So I'm wondering if there is an engine that will somehow let me use my own octree and, for instance, let me register callbacks to pass in the mesh data as needed.
  18. Some computer configurations have multiple GPUs, e.g. a gaming laptop with an Intel HD Graphics chip and a GeForce or Radeon chip. When enumerating the available adapters on a computer, the Intel chip is the first one in most cases. However, I want to use the best adapter for my game, so I wrote the code below to create my swap chain:

     protected void CreateDevice(Size clientSize, IntPtr outputHandle)
     {
         ProcessLogger.Instance.StartFunction(this, "CreateDevice");

         // Set swap chain flags, DXGI format and default refresh rate.
         _swapChainFlags = SharpDX.DXGI.SwapChainFlags.None;
         _dxgiFormat = SharpDX.DXGI.Format.R8G8B8A8_UNorm;
         SharpDX.DXGI.Rational refreshRate = new SharpDX.DXGI.Rational(60, 1);

         // Get proper video adapter and create device and swap chain.
         using (var factory = new SharpDX.DXGI.Factory1())
         {
             SharpDX.DXGI.Adapter adapter = GetAdapter(factory);
             if (adapter != null)
             {
                 ProcessLogger.Instance.Write(String.Format("Selected adapter: {0}", adapter.Description.Description));

                 // Get refresh rate.
                 refreshRate = GetRefreshRate(adapter, _dxgiFormat, refreshRate);
                 ProcessLogger.Instance.Write(String.Format("Selected refresh rate = {0}/{1} ({2})",
                     refreshRate.Numerator, refreshRate.Denominator,
                     refreshRate.Numerator / refreshRate.Denominator));

                 // Create Device and SwapChain
                 ProcessLogger.Instance.Write("Create device.");
                 _device = new SharpDX.Direct3D11.Device(adapter,
                     SharpDX.Direct3D11.DeviceCreationFlags.BgraSupport,
                     new SharpDX.Direct3D.FeatureLevel[] { SharpDX.Direct3D.FeatureLevel.Level_10_1 });

                 ProcessLogger.Instance.Write("Create swap chain.");
                 _swapChain = new SharpDX.DXGI.SwapChain(factory, _device,
                     GetSwapChainDescription(clientSize, outputHandle, refreshRate));

                 ProcessLogger.Instance.Write("Store device context.");
                 _deviceContext = _device.ImmediateContext;
             }
         }

         ProcessLogger.Instance.EndFunction(this, "CreateDevice");
     }

     For this function to work properly, I have to select the proper adapter in GetAdapter. Here is what it looks like:

     private SharpDX.DXGI.Adapter GetAdapter(SharpDX.DXGI.Factory1 factory)
     {
         List<SharpDX.DXGI.Adapter> adapters = new List<SharpDX.DXGI.Adapter>();
         for (int i = 0; i < factory.GetAdapterCount(); i++)
         {
             SharpDX.DXGI.Adapter adapter = factory.GetAdapter(i);
             if (SharpDX.Direct3D11.Device.IsSupportedFeatureLevel(adapter, SharpDX.Direct3D.FeatureLevel.Level_10_1))
                 adapters.Add(adapter);
         }

         try
         {
             foreach (var adapter in adapters)
                 if (adapter.Description.Description != null &&
                     (adapter.Description.Description.Contains("GeForce") ||
                      adapter.Description.Description.Contains("Radeon")))
                 {
                     return adapter;
                 }
         }
         catch { }

         return adapters.First();
     }

     So all I am doing is asking my Factory1 for a list of all adapters that support feature level 10.1 and searching for a "GeForce" or "Radeon" one. Very simple, and I use that one. This works like a charm. HOWEVER, there is one big problem: when I use this code in the Release build, the game crashes when going to fullscreen using the code below.

     public void SetFullscreenState(bool isFullscreen)
     {
         if (isFullscreen != _swapChain.IsFullScreen)
             _swapChain.SetFullscreenState(isFullscreen, null);
     }

     The error code is DXGI_ERROR_UNSUPPORTED, and after doing some research I found out that this problem only happens for the Release build, not for the Debug build. The Debug build works like a charm. It also only crashes if the selected adapter is not the first one in the list. So, if I use the first adapter (factory.GetAdapter(0)), it works! If I change my computer settings so that my GeForce is used as the primary adapter for my game, it also works. It only fails in the Release build when the selected adapter is not the first adapter in the list, and I can't figure out why... The problem is independent of the screen used and of other running applications in a multi-windowed setup; I already tested that.
  19. Gnollrunner

    Bumpy World

     After a LOT of bug fixes, and some algorithm changes, our octree marching prisms algorithm is now in a much better state. We added a completely new way of determining chunk level transitions, but before discussing it we will first talk a bit more about our voxel octree.

     Our octree is very explicit. By that we mean it is built up of actual geometric components. First we have voxel vertexes (not to be confused with mesh vertexes) for the corners of voxels. Then we have voxel edges that connect them. Then we have voxel faces, which are made up of a set of voxel edges (either three or four), and finally we have the voxels themselves, which reference our faces. Currently we support prismatic voxels, since they make the best looking world; however, the lower-level constructs are designed to also support the more common cubic voxels. In addition to our octree of voxels, voxel faces are kept in quadtrees, while voxel edges are organized into binary trees. Everything is pretty much interconnected, and there is a reference counting system that handles deallocation of unused objects.

     So why go through all this trouble? The answer: by doing things this way we can avoid traversing the octree when building meshes with our marching prisms algorithms. For instance, if there is a mesh edge on a voxel face, then since that face is referenced by the voxels on either side of it, we can easily connect together mesh triangles generated in both voxels. The same goes for voxel edges: a mesh vertex on a voxel edge is shared by all voxels that reference it. So in short, seamless meshes are built in place with little effort. This is important, since meshes will be constantly recalculated for LOD as a player moves around.

     This brings us to chunking. As we talked about in our first entry, a chunk is nothing more than a sub-section of the octree. Most importantly, we need to know where there are up and down chunk transitions. Here our face quadtrees and edge binary trees help us out. From the top of any chunk we can quickly traverse the quadtrees and binary trees and tag faces and edges as transition objects. The algorithm is quite simple, since we know there will only be one level difference between chunks, and therefore if there is a level difference, one level will be odd and the other even. So we can tag our edges and faces with up to two chunk levels in a 2-element array indexed by the last bit of the chunk level (see the sketch below). After going down the borders of each chunk, border objects will have one of two states: they will be tagged with a single level, or with two levels, one being one higher than the other. From this we can now generate transition voxels with no further need to look at a voxel's neighboring voxels.

     One more note about our explicit voxels: since they are in fact explicit, there is no requirement that they form a regular grid. As we said before, our world grid is basically wrapped around a sphere, which gives us fairly uniform terrain no matter where you are on the globe. Hopefully in the future we can also use this versatility to build trees. OK, so it's picture time......... We added some 3D simplex noise to get something that isn't a simple sphere. Hopefully in our next entry we will try a multifractal.
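     A hypothetical sketch of the parity trick described above (not the engine's actual code): adjacent chunks differ by at most one level, so of any two meeting levels one is even and one is odd, and a two-entry array indexed by (level & 1) can hold both tags without collisions.

     struct VoxelEdge {
         int chunkLevel[2] = {-1, -1};  // -1 = untagged

         void Tag(int level) {
             chunkLevel[level & 1] = level;  // even levels land in [0], odd in [1]
         }
         bool IsTransition() const {
             // Tagged from two different chunk levels: a transition edge.
             return chunkLevel[0] >= 0 && chunkLevel[1] >= 0;
         }
     };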
  20. Hello, I recently found out about the "#line" directive in HLSL. Because I handle #include's manually, the line numbers in shader compilation errors are incorrect, and dynamically adding these "#line" directives while loading the shader solves this problem, which saves me a lot of time since I then know precisely where to look when I make an error. However, I noticed that when I enable this, I can no longer debug shaders using the Visual Studio Graphics Debugger. If I want to debug e.g. the vertex shader, it asks me for the source file (while the right source file is there in the background! See the image). Is there some kind of bug in Visual Studio, or is this just an annoying side effect of using "#line"? Does anyone have experience with this? Cheers!
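     A hypothetical illustration (made-up file names) of the injection described above: after manually pasting an include into the source, a #line directive restores the compiler's notion of file and line, so error messages point at the real location.

     #line 1 "common/Lighting.hlsli"    // start of the pasted include
     float3 ApplyLight(float3 n, float3 l) { return saturate(dot(n, l)); }
     #line 42 "MainShader.hlsl"         // resume at the line after the #include
     float4 PSMain(float3 normal : NORMAL) : SV_TARGET
     {
         return float4(ApplyLight(normal, float3(0, 1, 0)), 1.0);
     }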
  21. Is it reasonable to use Direct2D for some small 2D games? I never did much with Direct2D; mostly I used it for displaying text and 2D GUI for a Direct3D engine, etc., but I never tried making a game with it. Is it better to use Direct2D and sprites, or would you prefer to go with D3D but with 2D shaders? Or is D2D not meant for games at all, no matter how big or small?
  22. Hello, I have had regular matrix-based skinning on the GPU working for quite a while, and I recently stumbled upon an implementation of dual quaternion skinning. I've had a go at implementing it in a shader, and after spending a lot of time making small changes to the formulas to make it work, I sort of got it working, but there seems to be an issue when blending bones. I found a pretty old topic on GameDev.net which, I think, describes my problem pretty well, but I haven't been able to find the cause. As in that post, if the blend weight of a vertex is 1 there is no problem; once there is blending, I get artifacts. Just for the sake of focussing on the shader side of things first, I upload dual quaternions to the GPU which are converted from regular matrices (because I know those work). Below is an image comparison between matrix skinning (left) and dual quaternion skinning (right): as you can see, especially on the shoulders, there are some serious issues. It might be because of a silly typo, however I'm surprised some parts of the mesh look perfectly fine. Below some snippets:

     // Blend bones
     float2x4 BlendBoneTransformsToDualQuaternion(float4 boneIndices, float4 boneWeights)
     {
         float2x4 dual = (float2x4)0;
         float4 dq0 = cSkinDualQuaternions[boneIndices.x][0];
         for (int i = 0; i < MAX_BONES_PER_VERTEX; ++i)
         {
             if (boneIndices[i] == -1)
             {
                 break;
             }
             if (dot(dq0, cSkinDualQuaternions[boneIndices[i]][0]) < 0)
             {
                 boneWeights[i] *= -1;
             }
             dual += boneWeights[i] * cSkinDualQuaternions[boneIndices[i]];
         }
         return dual / length(dual[0]);
     }

     // Used to transform the normal/tangent
     float3 QuaternionRotateVector(float3 v, float4 quatReal)
     {
         return v + 2.0f * cross(quatReal.xyz, quatReal.w * v + cross(quatReal.xyz, v));
     }

     // Used to transform the position
     float3 DualQuatTransformPoint(float3 p, float4 quatReal, float4 quatDual)
     {
         float3 t = 2 * (quatReal.w * quatDual.xyz - quatDual.w * quatReal.xyz + cross(quatDual.xyz, quatReal.xyz));
         return QuaternionRotateVector(p, quatReal) + t;
     }

     I've been staring at this for quite a while now, so the solution might be obvious, however I fail to see it. Help would be hugely appreciated. Cheers
  23. So I'm getting to the point in my engine design where I'm trying to implement some dynamic menus. It's going well, until a recent conceptual stumbling block that I'm not sure how to get around. It has to do with drawing resizable, internally scrollable windows inside each monitor. Some background first: I'm creating a multi-monitor capable game, so I needed to roll my own GUI. The window & menu system works for fixed-size windows. My menus are all made of many sprites, using DirectXTK SpriteBatch, with text via SpriteFont; I load the images using WICTextureLoader. For the case where activated elements in a window exceed the size of the window, that's okay: they need to be visible, and there's only one active element at a time. The problem comes with hiding passive elements as they are scrolled out of their parent window, so they aren't drawn outside of it. Also of note: I have a *single* application drawing to all of the monitors, not multiple applications drawing to multiple monitors, so solutions built around the singleton pattern are not workable (this, sadly, rules out most GUI frameworks). I've looked at using viewports, but I can't seem to get them to work properly when dealing with multi-monitor displays, especially when the monitors have different resolutions and are not aligned perfectly. I'm not even sure where to begin if I have overlapping viewports, i.e., overlapping windows. Should I keep struggling with viewports, or should I look into a way of dynamically clipping the images/text/sprites as they are edged out of the windows? If so, what is such a technique called, and where can I read up more on it? Or am I missing the forest for the trees, and there's an easy solution to my problem that I've overlooked?
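     The usual name for that kind of clipping is scissor rectangle clipping. A hedged sketch (assumes a rasterizer state created with ScissorEnable = TRUE, and hypothetical wiring): DirectXTK's SpriteBatch::Begin accepts a custom-state callback, where a per-window scissor rect can be set so every sprite in the batch is clipped to its parent window.

     #include <d3d11.h>
     #include <SpriteBatch.h>

     void DrawWindowContents(DirectX::SpriteBatch &batch,
                             ID3D11DeviceContext *context,
                             ID3D11RasterizerState *scissorEnabledRS,
                             RECT windowRect)
     {
         batch.Begin(DirectX::SpriteSortMode_Deferred,
                     nullptr, nullptr, nullptr,
                     scissorEnabledRS,
                     [=] {
                         // Clip this batch to the window's rectangle in screen space.
                         context->RSSetScissorRects(1, &windowRect);
                     });

         // ... batch.Draw(...) the window's scrollable elements here ...

         batch.End();
     }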
  24. Hey, I've come across an odd problem. I am using DirectX 11 and I've tested my results on 2 GPUs: a GeForce GTX 660M and a GTX 1060. The strange behaviour occurs, surprisingly, on the newer GPU, the GTX 1060. I am loading an HDR texture into DirectX and creating its shader resource view with the DXGI_FORMAT_R32G32B32_FLOAT format:

     D3D11_SUBRESOURCE_DATA texData;
     texData.pSysMem = data; // hdr data as a float array with rgb channels
     texData.SysMemPitch = width * (4 * 3); // size of a texture row in bytes (4 bytes per each channel rgb)

     DXGI_FORMAT format = DXGI_FORMAT_R32G32B32_FLOAT;

     // the remaining (not set below) attributes have default DirectX values
     Texture2dConfigDX11 conf;
     conf.SetFormat(format);
     conf.SetWidth(width);
     conf.SetHeight(height);
     conf.SetBindFlags(D3D11_BIND_SHADER_RESOURCE);
     conf.SetCPUAccessFlags(0);
     conf.SetUsage(D3D11_USAGE_DEFAULT);

     D3D11_TEX2D_SRV srv;
     srv.MipLevels = 1;
     srv.MostDetailedMip = 0;

     ShaderResourceViewConfigDX11 srvConf;
     srvConf.SetFormat(format);
     srvConf.SetTexture2D(srv);

     I'm sampling this texture using a linear sampler with D3D11_FILTER_MIN_MAG_MIP_LINEAR and addressing mode D3D11_TEXTURE_ADDRESS_CLAMP. This is how I sample the texture in the pixel shader:

     SamplerState linearSampler : register(s0);
     Texture2D tex;

     ...

     float4 psMain(in PS_INPUT input) : SV_TARGET
     {
         float3 color = tex.Sample(linearSampler, input.uv).rgb;
         return float4(color, 1);
     }

     First of all, I'm not getting any errors at runtime in release, and my shader using this texture gives the correct result on both GPUs. In debug mode I'm also getting correct results on both GPUs, but I'm also getting the following DX error (in the output log in Visual Studio) when debugging the app, and only on the GTX 1060:

     D3D11 ERROR: ID3D11DeviceContext::DrawIndexed: The Shader Resource View in slot 0 of the Pixel Shader unit is using the Format (R32G32B32_FLOAT). This format does not support 'Sample', 'SampleLevel', 'SampleBias' or 'SampleGrad', at least one of which may being used on the Resource by the shader. The exception is if the corresponding Sampler object is configured for point filtering (in which case this error can be ignored). This also only applies if the shader actually uses the view (e.g. it is not skipped due to shader code branching). [ EXECUTION ERROR #371: DEVICE_DRAW_RESOURCE_FORMAT_SAMPLE_UNSUPPORTED]

     Despite this error, the result of the shader is correct... This doesn't seem to make any sense. Is it possible that my graphics driver (updated to the newest version) on the GTX 1060 doesn't support sampling R32G32B32 textures in the pixel shader? This sounds like pretty basic functionality to support... The R32G32B32A32 format works flawlessly in debug/release on both GPUs.
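     A hedged note in code form: for DXGI_FORMAT_R32G32B32_FLOAT, filtered sampling is an optional hardware feature in D3D11 (unlike R32G32B32A32_FLOAT, where it is mandatory), which would explain the debug-layer complaint appearing on only one GPU/driver. CheckFormatSupport reports what the installed driver actually allows (hypothetical device variable):

     UINT support = 0;
     if (SUCCEEDED(device->CheckFormatSupport(DXGI_FORMAT_R32G32B32_FLOAT, &support)))
     {
         bool canSampleFiltered = (support & D3D11_FORMAT_SUPPORT_SHADER_SAMPLE) != 0;
         // If this is false, only point filtering is guaranteed safe with this
         // format; linear filtering may still happen to work, but isn't promised.
     }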
  25. Hello guys, I'm using SharpDX 4 and wonder why the Matrix class uses row-major matrices, while HLSL defaults to column-major. Can anyone tell me if there is a specific reason why the Matrix class uses exactly the opposite matrix order?
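     A hedged sketch of the practical consequence (hypothetical helper): SharpDX composes matrices row-major on the CPU, while HLSL cbuffers default to column_major, so the usual options are transposing before upload or declaring the shader matrix row_major.

     using SharpDX;

     static Matrix PrepareForHlsl(Matrix world, Matrix view, Matrix proj)
     {
         // Row-major composition on the CPU: v' = v * world * view * proj.
         Matrix worldViewProj = world * view * proj;

         // Transpose so HLSL's default column_major layout reads it correctly...
         return Matrix.Transpose(worldViewProj);
         // ...or skip the transpose and declare in the shader:
         //     row_major float4x4 gWorldViewProj;
     }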