Showing results for tags 'DX11'.




Found 1557 results

  1. Gnollrunner

    Anyone use Dear ImGui?

    I was looking for a GUI API I could use with DirectX 11 for my project. My search led me to Dear ImGui. I was wondering if anyone here has tried it and has any comments on it. I'm mostly interested in HUD stuff, but there will be some menus for inventory and the like. Also, if you know of some other API that I might look into, I would be interested in hearing about that too.
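    A minimal sketch of how Dear ImGui typically hooks into a D3D11 application, using its bundled Win32 and DX11 backends. The window handle, device, context and the playerHealth value are placeholders; this only illustrates the integration points, not a drop-in solution.

        #include <windows.h>
        #include <d3d11.h>
        #include "imgui.h"
        #include "imgui_impl_win32.h"
        #include "imgui_impl_dx11.h"

        // One-time setup, after the D3D11 device, context and window exist.
        void InitHudUi(HWND hWnd, ID3D11Device* dev, ID3D11DeviceContext* devcon)
        {
            ImGui::CreateContext();
            ImGui_ImplWin32_Init(hWnd);
            ImGui_ImplDX11_Init(dev, devcon);
        }

        // Per frame: build the HUD, then draw it on top of the scene.
        void DrawHudUi(int playerHealth)   // playerHealth: placeholder game value
        {
            ImGui_ImplDX11_NewFrame();
            ImGui_ImplWin32_NewFrame();
            ImGui::NewFrame();

            ImGui::Begin("HUD");
            ImGui::Text("Health: %d", playerHealth);
            ImGui::End();

            ImGui::Render();
            ImGui_ImplDX11_RenderDrawData(ImGui::GetDrawData());
        }

    The same NewFrame/Render pair wraps any HUD or inventory windows built with the regular ImGui::* calls.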
  2. I have set up basic shadow mapping, but the result is not so good, so I decided to do cascaded shadow mapping. Can anyone point me to a good source or place to start? (A frustum-split sketch follows below.)
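    As a starting point, a minimal sketch of the usual first step in cascaded shadow mapping: splitting the camera's depth range into cascades with the practical split scheme, which blends logarithmic and uniform splits. The function name and the lambda value are illustrative, not taken from any particular source.

        #include <cmath>
        #include <vector>

        // Practical split scheme: blend logarithmic and uniform splits (lambda in [0, 1]).
        std::vector<float> ComputeCascadeSplits(float nearZ, float farZ, int cascadeCount, float lambda = 0.75f)
        {
            std::vector<float> splits(cascadeCount);
            for (int i = 1; i <= cascadeCount; ++i)
            {
                float p = static_cast<float>(i) / cascadeCount;
                float logSplit = nearZ * std::pow(farZ / nearZ, p);   // logarithmic split
                float uniSplit = nearZ + (farZ - nearZ) * p;          // uniform split
                splits[i - 1] = lambda * logSplit + (1.0f - lambda) * uniSplit;
            }
            return splits; // far plane of each cascade
        }

    Each cascade then gets its own light-space orthographic projection fitted around its slice of the view frustum, and the pixel shader selects a cascade by comparing the fragment's view-space depth against these split distances.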
  3. Hello! I would like to introduce Diligent Engine, a project that I've recently been working on. Diligent Engine is a lightweight cross-platform abstraction layer between the application and the platform-specific graphics API. Its main goal is to take advantage of next-generation APIs such as Direct3D12 and Vulkan, while at the same time providing support for older platforms via Direct3D11, OpenGL and OpenGLES. Diligent Engine exposes a common front-end for all supported platforms and provides interoperability with the underlying native API. A shader source code converter allows shaders authored in HLSL to be translated to GLSL and used on all platforms. Diligent Engine supports integration with Unity and is designed to be used as a graphics subsystem in a standalone game engine, a Unity native plugin or any other 3D application. It is distributed under the Apache 2.0 license and is free to use. Full source code is available for download on GitHub.

    Features:
      • True cross-platform
      • Exact same client code for all supported platforms and rendering backends
      • No #if defined(_WIN32) ... #elif defined(LINUX) ... #elif defined(ANDROID) ...
      • No #if defined(D3D11) ... #elif defined(D3D12) ... #elif defined(OPENGL) ...
      • Exact same HLSL shaders run on all platforms and all backends
      • Modular design
      • Components are clearly separated logically and physically and can be used as needed
      • Only take what you need for your project (do not want to keep samples and tutorials in your codebase? Simply remove the Samples submodule. Only need core functionality? Use only the Core submodule)
      • No 15,000-lines-of-code files
      • Clear object-based interface
      • No global states

    Key graphics features:
      • Automatic shader resource binding designed to leverage the next-generation rendering APIs
      • Multithreaded command buffer generation
      • 50,000 draw calls at 300 fps with the D3D12 backend
      • Descriptor, memory and resource state management
      • Modern C++ features to make code fast and reliable

    The following platforms and low-level APIs are currently supported:
      • Windows Desktop: Direct3D11, Direct3D12, OpenGL
      • Universal Windows: Direct3D11, Direct3D12
      • Linux: OpenGL
      • Android: OpenGLES
      • MacOS: OpenGL
      • iOS: OpenGLES

    API Basics

    Initialization

    The engine can either perform initialization of the API or attach to an already existing D3D11/D3D12 device or OpenGL/GLES context. For instance, the following code shows how the engine can be initialized in D3D12 mode:

        #include "RenderDeviceFactoryD3D12.h"
        using namespace Diligent;
        // ...
        GetEngineFactoryD3D12Type GetEngineFactoryD3D12 = nullptr;
        // Load the dll and import the GetEngineFactoryD3D12() function
        LoadGraphicsEngineD3D12(GetEngineFactoryD3D12);
        auto *pFactoryD3D12 = GetEngineFactoryD3D12();

        EngineD3D12Attribs EngD3D12Attribs;
        EngD3D12Attribs.CPUDescriptorHeapAllocationSize[0] = 1024;
        EngD3D12Attribs.CPUDescriptorHeapAllocationSize[1] = 32;
        EngD3D12Attribs.CPUDescriptorHeapAllocationSize[2] = 16;
        EngD3D12Attribs.CPUDescriptorHeapAllocationSize[3] = 16;
        EngD3D12Attribs.NumCommandsToFlushCmdList = 64;

        RefCntAutoPtr<IRenderDevice> pRenderDevice;
        RefCntAutoPtr<IDeviceContext> pImmediateContext;
        SwapChainDesc SwapChainDesc;
        RefCntAutoPtr<ISwapChain> pSwapChain;
        pFactoryD3D12->CreateDeviceAndContextsD3D12( EngD3D12Attribs, &pRenderDevice, &pImmediateContext, 0 );
        pFactoryD3D12->CreateSwapChainD3D12( pRenderDevice, pImmediateContext, SwapChainDesc, hWnd, &pSwapChain );

    Creating Resources

    Device resources are created by the render device.
    The two main resource types are buffers, which represent linear memory, and textures, which use memory layouts optimized for fast filtering. To create a buffer, you need to populate the BufferDesc structure and call IRenderDevice::CreateBuffer(). The following code creates a uniform (constant) buffer:

        BufferDesc BuffDesc;
        BuffDesc.Name = "Uniform buffer";
        BuffDesc.BindFlags = BIND_UNIFORM_BUFFER;
        BuffDesc.Usage = USAGE_DYNAMIC;
        BuffDesc.uiSizeInBytes = sizeof(ShaderConstants);
        BuffDesc.CPUAccessFlags = CPU_ACCESS_WRITE;
        m_pDevice->CreateBuffer( BuffDesc, BufferData(), &m_pConstantBuffer );

    Similarly, to create a texture, populate the TextureDesc structure and call IRenderDevice::CreateTexture() as in the following example:

        TextureDesc TexDesc;
        TexDesc.Name = "My texture 2D";
        TexDesc.Type = TEXTURE_TYPE_2D;
        TexDesc.Width = 1024;
        TexDesc.Height = 1024;
        TexDesc.Format = TEX_FORMAT_RGBA8_UNORM;
        TexDesc.Usage = USAGE_DEFAULT;
        TexDesc.BindFlags = BIND_SHADER_RESOURCE | BIND_RENDER_TARGET | BIND_UNORDERED_ACCESS;
        m_pRenderDevice->CreateTexture( TexDesc, TextureData(), &m_pTestTex );

    Initializing Pipeline State

    Diligent Engine follows the Direct3D12 style of configuring the graphics/compute pipeline. One big Pipeline State Object (PSO) encompasses all required states (all shader stages, input layout description, depth-stencil, rasterizer and blend state descriptions, etc.).

    Creating Shaders

    To create a shader, populate the ShaderCreationAttribs structure. An important member is ShaderCreationAttribs::SourceLanguage. The following are valid values for this member:
      • SHADER_SOURCE_LANGUAGE_DEFAULT - The shader source format matches the underlying graphics API: HLSL for D3D11 or D3D12 mode, and GLSL for OpenGL and OpenGLES modes.
      • SHADER_SOURCE_LANGUAGE_HLSL - The shader source is in HLSL. For OpenGL and OpenGLES modes, the source code will be converted to GLSL. See the shader converter for details.
      • SHADER_SOURCE_LANGUAGE_GLSL - The shader source is in GLSL. There is currently no GLSL to HLSL converter.

    To allow grouping of resources based on the frequency of expected change, Diligent Engine introduces a classification of shader variables:
      • Static variables (SHADER_VARIABLE_TYPE_STATIC) are variables that are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera attributes or global light attributes constant buffers.
      • Mutable variables (SHADER_VARIABLE_TYPE_MUTABLE) define resources that are expected to change on a per-material frequency. Examples may include diffuse textures, normal maps, etc.
      • Dynamic variables (SHADER_VARIABLE_TYPE_DYNAMIC) are expected to change frequently and randomly.

    This post describes the resource binding model in Diligent Engine.
    The following is an example of shader initialization:

        ShaderCreationAttribs Attrs;
        Attrs.Desc.Name = "MyPixelShader";
        Attrs.FilePath = "MyShaderFile.fx";
        Attrs.SearchDirectories = "shaders;shaders\\inc;";
        Attrs.EntryPoint = "MyPixelShader";
        Attrs.Desc.ShaderType = SHADER_TYPE_PIXEL;
        Attrs.SourceLanguage = SHADER_SOURCE_LANGUAGE_HLSL;
        BasicShaderSourceStreamFactory BasicSSSFactory(Attrs.SearchDirectories);
        Attrs.pShaderSourceStreamFactory = &BasicSSSFactory;

        ShaderVariableDesc ShaderVars[] =
        {
            {"g_StaticTexture", SHADER_VARIABLE_TYPE_STATIC},
            {"g_MutableTexture", SHADER_VARIABLE_TYPE_MUTABLE},
            {"g_DynamicTexture", SHADER_VARIABLE_TYPE_DYNAMIC}
        };
        Attrs.Desc.VariableDesc = ShaderVars;
        Attrs.Desc.NumVariables = _countof(ShaderVars);
        Attrs.Desc.DefaultVariableType = SHADER_VARIABLE_TYPE_STATIC;

        StaticSamplerDesc StaticSampler;
        StaticSampler.Desc.MinFilter = FILTER_TYPE_LINEAR;
        StaticSampler.Desc.MagFilter = FILTER_TYPE_LINEAR;
        StaticSampler.Desc.MipFilter = FILTER_TYPE_LINEAR;
        StaticSampler.TextureName = "g_MutableTexture";
        Attrs.Desc.NumStaticSamplers = 1;
        Attrs.Desc.StaticSamplers = &StaticSampler;

        ShaderMacroHelper Macros;
        Macros.AddShaderMacro("USE_SHADOWS", 1);
        Macros.AddShaderMacro("NUM_SHADOW_SAMPLES", 4);
        Macros.Finalize();
        Attrs.Macros = Macros;

        RefCntAutoPtr<IShader> pShader;
        m_pDevice->CreateShader( Attrs, &pShader );

    Creating the Pipeline State Object

    To create a pipeline state object, define an instance of the PipelineStateDesc structure. The structure defines the pipeline specifics, such as whether the pipeline is a compute pipeline, the number and formats of the render targets, and the depth-stencil format:

        // This is a graphics pipeline
        PSODesc.IsComputePipeline = false;
        PSODesc.GraphicsPipeline.NumRenderTargets = 1;
        PSODesc.GraphicsPipeline.RTVFormats[0] = TEX_FORMAT_RGBA8_UNORM_SRGB;
        PSODesc.GraphicsPipeline.DSVFormat = TEX_FORMAT_D32_FLOAT;

    The structure also defines the depth-stencil, rasterizer, blend state, input layout and other parameters. For instance, the rasterizer state can be defined as in the code snippet below:

        // Init rasterizer state
        RasterizerStateDesc &RasterizerDesc = PSODesc.GraphicsPipeline.RasterizerDesc;
        RasterizerDesc.FillMode = FILL_MODE_SOLID;
        RasterizerDesc.CullMode = CULL_MODE_NONE;
        RasterizerDesc.FrontCounterClockwise = True;
        RasterizerDesc.ScissorEnable = True;
        //RSDesc.MultisampleEnable = false; // do not allow msaa (fonts would be degraded)
        RasterizerDesc.AntialiasedLineEnable = False;

    When all fields are populated, call IRenderDevice::CreatePipelineState() to create the PSO:

        m_pDev->CreatePipelineState(PSODesc, &m_pPSO);

    Binding Shader Resources

    Shader resource binding in Diligent Engine is based on grouping variables into three groups (static, mutable and dynamic). Static variables are variables that are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera attributes or global light attributes constant buffers.
    They are bound directly to the shader object:

        PixelShader->GetShaderVariable( "g_tex2DShadowMap" )->Set( pShadowMapSRV );

    Mutable and dynamic variables are bound via a new object called a Shader Resource Binding (SRB), which is created by the pipeline state:

        m_pPSO->CreateShaderResourceBinding(&m_pSRB);

    Dynamic and mutable resources are then bound through the SRB object:

        m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "tex2DDiffuse")->Set(pDiffuseTexSRV);
        m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "cbRandomAttribs")->Set(pRandomAttrsCB);

    The difference between mutable and dynamic resources is that mutable ones can only be set once for every instance of a shader resource binding, while dynamic resources can be set multiple times. It is important to set the variable type properly, as this may affect performance. Static variables are generally the most efficient, followed by mutable ones; dynamic variables are the most expensive from a performance point of view. This post explains shader resource binding in more detail.

    Setting the Pipeline State and Invoking a Draw Command

    Before any draw command can be invoked, all required vertex and index buffers, as well as the pipeline state, should be bound to the device context:

        // Clear render target
        const float zero[4] = {0, 0, 0, 0};
        m_pContext->ClearRenderTarget(nullptr, zero);

        // Set vertex and index buffers
        IBuffer *buffer[] = {m_pVertexBuffer};
        Uint32 offsets[] = {0};
        Uint32 strides[] = {sizeof(MyVertex)};
        m_pContext->SetVertexBuffers(0, 1, buffer, strides, offsets, SET_VERTEX_BUFFERS_FLAG_RESET);
        m_pContext->SetIndexBuffer(m_pIndexBuffer, 0);
        m_pContext->SetPipelineState(m_pPSO);

    Also, all shader resources must be committed to the device context:

        m_pContext->CommitShaderResources(m_pSRB, COMMIT_SHADER_RESOURCES_FLAG_TRANSITION_RESOURCES);

    When all required states and resources are bound, IDeviceContext::Draw() can be used to execute a draw command, or IDeviceContext::DispatchCompute() can be used to execute a compute command. Note that for a draw command a graphics pipeline must be bound, and for a dispatch command a compute pipeline must be bound. Draw() takes a DrawAttribs structure as an argument. The structure members define all attributes required to perform the command (primitive topology, number of vertices or indices, whether the draw call is indexed, instanced or indirect, etc.). For example:

        DrawAttribs attrs;
        attrs.IsIndexed = true;
        attrs.IndexType = VT_UINT16;
        attrs.NumIndices = 36;
        attrs.Topology = PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;
        pContext->Draw(attrs);

    Tutorials and Samples

    The GitHub repository contains a number of tutorials and sample applications that demonstrate the API usage.
      • Tutorial 01 - Hello Triangle: shows how to render a simple triangle using the Diligent Engine API.
      • Tutorial 02 - Cube: demonstrates how to render an actual 3D object, a cube. It shows how to load shaders from files and create and use vertex, index and uniform buffers.
      • Tutorial 03 - Texturing: demonstrates how to apply a texture to a 3D object. It shows how to load a texture from a file, create a shader resource binding object and sample a texture in the shader.
      • Tutorial 04 - Instancing: demonstrates how to use instancing to render multiple copies of one object using a unique transformation matrix for every copy.
      • Tutorial 05 - Texture Array: demonstrates how to combine instancing with texture arrays to use a unique texture for every instance.
      • Tutorial 06 - Multithreading: shows how to generate command lists in parallel from multiple threads.
      • Tutorial 07 - Geometry Shader: shows how to use a geometry shader to render a smooth wireframe.
      • Tutorial 08 - Tessellation: shows how to use hardware tessellation to implement a simple adaptive terrain rendering algorithm.
      • Tutorial 09 - Quads: shows how to render multiple 2D quads, frequently switching textures and blend modes.

    The AntTweakBar sample demonstrates how to use the AntTweakBar library to create a simple user interface. The atmospheric scattering sample is a more advanced example: it demonstrates how Diligent Engine can be used to implement various rendering tasks such as loading textures from files, using complex shaders, rendering to textures, and using compute shaders and unordered access views. The repository also includes an Asteroids performance benchmark based on this demo developed by Intel. It renders 50,000 unique textured asteroids and lets you compare the performance of the D3D11 and D3D12 implementations. Every asteroid is a combination of one of 1000 unique meshes and one of 10 unique textures.

    Integration with Unity

    Diligent Engine supports integration with Unity through the Unity low-level native plugin interface. The engine relies on Native API Interoperability to attach to the graphics API initialized by Unity. After the Diligent Engine device and context are created, they can be used as usual to create resources and issue rendering commands. GhostCubePlugin shows an example of how Diligent Engine can be used to render a ghost cube that is only visible as a reflection in a mirror.
  4. Hi Guys, I have a problem where rendering direct to the back buffer is fine. But if I create a render target and draw to it, then render it to the back buffer, the sprite transparency eats holes through anything on the RT. Which then reveals the back buffer itself. On the right side the render target is coloured a dark red to highlight the problem. The blend state is being created and set early on (shortly after context creation) and is working ok as the back buffer is behaving as expected. // Create default blend state ID3D11BlendState* d3dBlendState = NULL; D3D11_BLEND_DESC omDesc; ZeroMemory(&omDesc, sizeof(D3D11_BLEND_DESC)); omDesc.RenderTarget[0].BlendEnable = true; omDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA; omDesc.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA; omDesc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD; omDesc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE; omDesc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ZERO; omDesc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD; omDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL; if (FAILED(d3dDevice->CreateBlendState(&omDesc, &d3dBlendState))) return E_WINDOW_DEVICE_BLEND_STATE; d3dContext->OMSetBlendState(d3dBlendState, 0, 0xffffffff); if (d3dBlendState) d3dBlendState->Release(); Any ideas would be greatly appreciated. Thanks in advance.
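    Below is a hedged sketch related to the post above: one possible (unconfirmed) explanation for transparency "eating holes" in an off-screen target is that the target's alpha channel gets overwritten (DestBlendAlpha = ZERO), so partially transparent texels later punch through when the target is composited onto the back buffer. The sketch shows a blend description whose alpha channel accumulates coverage instead; the function and variable names are placeholders.

        #include <d3d11.h>

        // Blend state whose alpha channel accumulates coverage
        // (dstA = srcA + dstA * (1 - srcA)) instead of overwriting it, so the
        // off-screen target does not end up with low-alpha "holes" when it is
        // later blended onto the back buffer.
        HRESULT CreateAccumulatingBlendState(ID3D11Device* device, ID3D11BlendState** ppState)
        {
            D3D11_BLEND_DESC desc = {};
            desc.RenderTarget[0].BlendEnable           = TRUE;
            desc.RenderTarget[0].SrcBlend              = D3D11_BLEND_SRC_ALPHA;
            desc.RenderTarget[0].DestBlend             = D3D11_BLEND_INV_SRC_ALPHA;
            desc.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
            desc.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
            desc.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_INV_SRC_ALPHA; // key difference from ZERO
            desc.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
            desc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
            return device->CreateBlendState(&desc, ppState);
        }

    Another common route is to render the off-screen pass with premultiplied alpha and composite it with SrcBlend = ONE, DestBlend = INV_SRC_ALPHA.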
  5. This is pretty much a question for @SoldierOfLight, probably.. I've read a ton of information about the different flip modes and the various ways of configuring the swap chain. Would really like to get down to near 0ms latency at 60fps. My GPU is somewhat old - NVidia GTX 430 - but my software is up to date. Latest NVidia drivers, latest Windows 10 (April 2018 version 1803) PresentMon indicates dwm.exe is "Hardware: Legacy Flip" (not sure if this is important but thought I'd include it since 'Legacy' sounds bad) If I run windowed, PresentMon indicates "Composed: Flip" with a latency around 48ms If I run fullscreen with SetFullscreenState(true), PresentMon indicates "Hardware Composed: Independent Flip: Plane 0" with a latency around 46ms If I run fullscreen as just a borderless window covering the whole screen, PresentMon indicates "Hardware Composed: Independent Flip: Plane 0" and around 32ms latency In windowed mode, DXGI_SWAP_CHAIN_DESC1 setup is: swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_DISCARD; swapChainDesc.Flags = DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT; SetMaximumFrameLatency is 1 frame (DISCARD seems to have the same latency as SEQUENTIAL) In fullscreen mode with SetFullscreenState, I find I have to remove the WAITABLE_OBJECT flag - if I don't, DX gives an error when SetFullscreenState is called. Running in DX debug mode, it logs a message saying that the WAITABLE_OBJECT flag can't be combined with fullscreen (although I've seen other posts claiming that this restriction was lifted at some point?? not on my machine hehe) when I call present, I'm just calling swapChain->Present(1,0) Questions: 1) Why can't I combine WAITABLE_OBJECT with SetFullscreenState? 2) Do I need to use SetFullscreenState anyway? Currently the lowest latency is just borderless window covering the screen, with 32ms latency. But why is it not 16ms? 3) Why is SetFullscreenState slower, at 48ms latency? It's worth mentioning that I am in this case also creating a borderless window that covers the screen.. and then calling SetFullscreenState on that window.. maybe that's confusing the system (?) 4) Is "Hardware Composed: Independent Flip: Plane 0" the best I can hope for or is there some other flip mode that is optimal? If so, what changes do I need to make to the code to get there? ------- More information after further testing: With the borderless fullscreen window (not using SetFullscreenState), loop looks like this: 1) WaitForSingleObject(WAITABLE_OBJECT) 2) Spin loop for 15ms (almost the entire duration of the frame) <-- added after writing the original post 3) read controller/user inputs 4) Draw the next frame of the game 5) Present With the above, PresentMon indicates around 17ms latency with "Hardware Composed: Independent Flip: Plane 0" Is this as good as I can do or can I somehow get the latency reported by PresentMon even lower? I am measuring controller-to-display latency with a 240hz camera and a gamepad with an LED wired into the start button. I am seeing as low as 5 240hz frames (just over 16ms latency) between the LED lighting up on the controller and visible results appearing on screen. But, sometimes I see up to 14 240hz frames. The average is probably around 8/9 frames. Have I minimized the latency from the perspective of the application? For some reason I feel like I should be able to achieve very close to 0ms latency. Conceptually if I wait until the very end of a vertical refresh cycle.. 
then sample the user input, draw the game, call Present() *right* before the gpu is ready to display the next frame.. then it would get my back buffer and swap it to front only 0-2 ms after I call Present. How do I get to a solution like this? If you're curious I'm using the Dell 2414H monitor which is reviewed to have 4ms latency, and other tests I've done with dedicated hardware more or less confirm this (http://www.tftcentral.co.uk/reviews/dell_u2414h.htm#lag) Thanks!
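    For reference, a minimal sketch of the waitable-object frame loop described above (borderless window, FLIP_DISCARD, maximum frame latency of 1). The callback names are placeholders, and this only restates the structure of the loop in the post; it is not a claim that it lowers latency further.

        #include <d3d11.h>
        #include <dxgi1_3.h>

        // Assumes the swap chain was created with DXGI_SWAP_EFFECT_FLIP_DISCARD and
        // DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT on a borderless window.
        void RunFrameLoop(IDXGISwapChain2* swapChain, bool (*pumpMessages)(), void (*updateAndDraw)())
        {
            swapChain->SetMaximumFrameLatency(1);
            HANDLE frameLatencyWaitableObject = swapChain->GetFrameLatencyWaitableObject();

            while (pumpMessages())
            {
                // Block until the driver/compositor can accept another frame, so
                // input is sampled as late as possible before Present.
                WaitForSingleObjectEx(frameLatencyWaitableObject, 1000, TRUE);

                updateAndDraw();          // sample input, render the frame
                swapChain->Present(1, 0); // present with vsync
            }

            CloseHandle(frameLatencyWaitableObject);
        }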
  6. Hello, Just wanted to share the link to the latest upgrade for the Conservative Morphological Anti-Aliasing, in case someone is interested. It is a post-process AA technique in the same class of approaches as FXAA & SMAA but focusing on minimizing the input image change - that is, apply as much anti-aliasing as possible while avoiding blurring textures or other sharp features. Details available on https://software.intel.com/en-us/articles/conservative-morphological-anti-aliasing-20 and full DX11 source code under MIT license available on https://github.com/GameTechDev/CMAA2/ (compute shader implementation, DX12 & Vulkan ports are in the works too!)
  7. Hi all, I am attempting to follow the Rastertek tutorial http://www.rastertek.com/dx11tut37.html. Right now I am having a problem: it appears that my input layout is not being initialized properly, and I'm not sure why; an exception is being thrown when I call CreateInputLayout...

    Exception thrown at 0x00007FFD9B8EA388 in MyGame.exe: Microsoft C++ exception: _com_error at memory location 0x0000000D4D18ED30.

    Maybe you all can point out where I'm going wrong here?

        void Renderer::InitPipeline()
        {
            // load and compile the two shaders
            ID3D10Blob *VS, *PS;
            D3DX11CompileFromFile("Shaders.shader", 0, 0, "VShader", "vs_4_0", 0, 0, 0, &VS, 0, 0);
            D3DX11CompileFromFile("Shaders.shader", 0, 0, "PShader", "ps_4_0", 0, 0, 0, &PS, 0, 0);

            // encapsulate both shaders into shader objects
            dev->CreateVertexShader(VS->GetBufferPointer(), VS->GetBufferSize(), NULL, &pVS);
            dev->CreatePixelShader(PS->GetBufferPointer(), PS->GetBufferSize(), NULL, &pPS);

            // set the shader objects
            devcon->VSSetShader(pVS, 0, 0);
            devcon->PSSetShader(pPS, 0, 0);

            // create the input layout object
            D3D11_INPUT_ELEMENT_DESC ied[] =
            {
                { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
                { "COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
                // Add another input for the instance buffer
                { "INSTANCE", 0, DXGI_FORMAT_R32G32B32_FLOAT, 1, 0, D3D11_INPUT_PER_INSTANCE_DATA, 1}
            };
            dev->CreateInputLayout(ied, 2, VS->GetBufferPointer(), VS->GetBufferSize(), &pLayout);
            devcon->IASetInputLayout(pLayout);
        }

    If I have not provided enough information, please help me understand what is needed so I can provide the info. (An error-checking sketch follows below.)
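    As a general diagnostic sketch (not a fix for the specific code above): checking the HRESULT and the error blob of every compile and create call usually shows what the _com_error is wrapping, and deriving the element count with _countof keeps it from drifting away from the array. D3DCompileFromFile from d3dcompiler is used here instead of the deprecated D3DX11 helper; the names are placeholders.

        #include <d3d11.h>
        #include <d3dcompiler.h>
        #include <cstdio>
        #pragma comment(lib, "d3dcompiler.lib")

        // Compile a vertex shader and create an input layout with every HRESULT
        // checked, so failures surface as readable messages instead of exceptions.
        HRESULT CreateLayoutChecked(ID3D11Device* dev,
                                    const D3D11_INPUT_ELEMENT_DESC* elems, UINT elemCount,
                                    ID3D11InputLayout** ppLayout)
        {
            ID3DBlob* vsBlob = nullptr;
            ID3DBlob* errors = nullptr;
            HRESULT hr = D3DCompileFromFile(L"Shaders.shader", nullptr, nullptr,
                                            "VShader", "vs_4_0", 0, 0, &vsBlob, &errors);
            if (FAILED(hr))
            {
                if (errors)  // print the HLSL compiler's error messages
                {
                    std::printf("%s\n", static_cast<const char*>(errors->GetBufferPointer()));
                    errors->Release();
                }
                if (vsBlob) vsBlob->Release();
                return hr;
            }
            if (errors) errors->Release();

            // Pass the element count derived from the array (e.g. _countof(ied)).
            hr = dev->CreateInputLayout(elems, elemCount,
                                        vsBlob->GetBufferPointer(), vsBlob->GetBufferSize(), ppLayout);
            vsBlob->Release();
            return hr;
        }

    The same pattern applies to CreateVertexShader/CreatePixelShader; an input layout can also fail if the element descriptions do not match the vertex shader's input signature.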
  8. Hi all, I'm trying to cut down on some of the spaghetti in my code after running through a few lessons and tutorials. Currently, I have everything grouped in a "Renderer" class and I'm trying to break my larger functions down into more manageable bits. I have three main functions that are initializing all my D3D and graphics:

        void Renderer::InitD3D(HWND hWnd)
        {
            // create a struct to hold information about the swap chain
            DXGI_SWAP_CHAIN_DESC scd;

            // clear out the struct for use
            ZeroMemory(&scd, sizeof(DXGI_SWAP_CHAIN_DESC));

            // fill the swap chain description struct
            scd.BufferCount = 1;                                 // one back buffer
            scd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;  // use 32-bit color
            scd.BufferDesc.Width = SCREEN_WIDTH;                 // set the back buffer width
            scd.BufferDesc.Height = SCREEN_HEIGHT;               // set the back buffer height
            scd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;   // how swap chain is to be used
            scd.OutputWindow = hWnd;                             // the window to be used
            scd.SampleDesc.Count = 4;                            // how many multisamples
            scd.Windowed = TRUE;                                 // windowed/full-screen mode
            scd.Flags = DXGI_SWAP_CHAIN_FLAG_ALLOW_MODE_SWITCH;  // allow full-screen switching

            // create a device, device context and swap chain using the information in the scd struct
            D3D11CreateDeviceAndSwapChain(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, NULL, NULL, NULL,
                D3D11_SDK_VERSION, &scd, &swapchain, &dev, NULL, &devcon);

            // get the address of the back buffer
            ID3D11Texture2D *pBackBuffer;
            swapchain->GetBuffer(0, __uuidof(ID3D11Texture2D), (LPVOID*)&pBackBuffer);

            // use the back buffer address to create the render target
            dev->CreateRenderTargetView(pBackBuffer, NULL, &backbuffer);
            pBackBuffer->Release();

            // set the render target as the back buffer
            devcon->OMSetRenderTargets(1, &backbuffer, NULL);

            // Set the viewport
            D3D11_VIEWPORT viewport;
            ZeroMemory(&viewport, sizeof(D3D11_VIEWPORT));
            viewport.TopLeftX = 0;
            viewport.TopLeftY = 0;
            viewport.Width = SCREEN_WIDTH;
            viewport.Height = SCREEN_HEIGHT;
            devcon->RSSetViewports(1, &viewport);
        }

        void Renderer::InitPipeline(ID3D11Device * dev, ID3D11DeviceContext * devcon)
        {
            // load and compile the two shaders
            ID3D10Blob *VS, *PS;
            D3DX11CompileFromFile("Shaders.shader", 0, 0, "VShader", "vs_4_0", 0, 0, 0, &VS, 0, 0);
            D3DX11CompileFromFile("Shaders.shader", 0, 0, "PShader", "ps_4_0", 0, 0, 0, &PS, 0, 0);

            // encapsulate both shaders into shader objects
            dev->CreateVertexShader(VS->GetBufferPointer(), VS->GetBufferSize(), NULL, &pVS);
            dev->CreatePixelShader(PS->GetBufferPointer(), PS->GetBufferSize(), NULL, &pPS);

            // set the shader objects
            devcon->VSSetShader(pVS, 0, 0);
            devcon->PSSetShader(pPS, 0, 0);

            // create the input layout object
            createInputLayout(dev, devcon, VS);
        }

        void Renderer::InitGraphics(std::vector<Renderable_Object*> Game_Objects)
        {
            for (int i = 0; i < Game_Objects.size(); i++)
            {
                for (int j = 0; j < Game_Objects[i]->getVertices().size(); j++)
                {
                    OurVertices.push_back(Game_Objects[i]->getVertices()[j]);
                }
            }
            createVertexBuffer(dev);
            createInstanceBuffer(dev);
            createProjectionBuffer(dev);
            createWorldBuffer(dev);
        }

    I have InitD3D, which creates my device, swap chain, and device context. This seems to make sense to me; it's all of the "background" work. Then comes InitPipeline; this function, in my mind, should be in charge of preparing the pathway through which we will shove data into our shaders for rendering. Then I have my InitGraphics, which really all I want it to do is set up the data to be shoved down the previously mentioned pipeline; right now, in my mind, I think it is doing more than that.
    Am I right in thinking that creating the buffers is part of the pipeline setup, and that after the setup is done, updating those buffers is the graphics initialization I am thinking of? Again, three phases: initialize the D3D window and device and crap -> initialize the pipeline and the way of sending data to it -> push data onto the pipeline for the initial rendering.

    One more side question: do I need to use the same subresource that I used to initialize a buffer when I go to update it with map/unmap, or do I set a new subresource_data? I'm starting to think I need the original subresource_data, or at least a pointer to it, because when I call CreateBuffer I pass it a reference to a subresource_data object... Sorry all, I tend to ask before I investigate; given that this code here works just fine for me, it would suggest that I do not need the original subresource, all I need is the data I'm going to update with. (See the sketch after this post.)

        if (createVertexBuffer(dev))
        {
            // copy the vertices into the buffer
            D3D11_MAPPED_SUBRESOURCE ms;
            devcon->Map(pVBuffer, NULL, D3D11_MAP_WRITE_DISCARD, NULL, &ms);               // map the buffer
            memcpy(ms.pData, OurVertices.data(), sizeof(Vertex) * OurVertices.size());     // copy the data
            devcon->Unmap(pVBuffer, NULL);                                                 // unmap the buffer
        }

    So this means I can successfully separate my buffer creation from my buffer initialization. I can create my buffers as the cars that will carry crap down the pipeline in the InitPipeline function, then I can put people in said cars in the InitGraphics function.
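    A minimal sketch confirming the pattern in the post above: updating a buffer created with D3D11_USAGE_DYNAMIC and D3D11_CPU_ACCESS_WRITE only needs the new data; the D3D11_SUBRESOURCE_DATA used at creation time is not required again, because WRITE_DISCARD hands back a fresh region to fill. Names are placeholders.

        #include <d3d11.h>
        #include <cstring>

        // Refill a dynamic buffer with new contents; nothing from creation time is reused.
        bool UpdateDynamicBuffer(ID3D11DeviceContext* devcon, ID3D11Buffer* buffer,
                                 const void* data, size_t byteCount)
        {
            D3D11_MAPPED_SUBRESOURCE mapped = {};
            if (FAILED(devcon->Map(buffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
                return false;
            std::memcpy(mapped.pData, data, byteCount); // copy only the new data
            devcon->Unmap(buffer, 0);
            return true;
        }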
  9. Hi all! I'm trying to implement the Light Indexed Deferred Rendering technique. I modified the original demo: 1) removed some UI elements; 2) removed the view-space light calculation; 3) I fill the light indices during startup; 4) optional: I tried to use a UBO instead of a Texture1D (uncomment //#define USE_UBO). My modified version of the demo. My implementation details: I use constant buffers instead of a Texture1D for storing the light source information, and instead of OpenGL I use Direct3D11. My implementation is divided into the following parts: 1) Packing of light indices for each light during startup: void LightManager::LightManagerImpl::FillLightIndices() { int n = static_cast<int>(lights.size()); for (int lightIndex = n - 1; lightIndex >= 0; --lightIndex) { Vector4D& OutColor = lightIndices.push_back(); // Set the light index color ubyte convertColor = static_cast<ubyte>(lightIndex + 1); ubyte redBit = (convertColor & (0x3 << 0)) << 6; ubyte greenBit = (convertColor & (0x3 << 2)) << 4; ubyte blueBit = (convertColor & (0x3 << 4)) << 2; ubyte alphaBit = (convertColor & (0x3 << 6)) << 0; OutColor = Vector4D(redBit, greenBit, blueBit, alphaBit); const float divisor = 255.0f; OutColor /= divisor; } } 2) Optional/test implementation: update the light positions (animation). 3) Rendering the light source geometry into an RGBA render target (light sources buffer) using two shaders from the demo: Pixel shader: uniform float4 LightIndex : register(c0); struct PS { float4 position : POSITION; }; float4 psMain(in PS ps) : COLOR { return LightIndex; }; Vertex shader: uniform float4x4 ViewProjMatrix : register(c0); uniform float4 LightData : register(c4); struct PS { float4 position : POSITION; }; PS vsMain(in float4 position : POSITION) { PS Out; Out.position = mul(float4(LightData.xyz + position.xyz * LightData.w, 1.0f), ViewProjMatrix); return Out; } These shaders are compiled in 3DEngine into C++ code. 4) Calculating the final lighting, using the prepared texture with light indices. The pixel shaders can be found in the attached project.
The final shaders: Pixel: // DeclTex2D(tex1, 0); // terrain first texture DeclTex2D(tex2, 1); // terrain second texture DeclTex2D(BitPlane, 2); // Light Buffer struct Light { float4 posRange; // pos.xyz + w - Radius float4 colorLightType; // RGB color + light type }; // The light list uniform Light lights[NUM_LIGHTS]; struct VS_OUTPUT { float4 Pos: POSITION; float2 texCoord: TEXCOORD0; float3 Normal: TEXCOORD1; float4 lightProjSpaceLokup : TEXCOORD2; float3 vVec : TEXCOORD3; }; // Extract light indices float4 GetLightIndexImpl(Texture2D BitPlane, SamplerState sBitPlane, float4 projectSpace) { projectSpace.xy /= projectSpace.w; projectSpace.y = 1.0f - projectSpace.y; float4 packedLight = tex2D(BitPlane, projectSpace.xy); float4 unpackConst = float4(4.0, 16.0, 64.0, 256.0) / 256.0; float4 floorValues = ceil(packedLight * 254.5); float4 lightIndex; for(int i = 0; i < 4; i++) { packedLight = floorValues * 0.25; floorValues = floor(packedLight); float4 fracParts = packedLight - floorValues; lightIndex[i] = dot(fracParts, unpackConst); } return lightIndex; } #define GetLightIndex(tex, pos) GetLightIndexImpl(tex, s##tex, pos) // calculate final lighting float4 CalculateLighting(float4 color, float3 vVec, float3 Normal, float4 lightIndex) { float3 ambient_color = float3(0.2f, 0.2f, 0.2f); float3 lighting = float3(0.0f, 0.0f, 0.0f); for (int i = 0; i < 4; ++i) { float lIndex = 255.0f * lightIndex[i]; // read the light source data from constant buffer Light light = lights[int(lIndex)]; // Get the vector from the light center to the surface float3 lightVec = light.posRange.xyz - vVec; // original from demo doesn't work correctly #if 0 // Scale based on the light radius float3 lVec = lightVec / light.posRange.a; float atten = 1.0f - saturate(dot(lVec, lVec)); #else float d = length(lightVec) / light.posRange.a; const float3 ConstantAtten = float3(0.4f, 0.01f, 0.01f); float atten = 1.0f / (ConstantAtten.x + ConstantAtten.y * d + ConstantAtten.z * d * d); #endif lightVec = normalize(lightVec); float3 H = normalize(lightVec + vVec); float diffuse = saturate(dot(lightVec, Normal)); float specular = pow(saturate(dot(lightVec, H)), 16.0); lighting += atten * (diffuse * light.colorLightType.xyz * color.xyz + color.xyz * ambient_color + light.colorLightType.xyz * specular); } return float4(lighting.xyz, color.a); } float4 psMain(in VS_OUTPUT In) : COLOR { float4 Color1 = tex2D(tex1, In.texCoord); float4 Color2 = tex2D(tex2, In.texCoord); float4 Color = Color1 * Color2; float3 Normal = normalize(In.Normal); // get light indices from Light Buffer float4 lightIndex = GetLightIndex(BitPlane, In.lightProjSpaceLokup); // calculate lightung float4 Albedo = CalculateLighting(Color, In.vVec, Normal, lightIndex); Color.xyz += Albedo.xyz; return Color; } Vertex Shaders: // uniform float4x4 ViewProjMatrix : register(c0); struct VS_OUTPUT { float4 Pos: POSITION; float2 texCoord: TEXCOORD0; float3 Normal: TEXCOORD1; float4 lightProjSpaceLokup : TEXCOORD2; float3 vVec : TEXCOORD3; }; float4 CalcLightProjSpaceLookup(float4 projectSpace) { projectSpace.xy = (projectSpace.xy + float2(projectSpace.w, projectSpace.w)) * 0.5; return projectSpace; } VS_OUTPUT VSmain(float4 Pos: POSITION, float3 Normal: NORMAL, float2 texCoord: TEXCOORD0) { VS_OUTPUT Out; Out.Pos = mul(float4(Pos.xyz, 1.0f), ViewProjMatrix); Out.texCoord = texCoord; Out.lightProjSpaceLokup = CalcLightProjSpaceLookup(Out.Pos); Out.vVec = Pos.xyz; Out.Normal = Normal; return Out; } The result: We can show the Light sources Buffer - texture with light indices:(console 
command: enableshowlightbuffer 1). If we try to show the light geometry, we will see the following result (console command: enabledrawlights 1). And here is my demo of light indexed deferred rendering: https://www.dropbox.com/s/5t9f5vpg83sspfs/3DMove_multilighting_gd.net.7z?dl=0 1) Try to run the demo, moving on the terrain using W, A, S, D. 2) Try to show the light geometry (console command: enabledrawlights 1) and the light buffer (console command: enableshowlightbuffer 1). What am I doing wrong? How do I fix the lighting calculation?
  10. Hi, when using FW1FontWrapper for text rendering, the text gets aliased if the screen resolution changes. E.g. write text using FW1FontWrapper in a window whose resolution is a quarter of the full-screen resolution, and then switch the window to full screen without changing the font size: you can see the text getting aliased. Is there any way to make FW1FontWrapper independent of resolution? Thanks
  11. I used DirectX in projects on Borland C++ Builder 6.0. Microsoft .libs don't work with Builder, so I took special .lib files from here: http://www.clootie.ru/cbuilder/index.html#DX_CBuilder_SDKs Now I've moved to C++ Builder 10 Berlin and have to find a way to attach DirectX to my project again. I've searched the Web but found nothing on how to get access to DirectX in Embarcadero Builders, only old information on Borland Builder and old .libs. DirectX SDK .libs still can't be used with the new Builder 10 because of the incompatible format. My question is: did anyone use DirectX with Embarcadero Builder, and how did you solve the .libs problem? Can anyone give me a guide or example of how to make DirectX accessible in a Builder 10 project? Why is there no information on this anywhere?
  12. Hello all, silly question. I've made an assumption that translation calculations should be performed inside the vertex shader. I realize there might be ways to do it outside of the shader, but it made more sense to me that since I'm already using a shader, I should continue that way. With all that being said, are there any good tutorials around for simple translation inside the shader? I'm most interested in how I pass offset values by way of a translation matrix to my shader. I know how I can perform the calculations; that part I've done before. My lack of understanding falls in the logistics portion: how to communicate the translation matrix (or identity matrix) to the shader in a dynamic way. I'm also making an assumption, which could be wrong, that the constant buffer is not the place to do this? School me; meanwhile, I'll keep up the Google fight. Edit: Found this gem of a quote from Google... "Constant buffers are optimized for constant-variable usage, which is characterized by lower-latency access and more frequent update from the CPU." I may be wrong about my initial assumption as to what constant buffers are for. Now to ponder how to change the values and push the new values to the shader. Another edit: Maybe I need to use UpdateSubresource to change the values in my cbuffer… Hmmmmm (see the sketch below).
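    A minimal sketch of pushing a per-object translation (world) matrix to the vertex shader through a constant buffer refreshed with UpdateSubresource. It assumes a DEFAULT-usage buffer bound to register b0 and uses DirectXMath rather than the D3DX math types; all names are placeholders.

        #include <d3d11.h>
        #include <DirectXMath.h>

        struct PerObjectCB
        {
            DirectX::XMFLOAT4X4 world; // must match the cbuffer layout in HLSL (16-byte aligned)
        };

        ID3D11Buffer* CreatePerObjectCB(ID3D11Device* dev)
        {
            D3D11_BUFFER_DESC desc = {};
            desc.ByteWidth = sizeof(PerObjectCB);
            desc.Usage = D3D11_USAGE_DEFAULT;            // DEFAULT usage allows UpdateSubresource
            desc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
            ID3D11Buffer* cb = nullptr;
            dev->CreateBuffer(&desc, nullptr, &cb);
            return cb;
        }

        void SetTranslation(ID3D11DeviceContext* devcon, ID3D11Buffer* cb, float x, float y, float z)
        {
            PerObjectCB data;
            // HLSL consumes matrices column-major by default, hence the transpose.
            DirectX::XMStoreFloat4x4(&data.world,
                DirectX::XMMatrixTranspose(DirectX::XMMatrixTranslation(x, y, z)));
            devcon->UpdateSubresource(cb, 0, nullptr, &data, 0, 0);
            devcon->VSSetConstantBuffers(0, 1, &cb);     // register(b0) in the shader
        }

    In the vertex shader the matrix lives in a cbuffer at b0 and is applied with mul(position, world); a dynamic buffer with Map/WRITE_DISCARD is the usual alternative when the data changes every frame.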
  13. Hello! I made a skinning function, but it is CPU skinning (when I skin a high-polygon model, it is very slow), so now I'm making GPU skinning. But my program draws the model strangely! Here's my code. This is the initialization code: ...//meshes,numjoints..etc else if (checkString == "joints") { Joint tempJoint; fileIn >> checkString; // Skip the "{" for (int i = 0; i < MD5Model.numJoints; i++) { fileIn >> tempJoint.name; if (tempJoint.name[tempJoint.name.size() - 1] != '"') { wchar_t checkChar; bool jointNameFound = false; while (!jointNameFound) { checkChar = fileIn.get(); if (checkChar == '"') jointNameFound = true; tempJoint.name += checkChar; } } fileIn >> tempJoint.parentID; // Store Parent joint's ID fileIn >> checkString; // Skip the "(" fileIn >> tempJoint.pos.x >> tempJoint.pos.z >> tempJoint.pos.y; fileIn >> checkString >> checkString; // Skip the ")" and "(" // Store orientation of this joint fileIn >> tempJoint.orientation.x >> tempJoint.orientation.z >> tempJoint.orientation.y; tempJoint.name.erase(0, 1); tempJoint.name.erase(tempJoint.name.size() - 1, 1); float t = 1.0f - (tempJoint.orientation.x * tempJoint.orientation.x) - (tempJoint.orientation.y * tempJoint.orientation.y) - (tempJoint.orientation.z * tempJoint.orientation.z); if (t < 0.0f) { tempJoint.orientation.w = 0.0f; } else { tempJoint.orientation.w = -sqrtf(t); } //ADDITION HERE! XMVECTOR quat = XMVectorSet(tempJoint.orientation.x, tempJoint.orientation.y, tempJoint.orientation.z, tempJoint.orientation.w); //const XMMATRIX rotation = XMMatrixRotationQuaternion(quat); MD5Model.invbindrots.resize(MD5Model.numJoints); MD5Model.invbindpos.resize(MD5Model.numJoints); XMStoreFloat4(&MD5Model.invbindrots[i], XMQuaternionInverse(quat)); MD5Model.invbindpos[i] = { -tempJoint.pos.x,-tempJoint.pos.y,-tempJoint.pos.z }; //ENDADDITION std::getline(fileIn, checkString); // Skip rest of this line MD5Model.joints.push_back(tempJoint); // Store the joint into this models joint vector } //calc indices...tris..weights..etc //And this code calculates the bone indices and weights! for (int i = 0; i < subset.vertices.size(); i++) { VertexStruct_SkinGPU& vert = subset.vertices[i]; vert.boneIdx[0] = 0; vert.boneIdx[1] = 0; vert.boneIdx[2] = 0; vert.boneIdx[3] = 0; for (int j = 0; j < vert.WeightCount; j++) { if (j >= 4) break; Weight& weight = subset.weights[vert.StartWeight + j]; vert.boneIdx[j] = weight.jointID; vert.weight.x = weight.bias; vert.weight.y = weight.bias; vert.weight.z = weight.bias; } } And this is the update code.
void Mesh_Skinned::UpdateMD5ModelGPU(Model3DGPU & MD5Model, float deltaTime, int animation) { MD5Model.animations[animation].currAnimTime += deltaTime; // Update the current animation time if (MD5Model.animations[animation].currAnimTime > MD5Model.animations[animation].totalAnimTime) MD5Model.animations[animation].currAnimTime = 0.0f; // Which frame are we on float currentFrame = MD5Model.animations[animation].currAnimTime * MD5Model.animations[animation].frameRate; int frame0 = floorf(currentFrame); int frame1 = frame0 + 1; // Make sure we don't go over the number of frames if (frame0 == MD5Model.animations[animation].numFrames - 1) frame1 = 0; float interpolation = currentFrame - frame0; // Get the remainder (in time) between frame0 and frame1 to use as interpolation factor //std::vector<XMMATRIX> interpolatedSkeleton; // Create a frame skeleton to store the interpolated skeletons in //interpolatedSkeleton.resize(MD5Model.animations[animation].numJoints); // Compute the interpolated skeleton std::vector<XMFLOAT3> animated_pos_arr; std::vector<XMFLOAT4> animated_rot_arr; std::vector<XMFLOAT3> skinned_pos_arr_; std::vector<XMFLOAT4> skinned_rot_arr_; for (int i = 0; i < MD5Model.animations[animation].numJoints; i++) { Joint tempJoint; Joint joint0 = MD5Model.animations[animation].frameSkeleton[frame0][i]; // Get the i'th joint of frame0's skeleton Joint joint1 = MD5Model.animations[animation].frameSkeleton[frame1][i]; // Get the i'th joint of frame1's skeleton tempJoint.parentID = joint0.parentID; // Set the tempJoints parent id // Turn the two quaternions into XMVECTORs for easy computations XMVECTOR joint0Orient = XMVectorSet(joint0.orientation.x, joint0.orientation.y, joint0.orientation.z, joint0.orientation.w); XMVECTOR joint1Orient = XMVectorSet(joint1.orientation.x, joint1.orientation.y, joint1.orientation.z, joint1.orientation.w); // Interpolate positions tempJoint.pos.x = joint0.pos.x + (interpolation * (joint1.pos.x - joint0.pos.x)); tempJoint.pos.y = joint0.pos.y + (interpolation * (joint1.pos.y - joint0.pos.y)); tempJoint.pos.z = joint0.pos.z + (interpolation * (joint1.pos.z - joint0.pos.z)); // Interpolate orientations using spherical interpolation (Slerp) XMStoreFloat4(&tempJoint.orientation, XMQuaternionSlerp(joint0Orient, joint1Orient, interpolation)); animated_pos_arr.push_back(tempJoint.pos); animated_rot_arr.push_back(tempJoint.orientation); XMVECTOR arotarr = XMVectorSet(animated_rot_arr[i].x, animated_rot_arr[i].y, animated_rot_arr[i].z, animated_rot_arr[i].w); XMVECTOR bindrot = XMVectorSet(MD5Model.invbindrots[i].x, MD5Model.invbindrots[i].y, MD5Model.invbindrots[i].z, MD5Model.invbindrots[i].w); XMFLOAT4 rot; XMVECTOR quat = XMQuaternionMultiply(arotarr, bindrot); XMStoreFloat4(&rot, quat); skinned_rot_arr_.push_back(rot); XMVECTOR bindpos = XMVectorSet(MD5Model.invbindpos[i].x, MD5Model.invbindpos[i].y, MD5Model.invbindpos[i].z, 0); XMVECTOR mul = XMVectorMultiply(quat, bindpos); XMVECTOR aposarr = XMVectorSet(animated_pos_arr[i].x, animated_pos_arr[i].y, animated_pos_arr[i].z, 0); XMVECTOR plus = aposarr + mul; XMFLOAT3 newpos; XMStoreFloat3(&newpos, plus); skinned_pos_arr_.push_back(newpos); } MD5Model.skinned_pos_arr = skinned_pos_arr_; MD5Model.skinned_rot_arr = skinned_rot_arr_; } And This is Shader(HLSL) /////////////////////////////////////////////// // // Skin Texture_Color Shader // Desc: Skin Texture + Vertex Color // /////////////////////////////////////////////// ///////////// // Globals // ///////////// cbuffer MatrixBuffer { matrix worldMat; matrix 
viewMat; matrix projMat; float3 skinPos[128]; float4 skinRot[128]; }; ////////////// // TypeDefs // ////////////// struct VertexInputType { float4 position : POSITION; float2 tex : TEXCOORD0; float4 color : COLOR; uint4 boneIdx : BONEID; float4 weight : WEIGHT; }; struct PixelInputType { float4 position : SV_POSITION; float2 tex : TEXCOORD0; float4 color : COLOR; }; float3 trans_for_f(float3 v, float3 pos, float4 rot) { return v + 2.0 * cross(rot.xyz,cross(rot.xyz,v) + rot.w * v) + pos; } PixelInputType SkinTextureColorVertexShader(VertexInputType input) { int id0 = int(input.boneIdx.x); int id1 = int(input.boneIdx.y); int id2 = int(input.boneIdx.z); int id3 = int(input.boneIdx.w); PixelInputType output; // Change the position vector to be 4 units for proper matrix calculations. input.position.w = 1.0f; float3 result = trans_for_f(input.position, skinPos[id0], skinRot[id0]) * input.weight.x; result += trans_for_f(input.position, skinPos[id1], skinRot[id1]) * input.weight.y; result += trans_for_f(input.position, skinPos[id2], skinRot[id2]) * input.weight.z; result += trans_for_f(input.position, skinPos[id3], skinRot[id3]) * input.weight.w; //output.position = mul(inputpos, boneTransform); // Calculate the position of the vertex against the world, view, and projection matrices. float4 finalresult; finalresult.x = result.x; finalresult.y = result.y; finalresult.z = result.z; finalresult.w = 1.0f; output.position = mul(finalresult, worldMat); output.position = mul(output.position, viewMat); output.position = mul(output.position, projMat); // Store the input color for the pixel shader to use. output.color = input.color; // Store the texture coordinates for the pixel shader. output.tex = input.tex; return output; } Thanks! and sorry for bad english skills....
  14. Hi there, I am getting the following error when trying to copy a BC7 texture. Is it because BC7 is not supported? Is copying it from a compute shader into a new uncompressed UAV the only route? Thanks!
  15. Hi all, I ended up figuring out how to accomplish this! So, I had to set a constant buffer, I kept everything very simple.. This is my C++ representation of my Shader cbuffer struct ConstantBuffer { D3DXMATRIX projection; }; And here is my pointer to the constant buffer, as well as the matrix I will create using directx: ID3D11Buffer *pCBuffer; // the constant buffer D3DXMATRIX orthoMatrix; Here is the shader cbuffer: cbuffer ConstantBuffer : register(b0) { matrix projection; } Now, I had to map this stuff, this tutorial here was extremely helpful: https://docs.microsoft.com/en-us/windows/desktop/direct3d11/overviews-direct3d-11-resources-buffers-constant-how-to I was able to translate their example into my own code: ConstantBuffer cbuffer; D3DXMatrixOrthoLH(&orthoMatrix, SCREEN_WIDTH, SCREEN_HEIGHT, 0, 1); cbuffer.projection = orthoMatrix; // Fill in a buffer description. D3D11_BUFFER_DESC cbDesc; cbDesc.ByteWidth = sizeof(ConstantBuffer); cbDesc.Usage = D3D11_USAGE_DYNAMIC; cbDesc.BindFlags = D3D11_BIND_CONSTANT_BUFFER; cbDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE; cbDesc.MiscFlags = 0; cbDesc.StructureByteStride = 0; // Fill in the subresource data. D3D11_SUBRESOURCE_DATA InitData; InitData.pSysMem = &cbuffer; InitData.SysMemPitch = 0; InitData.SysMemSlicePitch = 0; dev->CreateBuffer(&cbDesc, &InitData, &pCBuffer); devcon->VSSetConstantBuffers(0, 1, &pCBuffer); Then, my updated shader code to apply the transformation: VOut VShader(float4 position : POSITION, float4 color : COLOR) { VOut output; output.position = mul(position, projection); output.color = color; return output; } Now, instead of using relative coordinates for my triangle vertices, I use some pixel coordinates: // create a triangle using the VERTEX struct Vertex OurVertices[] = { { D3DXVECTOR2(0, 100), D3DXCOLOR(1.0f, 0.0f, 0.0f, 1.0f) }, { D3DXVECTOR2(100, -100), D3DXCOLOR(0.0f, 1.0f, 0.0f, 1.0f) }, { D3DXVECTOR2(-100, -100), D3DXCOLOR(0.0f, 0.0f, 1.0f, 1.0f) } };
  16. Hi. Does anyone have an idea of what would be faster: reading a big texture, where each thread reads one sample from it, or reading from a much smaller buffer of, say, 64 floats, where every thread reads all 64 floats? The threads are running in a compute shader. Thanks.
  17. Hi, I'm trying to create my input layout with shader reflection, but there's something weird happening because in the thaditional form this is my output: Here's the code: D3D11_INPUT_ELEMENT_DESC layout[] = { { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 }, { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 } }; UINT numElements = ARRAYSIZE(layout); hr = m_ptrd3dDevice->CreateInputLayout(layout, numElements, m_ptrVSBlob->GetBufferPointer(), m_ptrVSBlob->GetBufferSize(), &m_ptrInputLayout); m_ptrVSBlob->Release(); if (FAILED(hr)) return hr; And when I create the input layout with shader reflection, this is my output: Here's the code: // Reflect shader info ID3D11ShaderReflection* pVertexShaderReflection = nullptr; if (FAILED(D3DReflect(m_ptrVSBlob->GetBufferPointer(), m_ptrVSBlob->GetBufferSize(), IID_ID3D11ShaderReflection, (void**)&pVertexShaderReflection))) { return S_FALSE; } // Get shader info D3D11_SHADER_DESC shaderDesc; pVertexShaderReflection->GetDesc(&shaderDesc); // Read input layout description from shader info std::vector<D3D11_INPUT_ELEMENT_DESC> inputLayoutDesc; for (UINT i = 0; i < shaderDesc.InputParameters; i++) { D3D11_SIGNATURE_PARAMETER_DESC paramDesc; pVertexShaderReflection->GetInputParameterDesc(i, &paramDesc); // Fill out input element desc D3D11_INPUT_ELEMENT_DESC elementDesc; elementDesc.SemanticName = paramDesc.SemanticName; elementDesc.SemanticIndex = paramDesc.SemanticIndex; elementDesc.InputSlot = 0; elementDesc.AlignedByteOffset = D3D11_APPEND_ALIGNED_ELEMENT; elementDesc.InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA; elementDesc.InstanceDataStepRate = 0; // determine DXGI format if (paramDesc.Mask == 1) { if (paramDesc.ComponentType == D3D_REGISTER_COMPONENT_UINT32) elementDesc.Format = DXGI_FORMAT_R32_UINT; else if (paramDesc.ComponentType == D3D_REGISTER_COMPONENT_SINT32) elementDesc.Format = DXGI_FORMAT_R32_SINT; else if (paramDesc.ComponentType == D3D_REGISTER_COMPONENT_FLOAT32) elementDesc.Format = DXGI_FORMAT_R32_FLOAT; } else if (paramDesc.Mask <= 3) { if (paramDesc.ComponentType == D3D_REGISTER_COMPONENT_UINT32) elementDesc.Format = DXGI_FORMAT_R32G32_UINT; else if (paramDesc.ComponentType == D3D_REGISTER_COMPONENT_SINT32) elementDesc.Format = DXGI_FORMAT_R32G32_SINT; else if (paramDesc.ComponentType == D3D_REGISTER_COMPONENT_FLOAT32) elementDesc.Format = DXGI_FORMAT_R32G32_FLOAT; } else if (paramDesc.Mask <= 7) { if (paramDesc.ComponentType == D3D_REGISTER_COMPONENT_UINT32) elementDesc.Format = DXGI_FORMAT_R32G32B32_UINT; else if (paramDesc.ComponentType == D3D_REGISTER_COMPONENT_SINT32) elementDesc.Format = DXGI_FORMAT_R32G32B32_SINT; else if (paramDesc.ComponentType == D3D_REGISTER_COMPONENT_FLOAT32) elementDesc.Format = DXGI_FORMAT_R32G32B32_FLOAT; } else if (paramDesc.Mask <= 15) { if (paramDesc.ComponentType == D3D_REGISTER_COMPONENT_UINT32) elementDesc.Format = DXGI_FORMAT_R32G32B32A32_UINT; else if (paramDesc.ComponentType == D3D_REGISTER_COMPONENT_SINT32) elementDesc.Format = DXGI_FORMAT_R32G32B32A32_SINT; else if (paramDesc.ComponentType == D3D_REGISTER_COMPONENT_FLOAT32) elementDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT; } // Save element desc inputLayoutDesc.push_back(elementDesc); } // Try to create Input Layout hr = m_ptrd3dDevice->CreateInputLayout(&inputLayoutDesc[0], inputLayoutDesc.size(), m_ptrVSBlob->GetBufferPointer(), m_ptrVSBlob->GetBufferSize(), &m_ptrInputLayout); //Free allocation shader reflection memory pVertexShaderReflection->Release(); 
pVertexShaderReflection = nullptr; if (FAILED(hr)) return hr;
  18. Hi everyone, here's a fun one. I've been using DX11 for ages and I've never seen anything like this happen before. I found a bug in an application I'm developing when changing the resolution in full-screen mode. In order to handle arbitrary display mode changes (such as changes to MSAA settings), I destroy and recreate the swap chain whenever the display mode changes (rather than just using IDXGISwapChain::ResizeBuffers and IDXGISwapChain::ResizeTarget). On application startup, the device and initial swapchain are created via D3D11CreateDeviceAndSwapChain. When the display mode changes, this initial swap chain is destroyed, and a new one is created using IDXGIFactory::CreateSwapChain, with a new DXGI_SWAP_CHAIN_DESC. What I've found is that, despite the new DXGI_SWAP_CHAIN_DESC being correct when supplied to IDXGIFactory::CreateSwapChain, if I then retrieve the desc from the resulting swap chain, it has values from the old one. For example, if the resolution is changed, the swap chain is created with new (correct) values for BufferDesc.Width and BufferDesc.Height, but this new swap chain contains the old values which the initial (now destroyed) swap chain desc had. Consequently the back buffer is the wrong size, which leads to obvious problems and errors. Has anyone encountered a similar situation, or can think of anything useful to investigate? Here's a simplified version of the code for the display mode change: m_pDeviceContext->ClearState(); m_pDeviceContext->OMSetRenderTargets(0, nullptr, nullptr); m_pSwapChain->Release(); m_pSwapChain = nullptr; IDXGIFactory *pFactory = nullptr; CheckResult(CreateDXGIFactory(__uuidof(IDXGIFactory), reinterpret_cast<void **>(&pFactory))); IDXGISwapChain *pSwapChain = nullptr; DXGI_SWAP_CHAIN_DESC swapChainDesc = CreateSwapChainDesc(...); // Returns populated DXGI_SWAP_CHAIN_DESC with correct values. CheckResult(pFactory->CreateSwapChain(impl.m_pDevice, &swapChainDesc, &pSwapChain)); DXGI_SWAP_CHAIN_DESC verifySwapChainDesc; ZeroMemory(&verifySwapChainDesc, sizeof(DXGI_SWAP_CHAIN_DESC)); pSwapChain->GetDesc(&verifySwapChainDesc); // swapChainDesc does not equal verifySwapChainDesc.
  19. So, I'm using PerspectiveOffCenterLH (in SharpDX, but that should not matter) to project perspective correct stuff into screen space. Up until last night, it was working well. I could translate on all axes, and I can rotate on the Z axis just fine. So I added y-axis rotation for giggles and that's when things went all weird. Every time I rotated about the Y-axis, my geometry (a simple quad) gets distorted. Now, I have to admit to being a dumbass when it comes to linear algebra, and it's been a very long time since I've had to deal with a projection matrix directly, so it could be that I'm using the wrong tool for the job due to my ignorance. Basically it looks like the vertices on each side stretch off into the distance (more and more as I rotate): I'd like to note that I have another app, where I'm using PerspectiveFovLH, and that's working just fine. So again, this very well could be a case of "the wrong tool for the job". Here's the code I'm using to build up my stuff: // Matrix construction code. Anchor is 0, 0 for now, so we can ignore it. // ViewDimensions is the width and height of the render target. var anchor = DX.Vector2.Zero; DX.Matrix.PerspectiveOffCenterLH(-anchor.X, ViewDimensions.Width - anchor.X, ViewDimensions.Height - anchor.Y, -anchor.Y, MinimumDepth, MaximumDepth, out ProjectionMatrix); // This is my code for combining my view + projection DX.Matrix.Multiply(ref ViewMatrix, ref ProjectionMatrix, out ViewProjectionMatrix); // And this is my code for building the view. DX.Matrix translation = DX.Matrix.Identity; DX.Matrix.RotationYawPitchRoll(_yaw.ToRadians(), 0, _roll.ToRadians(), out DX.Matrix rotation); // NOTE: This doesn't work either. //DX.Matrix.RotateY(_yaw.ToRadians()); DX.Matrix.Multiply(ref rotation, ref translation, out ViewMatrix); // My code in the vertex shader is pretty simple: Vertex output = input; output.position = mul(ViewProjection, output.position); return output; The order of operations is indeed correct. It works just fine when I don't rotate on the Y (or X - but that's not important for today) axis. So, yeah, can someone tell me if I'm dumb and using the wrong projection matrix type? And if I can, an explanation for it would be much appreciated to so I don't make this mistake again (don't go too math crazy, I'm old and my math skills are worse than ever - talk to me like I'm 5).
  20. Hi, we know that it is possible to modify a pixel's depth value using the "System Value" semantic SV_Depth in this way:

        struct PixelOut
        {
            float4 color : SV_Target;
            float depth : SV_Depth;
        };

        PixelOut PS(VertexOut pin)
        {
            PixelOut pout;
            // … usual pixel work
            pout.color = float4(litColor, alpha);
            // set pixel depth in normalized [0, 1] range
            pout.depth = pin.PosH.z - 0.05f;
            return pout;
        }

    As many post-effects require the depth value of the current pixel (such as fog or screen-space reflection), we need to acquire it in the PS. A common way to do that is to render the depth value to a separate texture and sample it in the PS. But I find this method a bit clumsy, because we already have the depth value stored in the depth-stencil buffer, so I wonder whether it is possible to access the NATIVE depth buffer instead of ANOTHER depth texture. I found this on MSDN: https://docs.microsoft.com/en-us/windows/desktop/direct3dhlsl/dx-graphics-hlsl-semantics which mentions READING depth data in a shader. I tried this in Unity:

        half4 frag (Vert2Frag v2f, float depth : SV_Depth) : SV_Target
        {
            return half4(depth, depth, depth, 1);
        }

    However, it turns out to be a pure white image, which means the depth values in all pixels are 1. So is the MSDN wrong? Is it possible to sample the NATIVE depth buffer? Thanks! (A D3D11 binding sketch follows below.)
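    For the plain D3D11 case (this does not answer the Unity-specific part), a common way to read the existing depth buffer in a later pass without writing depth to a second texture is to create the depth resource with a typeless format and give it both a depth-stencil view and a shader resource view. A hedged sketch with assumed names, using the usual 24-bit depth / 8-bit stencil formats:

        #include <d3d11.h>

        // Create a depth buffer that can also be sampled as a shader resource
        // (bind the SRV only while the DSV is not bound for writing).
        HRESULT CreateReadableDepthBuffer(ID3D11Device* dev, UINT width, UINT height,
                                          ID3D11Texture2D** ppTex,
                                          ID3D11DepthStencilView** ppDSV,
                                          ID3D11ShaderResourceView** ppSRV)
        {
            D3D11_TEXTURE2D_DESC texDesc = {};
            texDesc.Width = width;
            texDesc.Height = height;
            texDesc.MipLevels = 1;
            texDesc.ArraySize = 1;
            texDesc.Format = DXGI_FORMAT_R24G8_TYPELESS;         // typeless so both views are legal
            texDesc.SampleDesc.Count = 1;
            texDesc.Usage = D3D11_USAGE_DEFAULT;
            texDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
            HRESULT hr = dev->CreateTexture2D(&texDesc, nullptr, ppTex);
            if (FAILED(hr)) return hr;

            D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
            dsvDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;      // depth-stencil view of the same memory
            dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
            hr = dev->CreateDepthStencilView(*ppTex, &dsvDesc, ppDSV);
            if (FAILED(hr)) return hr;

            D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
            srvDesc.Format = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;  // read depth as a 0..1 value in the PS
            srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
            srvDesc.Texture2D.MipLevels = 1;
            return dev->CreateShaderResourceView(*ppTex, &srvDesc, ppSRV);
        }

    The SRV can then be bound to the post-effect pixel shader like any other Texture2D; the sampled value is the non-linear device depth, so it usually has to be linearized with the projection parameters before use.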
  21. Tape_Worm

    Gorgon v3 – Animation

    I got the rework of the animation system for v3 done and up on the git hubs. Naturally, I took this awesome video of it. It’s a music video. But not just any music video. A very bad, cheesy 80’s music video (the best kind). Of course, the music is metal \m/ (done, very poorly, by yours truly). Anyway, that’s all. View the full article
  22. Hello, I made a directional light with diffuse. Now if I rotate my camera, I see changes; some parts naturally become darker. I did not pass parameters from variables; I put them in directly to see if that works. My problem: if I rotate or move the camera I see changes, but if I rotate or move the mesh I can't see changes. What should I add to see the differences when the mesh is rotated? Texture2D ShaderTexture : register(t0); Texture2D ShaderTextureNormal : register(t1); SamplerState Sampler : register(s0); cbuffer CBufferPerFrame{ float4 AmbientColor = float4(1.0f,1.0f,1.0f,0.0f); float4 LightColor = float4(1.0f,1.0f,1.0f,1.0f); float3 lightDirection = float3(0.0f,0.0f,-1.0f); } cbuffer CBufferPerObject{ float4x4 worldViewProj; float4x4 world; } struct VS_IN { float4 pos : POSITION; float3 normal : NORMAL; // Normal - for lighting float4 col : COLOR; float2 TextureUV: TEXCOORD0; // Texture UV coordinate }; struct PS_IN { float4 pos : SV_POSITION; float4 col : COLOR; float2 TextureUV: TEXCOORD; float3 normal : NORMAL; float3 tangent : TANGENT; float3 binormal : BINORMAL; float3 lightDirection:LIGHT; float3 WorldPos:POSITION1; }; PS_IN VS( VS_IN input ) { PS_IN output = (PS_IN)0; output.pos = mul(input.pos, worldViewProj); output.col = 1.0f-((input.pos.w /* * input.col*/) / (input.pos.z /* *input.col*/)); output.TextureUV = input.TextureUV; output.normal = normalize(mul(float4(input.normal,0), world).xyz); output.lightDirection=float3(0.0f,0.0f,-1.0f); output.tangent = CalculateTangent(input.normal); output.col = input.col; output.WorldPos = mul( input.pos, world ); return output; } float4 PS( PS_IN input ) : SV_Target { float3 sampledNormal = (2*ShaderTextureNormal.Sample(Sampler,input.TextureUV).xyz) - 1.0f; float3x3 tbn = float3x3(input.tangent, input.binormal, input.normal); sampledNormal = mul(sampledNormal, tbn); float4 N = ShaderTextureNormal.Sample(Sampler, input.TextureUV)*2.0f-1.0f; float4 D = ShaderTexture.Sample(Sampler, input.TextureUV); float3 lightDirection = normalize(-1*input.lightDirection); float n_dot_1 = dot (lightDirection, input.normal); float3 ambient = D.rgb * float3(1.0f,1.0f,1.0f) * 0; float3 diffuse=(float3)0; if( N.x == -1 && N.y == -1 && N.z == -1 && N.w == -1) { //HERE if(n_dot_1>0){ diffuse = D.rgb* float3(1,1,1) * 1 * n_dot_1; } input.col.rgb=(ambient+ diffuse); input.col.a = D.a; } else { //Not used for now. //input.col = saturate(dot(N,input.lightDirection)); } return input.col; }
  23. I am using two DirectX devices, one to render in my native C++ plugin and the other belonging to Unity. I do the rendering in my plugin, but then I send the render target to the Unity graphics pipeline. This works generally, but sometimes there is a flicker in the resulting render. I recorded it and slowed it down, and it appears as if the frame is still in the process of being rendered: one mesh has missing triangles, as if it was in the process of drawing the rest of the triangles, and most of the rest of the meshes are completely missing. My question is, how do I force the render to finish before it is accessed again in the shared resource from the Unity rendering pipeline? I am already using a mutex lock but that doesn't seem to work; it seems like I need to synchronize on the graphics card. I used an IDXGIKeyedMutex but that actually stopped everything from rendering for some reason; maybe I used it wrong, I don't know. Any help would be much appreciated. (A keyed-mutex sketch follows below.)
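    A hedged sketch of the usual keyed-mutex handshake on a shared texture. It assumes the texture was created with D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX and that both devices agree on the key values (0 here); names are placeholders, and this is not a claim about the exact cause of the flicker above.

        #include <d3d11.h>
        #include <dxgi.h>

        // Producer-side use of IDXGIKeyedMutex on a shared texture. The consumer
        // (the other device) acquires/releases with the same keys on its view
        // of the resource, and the driver orders the GPU work across devices.
        void RenderIntoSharedTexture(ID3D11DeviceContext* ctx, ID3D11Texture2D* sharedTex,
                                     void (*drawScene)(ID3D11DeviceContext*))
        {
            IDXGIKeyedMutex* keyedMutex = nullptr;
            if (FAILED(sharedTex->QueryInterface(__uuidof(IDXGIKeyedMutex),
                                                 reinterpret_cast<void**>(&keyedMutex))))
                return;

            if (keyedMutex->AcquireSync(0, INFINITE) == S_OK) // wait until the other side released key 0
            {
                drawScene(ctx);             // issue the draw calls that write to the shared texture
                ctx->Flush();               // submit the GPU work before handing the texture over
                keyedMutex->ReleaseSync(0); // let the consumer acquire it
            }
            keyedMutex->Release();
        }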
  24. I'm taking my first steps in programming with Direct3D. I have a very basic pipeline setup, and all I want to get from it is an antialiased, smooth image. But I get this: first, I can't get rid of the staircase effect even though I have 4x MSAA enabled in my pipeline (DXGI_SAMPLE_DESC::Count is 4 and Quality is 0). And second, I get noisy texturing even though I have mipmaps generated and LINEAR filtering set in the sampler state. Am I missing something or doing something wrong? I would appreciate any advice on that. Here is my code: 1) Renderer class: 2) Vertex shader: 3) Pixel shader: Thank you in advance! (An MSAA capability check is sketched below.)
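    Since the attached code is not shown here, this is not a diagnosis, just a hedged sketch of the checks commonly made when enabling MSAA in D3D11: query the supported quality levels for the format, and use the same DXGI_SAMPLE_DESC on the swap chain (or offscreen render target) and on the depth-stencil texture. Names are placeholders.

        #include <d3d11.h>

        // Query 4x MSAA support for a format and build a sample desc that must
        // match on the render target and the depth-stencil texture.
        bool QueryMsaa4x(ID3D11Device* dev, DXGI_FORMAT format, DXGI_SAMPLE_DESC* outSampleDesc)
        {
            UINT qualityLevels = 0;
            if (FAILED(dev->CheckMultisampleQualityLevels(format, 4, &qualityLevels)) || qualityLevels == 0)
                return false;   // 4x MSAA not supported for this format

            outSampleDesc->Count = 4;
            outSampleDesc->Quality = 0; // any value in [0, qualityLevels - 1] is valid; 0 is the baseline
            return true;
        }

    With multisampling enabled, the depth buffer's sample description has to match the render target's, and the corresponding views use the TEXTURE2DMS dimension; a mismatch causes errors when the render target and depth views are bound together.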
  25. I've been out of the loop for a while and just noticed that term. I tried looking it up on google, but the closest answer was a gamedev topic with a similar title as mine. I read it, but didn't get a full explanation. Is it a method of swapping/replacing data in GPU memory with texture data in CPU memory?