Search the Community

Showing results for tags 'DX12'.



Found 208 results

  1. Dynamic resource reloading

    Making editors is a pain. I have a list of thousands of things I'd rather do than this - yet I made myself a promise to drag at least one full-featured editor tool over the finish line. There are a few reasons for that: I believe I have a quite useful engine; it has been my pet project all these years and has gone through many transformations and stages - and a solid tool is something of a goal I'd like to reach with it, to make it something better than "just a framework". I'm a very patient person, and I believe a hard-working one too. Throughout the years my goal has been to make a game on my own engine (note: I've made games with other engines and I've used my engine for multiple non-game projects so far - it eventually branched into a full-featured commercial project in the past few years). I've made a few attempts but was mostly stopped by the lack of such a tool - one that would allow me to build scenes and levels in an easy way. And the most important reason... I consider tools one of the hardest parts of making any larger project, so it is something of a challenge for me. Anyway, so much for motivation. The tool is progressing well - it can already be used to assemble a scene, and various entities (like lights or materials) can have their properties (components) modified, with a full undo/redo system of course. And so the next big part was ahead of me - asset loading and dynamic reloading. So here are the results: engine editor and texture editor before my work on the texture. And then I worked on the texture. And after I used my highly professional programmer-art skills to modify the texture! All credits for the GameDev.net logo go to its author! Yes, it's working. The whole system needs a bit of cleanup, but in short this is how it works:
    • All textures are managed by a Manager<Texture> class instance, which is defined in the Editor class.
    • A thread waits for changes on the hard drive with ReadDirectoryChangesW.
    • Upon a change in the directory (or subdirectories), a DirectoryTree class instance is notified. It updates the view in the bottom left (which is just a directory-file structure for the watched directory and subdirectories), and for modified/new files it also creates or reloads records in the Manager<Texture> instance (at Editor level).
    • The trick is that reloading the records can only be done while they're not in use (so some clever synchronization needs to be done).
    I might write up some interesting information or even a short article on this. Implementing it was quite a pain, but it's finally done. Now a short cleanup - and on to the next item on my editor todo list! Thanks for reading & see you around!
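    For readers who haven't used it, below is a minimal sketch of the kind of watcher thread ReadDirectoryChangesW enables. The OnFileChanged callback and the function name are placeholders of mine, not the author's actual classes; the real editor forwards changes to DirectoryTree and Manager<Texture> as described above.

    #include <windows.h>
    #include <string>

    // Hypothetical callback invoked for every change notification.
    void OnFileChanged(const std::wstring& dir, const std::wstring& file, DWORD action);

    // Runs on a worker thread; blocks in ReadDirectoryChangesW until something
    // under 'dir' (including subdirectories) is created, renamed or written.
    void WatchDirectory(const std::wstring& dir)
    {
        HANDLE hDir = CreateFileW(dir.c_str(), FILE_LIST_DIRECTORY,
            FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE, nullptr,
            OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, nullptr);
        if (hDir == INVALID_HANDLE_VALUE)
            return;

        alignas(DWORD) BYTE buffer[64 * 1024];
        DWORD bytesReturned = 0;

        while (ReadDirectoryChangesW(hDir, buffer, sizeof(buffer), TRUE /*watch subtree*/,
            FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_DIR_NAME | FILE_NOTIFY_CHANGE_LAST_WRITE,
            &bytesReturned, nullptr, nullptr))
        {
            auto* info = reinterpret_cast<FILE_NOTIFY_INFORMATION*>(buffer);
            for (;;)
            {
                std::wstring file(info->FileName, info->FileNameLength / sizeof(WCHAR));
                OnFileChanged(dir, file, info->Action);
                if (info->NextEntryOffset == 0)
                    break;
                info = reinterpret_cast<FILE_NOTIFY_INFORMATION*>(
                    reinterpret_cast<BYTE*>(info) + info->NextEntryOffset);
            }
        }
        CloseHandle(hDir);
    }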
  2. Hi, new here. I need some help. My fiancée and I like to play a mobile game online that runs in real time. She and I are always working, but when we have free time we like to play this game. We don't always have time throughout the day to queue buildings, troops, upgrades, etc. I was told to look into DLL injection and OpenGL/DirectX hooking. Is this true? Is this what I need to learn? How do I read the Android files, modify the files, or get the in-game tags/variables for the game I want? Any assistance on this would be most appreciated. I've been everywhere and it seems no one knows or is too lazy to help me out. It would be nice to have assistance for once. I don't know what I need to learn, so links to the topics I need to learn in the comment section would be so helpful. Anything to just get me started. Thanks, Dejay Hextrix
  3. As far as I know, the size of XMMATRIX must be 64 bytes, which is way too big to be returned by a function. However, DirectXMath functions do return this struct. I suppose this has something to do with the SIMD optimization. Should I return this huge struct from my own functions or should I pass it by a reference or pointer? This question will look silly to you if you know how SIMD works, but I don't.
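    For reference, returning XMMATRIX by value is exactly what DirectXMath itself does; the documented convention is to mark such functions XM_CALLCONV, take the first XMMATRIX parameter as FXMMATRIX and any further ones as CXMMATRIX, so the compiler can keep the SIMD data in registers where the calling convention allows. Returning it by value from your own functions follows the same pattern. A minimal sketch (the function and its arguments are just an illustration):

    #include <DirectXMath.h>
    using namespace DirectX;

    // Returned by value, as DirectXMath's own factory functions do.
    XMMATRIX XM_CALLCONV MakeWorld(FXMMATRIX scale, CXMMATRIX rotation, CXMMATRIX translation)
    {
        return XMMatrixMultiply(XMMatrixMultiply(scale, rotation), translation);
    }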
  4. I am looking for some example projects and tutorials using SharpDX, in particular DX12 examples using SharpDX. I have only found a few, among them the port of Microsoft's D3D12 Hello World examples (https://github.com/RobyDX/SharpDX_D3D12HelloWorld) and Johan Falk's tutorials (http://www.johanfalk.eu/). For instance, I would like to see an example of how to use multisampling and how to debug using SharpDX and DX12. Let me know if you have any useful examples. Thanks!
  5. I'm writing a 3D engine using SharpDX and DX12. It takes a handle to a System.Windows.Forms.Control for drawing onto. This handle is used when creating the swapchain (it's set as the OutputHandle in the SwapChainDescription). After rendering I want to give up this control to another renderer (for instance a GDI renderer), so I dispose various objects, among them the swapchain. However, no other renderer seems to be able to draw on this control after my DX12 renderer has used it. I see no exceptions or strange behaviour when debugging the other renderers trying to draw, except that nothing gets drawn to the area. If I then switch back to my DX12 renderer it can still draw to the control, but no other renderers seem to be able to. If I don't use my DX12 renderer, then I am able to switch between other renderers with no problem. My DX12 renderer is clearly messing up something in the control somehow, but what could I be doing wrong with just SharpDX calls? I read a tip about not disposing when in fullscreen mode, but I don't use fullscreen, so it can't be that. Anyway, my question is: how do I properly release this handle to my control so that others can draw to it later? Disposing things doesn't seem to be enough.
  6. I am confused about why this code works, because the lights array is not 16-byte aligned. struct Light { float4 position; float radius; float intensity; // How does this work without adding // uint _pad0, _pad1; }; cbuffer lightData : register(b0) { uint lightCount; uint _pad0; uint _pad1; uint _pad2; // Shouldn't the shader be unable to read the second element in the light struct, // because after float intensity we need 8 more bytes to make it 16-byte aligned? Light lights[NUM_LIGHTS]; } This has erased everything I thought I knew about constant buffer alignment. Any explanation will help clear my head. Thank you
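    A note on why this can still line up (this is the general HLSL packing rule, not a review of the poster's full code): inside a cbuffer, every element of an array starts on a 16-byte boundary, so the compiler silently pads each Light to 32 bytes per array element; the CPU-side struct just has to match that stride. A sketch of a matching C++ layout under that assumption, with an illustrative NUM_LIGHTS:

    #include <DirectXMath.h>
    #include <cstdint>

    constexpr int NUM_LIGHTS = 8;   // illustrative value

    struct LightCPU
    {
        DirectX::XMFLOAT4 position;   // bytes  0..15
        float             radius;     // bytes 16..19
        float             intensity;  // bytes 20..23
        float             _pad[2];    // bytes 24..31: padding the compiler adds per array element
    };
    static_assert(sizeof(LightCPU) == 32, "must match the HLSL array element stride");

    struct LightDataCPU
    {
        uint32_t lightCount;
        uint32_t _pad0, _pad1, _pad2; // keeps lights[] at a 16-byte offset, as in the HLSL
        LightCPU lights[NUM_LIGHTS];
    };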
  7. I don't know in advance the total number of textures my app will be using. I wanted to use this approach but it turned out to be impractical because D3D11 hardware may not allow binding more than 128 SRVs to the shaders. Next I decided to keep all the texture SRV's in a default heap that is invisible to the shaders, and when I need to render a texture I would copy its SRV from the invisible heap to another heap that is bound to the pixel shader, but this also seems impractical because ID3D12Device::CopyDescriptorsSimple cannot be used in a command list. It executes immediately when it is called. I would need to close, execute and reset the command list every time I need to switch the texture. What is the correct way to do this?
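    For what it's worth, the usual pattern is exactly the staging idea described above: CopyDescriptorsSimple is a CPU-side call, so it never goes into a command list; it is simply called before (or while) recording, with the only requirement being that the destination slots in the shader-visible heap are not still referenced by in-flight GPU work (hence per-frame sub-ranges). A hedged sketch with illustrative names:

    #include <d3d12.h>

    // 'stagingHeap' is the CPU-only heap holding every SRV, 'shaderVisibleHeap' is
    // the heap bound to the pipeline, 'dstSlot' is a slot known not to be in use by
    // in-flight GPU work (e.g. part of this frame's range).
    void BindTextureSrv(ID3D12Device* device, ID3D12GraphicsCommandList* commandList,
                        ID3D12DescriptorHeap* stagingHeap, ID3D12DescriptorHeap* shaderVisibleHeap,
                        UINT textureSrvIndex, UINT dstSlot, UINT rootParamIndex)
    {
        const UINT inc = device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);

        D3D12_CPU_DESCRIPTOR_HANDLE src = stagingHeap->GetCPUDescriptorHandleForHeapStart();
        src.ptr += SIZE_T(textureSrvIndex) * inc;
        D3D12_CPU_DESCRIPTOR_HANDLE dst = shaderVisibleHeap->GetCPUDescriptorHandleForHeapStart();
        dst.ptr += SIZE_T(dstSlot) * inc;

        // CPU-side copy: happens immediately, independent of command-list recording.
        device->CopyDescriptorsSimple(1, dst, src, D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);

        // The command list then points its descriptor table at the shader-visible copy.
        D3D12_GPU_DESCRIPTOR_HANDLE gpu = shaderVisibleHeap->GetGPUDescriptorHandleForHeapStart();
        gpu.ptr += UINT64(dstSlot) * inc;
        commandList->SetDescriptorHeaps(1, &shaderVisibleHeap);
        commandList->SetGraphicsRootDescriptorTable(rootParamIndex, gpu);
    }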
  8. I'm currently learning how to store multiple objects in a single vertex buffer for efficiency reasons. So far I have a cube and pyramid rendered using ID3D12GraphicsCommandList::DrawIndexedInstanced; but when the screen is drawn, I can't see the pyramid because it is drawn inside the cube. I'm told to "Use the world transformation matrix so that the box and pyramid are disjoint in world space". Can anyone give insight on how this is accomplished? First I init the verts in Local Space std::array<VPosData, 13> vertices = { //Cube VPosData({ XMFLOAT3(-1.0f, -1.0f, -1.0f) }), VPosData({ XMFLOAT3(-1.0f, +1.0f, -1.0f) }), VPosData({ XMFLOAT3(+1.0f, +1.0f, -1.0f) }), VPosData({ XMFLOAT3(+1.0f, -1.0f, -1.0f) }), VPosData({ XMFLOAT3(-1.0f, -1.0f, +1.0f) }), VPosData({ XMFLOAT3(-1.0f, +1.0f, +1.0f) }), VPosData({ XMFLOAT3(+1.0f, +1.0f, +1.0f) }), VPosData({ XMFLOAT3(+1.0f, -1.0f, +1.0f) }), //Pyramid VPosData({ XMFLOAT3(-1.0f, -1.0f, -1.0f) }), VPosData({ XMFLOAT3(-1.0f, -1.0f, +1.0f) }), VPosData({ XMFLOAT3(+1.0f, -1.0f, -1.0f) }), VPosData({ XMFLOAT3(+1.0f, -1.0f, +1.0f) }), VPosData({ XMFLOAT3(0.0f, +1.0f, 0.0f) }) } Then data is stored into a container so sub meshes can be drawn individually SubmeshGeometry submesh; submesh.IndexCount = (UINT)indices.size(); submesh.StartIndexLocation = 0; submesh.BaseVertexLocation = 0; SubmeshGeometry pyramid; pyramid.IndexCount = (UINT)indices.size(); pyramid.StartIndexLocation = 36; pyramid.BaseVertexLocation = 8; mBoxGeo->DrawArgs["box"] = submesh; mBoxGeo->DrawArgs["pyramid"] = pyramid; Objects are drawn mCommandList->DrawIndexedInstanced( mBoxGeo->DrawArgs["box"].IndexCount, 1, 0, 0, 0); mCommandList->DrawIndexedInstanced( mBoxGeo->DrawArgs["pyramid"].IndexCount, 1, 36, 8, 0); Vertex Shader cbuffer cbPerObject : register(b0) { float4x4 gWorldViewProj; }; struct VertexIn { float3 PosL : POSITION; float4 Color : COLOR; }; struct VertexOut { float4 PosH : SV_POSITION; float4 Color : COLOR; }; VertexOut VS(VertexIn vin) { VertexOut vout; // Transform to homogeneous clip space. vout.PosH = mul(float4(vin.PosL, 1.0f), gWorldViewProj); // Just pass vertex color into the pixel shader. vout.Color = vin.Color; return vout; } float4 PS(VertexOut pin) : SV_Target { return pin.Color; }
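    One way to read that exercise (a sketch of the idea, not the book's exact code): give each object its own world matrix, upload a separate world * view * proj for each, and repoint the b0 constant buffer between the two draw calls. The root-CBV and upload-buffer details below are assumptions for illustration; the same idea works with a per-object descriptor table.

    #include <DirectXMath.h>
    using namespace DirectX;

    struct ObjectConstants { XMFLOAT4X4 WorldViewProj; };

    // Transpose because HLSL constant buffers default to column-major matrices.
    inline void XM_CALLCONV StoreWvp(ObjectConstants& dst, FXMMATRIX world, CXMMATRIX view, CXMMATRIX proj)
    {
        XMStoreFloat4x4(&dst.WorldViewProj, XMMatrixTranspose(world * view * proj));
    }

    // Per frame (comments sketch the draw loop; cbGpuVA is an upload buffer with
    // one 256-byte-aligned slot per object and b0 bound as a root CBV):
    //   XMMATRIX boxWorld     = XMMatrixIdentity();                      // box at the origin
    //   XMMATRIX pyramidWorld = XMMatrixTranslation(4.0f, 0.0f, 0.0f);   // pyramid pushed aside
    //   StoreWvp(slot[0], boxWorld,     view, proj);   // copy into upload buffer slot 0
    //   StoreWvp(slot[1], pyramidWorld, view, proj);   // copy into upload buffer slot 1
    //   mCommandList->SetGraphicsRootConstantBufferView(0, cbGpuVA + 0);
    //   mCommandList->DrawIndexedInstanced(mBoxGeo->DrawArgs["box"].IndexCount, 1, 0, 0, 0);
    //   mCommandList->SetGraphicsRootConstantBufferView(0, cbGpuVA + 256);
    //   mCommandList->DrawIndexedInstanced(mBoxGeo->DrawArgs["pyramid"].IndexCount, 1, 36, 8, 0);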
  9. Hello! I would like to introduce Diligent Engine, a project that I've recently been working on. Diligent Engine is a lightweight cross-platform abstraction layer between the application and the platform-specific graphics API. Its main goal is to take advantage of next-generation APIs such as Direct3D12 and Vulkan, but at the same time provide support for older platforms via Direct3D11, OpenGL and OpenGLES. Diligent Engine exposes a common front-end for all supported platforms and provides interoperability with the underlying native API. It also supports integration with Unity and is designed to be used as a graphics subsystem in a standalone game engine, a Unity native plugin or any other 3D application. It is distributed under the Apache 2.0 license and is free to use. Full source code is available for download on GitHub. The engine contains a shader source code converter that allows shaders authored in HLSL to be translated to GLSL. The engine currently supports Direct3D11, Direct3D12, and OpenGL/GLES on Win32, Universal Windows and Android platforms. API Basics Initialization The engine can perform initialization of the API or attach to an already existing D3D11/D3D12 device or OpenGL/GLES context. For instance, the following code shows how the engine can be initialized in D3D12 mode: #include "RenderDeviceFactoryD3D12.h" using namespace Diligent; // ... GetEngineFactoryD3D12Type GetEngineFactoryD3D12 = nullptr; // Load the dll and import GetEngineFactoryD3D12() function LoadGraphicsEngineD3D12(GetEngineFactoryD3D12); auto *pFactoryD3D12 = GetEngineFactoryD3D12(); EngineD3D12Attribs EngD3D12Attribs; EngD3D12Attribs.CPUDescriptorHeapAllocationSize[0] = 1024; EngD3D12Attribs.CPUDescriptorHeapAllocationSize[1] = 32; EngD3D12Attribs.CPUDescriptorHeapAllocationSize[2] = 16; EngD3D12Attribs.CPUDescriptorHeapAllocationSize[3] = 16; EngD3D12Attribs.NumCommandsToFlushCmdList = 64; RefCntAutoPtr<IRenderDevice> pRenderDevice; RefCntAutoPtr<IDeviceContext> pImmediateContext; SwapChainDesc SwapChainDesc; RefCntAutoPtr<ISwapChain> pSwapChain; pFactoryD3D12->CreateDeviceAndContextsD3D12( EngD3D12Attribs, &pRenderDevice, &pImmediateContext, 0 ); pFactoryD3D12->CreateSwapChainD3D12( pRenderDevice, pImmediateContext, SwapChainDesc, hWnd, &pSwapChain ); Creating Resources Device resources are created by the render device. The two main resource types are buffers, which represent linear memory, and textures, which use memory layouts optimized for fast filtering. To create a buffer, you need to populate the BufferDesc structure and call IRenderDevice::CreateBuffer().
The following code creates a uniform (constant) buffer: BufferDesc BuffDesc; BuffDesc.Name = "Uniform buffer"; BuffDesc.BindFlags = BIND_UNIFORM_BUFFER; BuffDesc.Usage = USAGE_DYNAMIC; BuffDesc.uiSizeInBytes = sizeof(ShaderConstants); BuffDesc.CPUAccessFlags = CPU_ACCESS_WRITE; m_pDevice->CreateBuffer( BuffDesc, BufferData(), &m_pConstantBuffer ); Similarly, to create a texture, populate the TextureDesc structure and call IRenderDevice::CreateTexture() as in the following example: TextureDesc TexDesc; TexDesc.Name = "My texture 2D"; TexDesc.Type = TEXTURE_TYPE_2D; TexDesc.Width = 1024; TexDesc.Height = 1024; TexDesc.Format = TEX_FORMAT_RGBA8_UNORM; TexDesc.Usage = USAGE_DEFAULT; TexDesc.BindFlags = BIND_SHADER_RESOURCE | BIND_RENDER_TARGET | BIND_UNORDERED_ACCESS; TexDesc.Name = "Sample 2D Texture"; m_pRenderDevice->CreateTexture( TexDesc, TextureData(), &m_pTestTex ); Initializing Pipeline State Diligent Engine follows the Direct3D12 style of configuring the graphics/compute pipeline. One big Pipeline State Object (PSO) encompasses all required states (all shader stages, input layout description, depth-stencil, rasterizer and blend state descriptions, etc.). Creating Shaders To create a shader, populate the ShaderCreationAttribs structure. An important member is ShaderCreationAttribs::SourceLanguage. The following are valid values for this member: SHADER_SOURCE_LANGUAGE_DEFAULT - The shader source format matches the underlying graphics API: HLSL for D3D11 or D3D12 mode, and GLSL for OpenGL and OpenGLES modes. SHADER_SOURCE_LANGUAGE_HLSL - The shader source is in HLSL. For OpenGL and OpenGLES modes, the source code will be converted to GLSL. See shader converter for details. SHADER_SOURCE_LANGUAGE_GLSL - The shader source is in GLSL. There is currently no GLSL to HLSL converter. To allow grouping of resources based on the frequency of expected change, Diligent Engine introduces a classification of shader variables: Static variables (SHADER_VARIABLE_TYPE_STATIC) are variables that are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera attributes or global light attributes constant buffers. Mutable variables (SHADER_VARIABLE_TYPE_MUTABLE) define resources that are expected to change with per-material frequency. Examples may include diffuse textures, normal maps etc. Dynamic variables (SHADER_VARIABLE_TYPE_DYNAMIC) are expected to change frequently and randomly. This post describes the resource binding model in Diligent Engine.
The following is an example of shader initialization: ShaderCreationAttribs Attrs; Attrs.Desc.Name = "MyPixelShader"; Attrs.FilePath = "MyShaderFile.fx"; Attrs.SearchDirectories = "shaders;shaders\\inc;"; Attrs.EntryPoint = "MyPixelShader"; Attrs.Desc.ShaderType = SHADER_TYPE_PIXEL; Attrs.SourceLanguage = SHADER_SOURCE_LANGUAGE_HLSL; BasicShaderSourceStreamFactory BasicSSSFactory(Attrs.SearchDirectories); Attrs.pShaderSourceStreamFactory = &BasicSSSFactory; ShaderVariableDesc ShaderVars[] = { {"g_StaticTexture", SHADER_VARIABLE_TYPE_STATIC}, {"g_MutableTexture", SHADER_VARIABLE_TYPE_MUTABLE}, {"g_DynamicTexture", SHADER_VARIABLE_TYPE_DYNAMIC} }; Attrs.Desc.VariableDesc = ShaderVars; Attrs.Desc.NumVariables = _countof(ShaderVars); Attrs.Desc.DefaultVariableType = SHADER_VARIABLE_TYPE_STATIC; StaticSamplerDesc StaticSampler; StaticSampler.Desc.MinFilter = FILTER_TYPE_LINEAR; StaticSampler.Desc.MagFilter = FILTER_TYPE_LINEAR; StaticSampler.Desc.MipFilter = FILTER_TYPE_LINEAR; StaticSampler.TextureName = "g_MutableTexture"; Attrs.Desc.NumStaticSamplers = 1; Attrs.Desc.StaticSamplers = &StaticSampler; ShaderMacroHelper Macros; Macros.AddShaderMacro("USE_SHADOWS", 1); Macros.AddShaderMacro("NUM_SHADOW_SAMPLES", 4); Macros.Finalize(); Attrs.Macros = Macros; RefCntAutoPtr<IShader> pShader; m_pDevice->CreateShader( Attrs, &pShader ); Creating the Pipeline State Object To create a pipeline state object, define an instance of the PipelineStateDesc structure. The structure defines the pipeline specifics such as whether the pipeline is a compute pipeline, the number and format of render targets, and the depth-stencil format: // This is a graphics pipeline PSODesc.IsComputePipeline = false; PSODesc.GraphicsPipeline.NumRenderTargets = 1; PSODesc.GraphicsPipeline.RTVFormats[0] = TEX_FORMAT_RGBA8_UNORM_SRGB; PSODesc.GraphicsPipeline.DSVFormat = TEX_FORMAT_D32_FLOAT; The structure also defines depth-stencil, rasterizer, blend state, input layout and other parameters. For instance, rasterizer state can be defined as in the code snippet below: // Init rasterizer state RasterizerStateDesc &RasterizerDesc = PSODesc.GraphicsPipeline.RasterizerDesc; RasterizerDesc.FillMode = FILL_MODE_SOLID; RasterizerDesc.CullMode = CULL_MODE_NONE; RasterizerDesc.FrontCounterClockwise = True; RasterizerDesc.ScissorEnable = True; //RSDesc.MultisampleEnable = false; // do not allow msaa (fonts would be degraded) RasterizerDesc.AntialiasedLineEnable = False; When all fields are populated, call IRenderDevice::CreatePipelineState() to create the PSO: m_pDev->CreatePipelineState(PSODesc, &m_pPSO); Binding Shader Resources Shader resource binding in Diligent Engine is based on grouping variables into 3 different groups (static, mutable and dynamic). Static variables are variables that are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera attributes or global light attributes constant buffers.
They are bound directly to the shader object: PixelShader->GetShaderVariable( "g_tex2DShadowMap" )->Set( pShadowMapSRV ); Mutable and dynamic variables are bound via a new object called Shader Resource Binding (SRB), which is created by the pipeline state: m_pPSO->CreateShaderResourceBinding(&m_pSRB); Dynamic and mutable resources are then bound through the SRB object: m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "tex2DDiffuse")->Set(pDiffuseTexSRV); m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "cbRandomAttribs")->Set(pRandomAttrsCB); The difference between mutable and dynamic resources is that mutable ones can only be set once for every instance of a shader resource binding. Dynamic resources can be set multiple times. It is important to properly set the variable type as this may affect performance. Static variables are generally the most efficient, followed by mutable. Dynamic variables are the most expensive from a performance point of view. This post explains shader resource binding in more detail. Setting the Pipeline State and Invoking Draw Command Before any draw command can be invoked, all required vertex and index buffers as well as the pipeline state should be bound to the device context: // Clear render target const float zero[4] = {0, 0, 0, 0}; m_pContext->ClearRenderTarget(nullptr, zero); // Set vertex and index buffers IBuffer *buffer[] = {m_pVertexBuffer}; Uint32 offsets[] = {0}; Uint32 strides[] = {sizeof(MyVertex)}; m_pContext->SetVertexBuffers(0, 1, buffer, strides, offsets, SET_VERTEX_BUFFERS_FLAG_RESET); m_pContext->SetIndexBuffer(m_pIndexBuffer, 0); m_pContext->SetPipelineState(m_pPSO); Also, all shader resources must be committed to the device context: m_pContext->CommitShaderResources(m_pSRB, COMMIT_SHADER_RESOURCES_FLAG_TRANSITION_RESOURCES); When all required states and resources are bound, IDeviceContext::Draw() can be used to execute a draw command or IDeviceContext::DispatchCompute() can be used to execute a compute command. Note that for a draw command a graphics pipeline must be bound, and for a dispatch command a compute pipeline must be bound. Draw() takes a DrawAttribs structure as an argument. The structure members define all attributes required to perform the command (primitive topology, number of vertices or indices, whether the draw call is indexed, instanced or indirect, etc.). For example: DrawAttribs attrs; attrs.IsIndexed = true; attrs.IndexType = VT_UINT16; attrs.NumIndices = 36; attrs.Topology = PRIMITIVE_TOPOLOGY_TRIANGLE_LIST; pContext->Draw(attrs); Build Instructions Please visit this page for detailed build instructions. Samples The engine contains two graphics samples that demonstrate how the API can be used. The AntTweakBar sample demonstrates how to use the AntTweakBar library to create a simple user interface. It can also be thought of as Diligent Engine's "Hello World" example. The atmospheric scattering sample is a more advanced one. It demonstrates how Diligent Engine can be used to implement various rendering tasks: loading textures from files, using complex shaders, rendering to textures, using compute shaders and unordered access views, etc. The engine also includes an Asteroids performance benchmark based on this demo developed by Intel. It renders 50,000 unique textured asteroids and lets you compare the performance of the D3D11 and D3D12 implementations. Every asteroid is a combination of one of 1000 unique meshes and one of 10 unique textures.
Integration with Unity Diligent Engine supports integration with Unity through the Unity low-level native plugin interface. The engine relies on Native API Interoperability to attach to the graphics API initialized by Unity. After the Diligent Engine device and context are created, they can be used as usual to create resources and issue rendering commands. GhostCubePlugin shows an example of how Diligent Engine can be used to render a ghost cube that is only visible as a reflection in a mirror.
  10. Can someone help out with this? The code builds, but I get "Exception thrown: Read Access Violation" and it says my index buffer was a nullptr. I'm going to attach my code and a screenshot of the error below. Any help is greatly appreciated. //------------------------------- //Header files //------------------------------- #include "manager.h" #include "renderer.h" #include "dome.h" #include "camera.h" //------------------------------- //Constructor //------------------------------- CDome::CDome() { m_pIndxBuff = nullptr; m_pVtxBuff = nullptr; m_HorizontalGrid = NULL; m_VerticalGrid = NULL; // Set the world position, scale and rotation m_Scale = D3DXVECTOR3(1.0f, 1.0f, 1.0f); m_Pos = D3DXVECTOR3(0.0f, 0.0f, 0.0f); //m_Rotate = 0.0f; } CDome::CDome(int HorizontalGrid, int VerticalGrid, float Length) { m_pIndxBuff = nullptr; m_pVtxBuff = nullptr; m_HorizontalGrid = HorizontalGrid; m_VerticalGrid = VerticalGrid; // Set the world position, scale and rotation m_Scale = D3DXVECTOR3(1.0f, 1.0f, 1.0f); m_Pos = D3DXVECTOR3(0.0f, 0.0f, 0.0f); m_Length = Length; } CDome::CDome(int HorizontalGrid, int VerticalGrid, float Length, D3DXVECTOR3 Pos) { m_pIndxBuff = nullptr; m_pVtxBuff = nullptr; m_HorizontalGrid = HorizontalGrid; m_VerticalGrid = VerticalGrid; // Set the world position, scale and rotation m_Scale = D3DXVECTOR3(1.0f, 1.0f, 1.0f); m_Pos = Pos; m_Length = Length; } //------------------------------- //Destructor //------------------------------- CDome::~CDome() { } //------------------------------- //Initialization //------------------------------- void CDome::Init(void) { LPDIRECT3DDEVICE9 pDevice; pDevice = CManager::GetRenderer()->GetDevice(); m_VtxNum = (m_HorizontalGrid + 1) * (m_VerticalGrid + 1); m_IndxNum = (m_HorizontalGrid * 2 + 2) * m_VerticalGrid + (m_VerticalGrid - 1) * 2; // Create the texture if (FAILED(D3DXCreateTextureFromFile(pDevice, "data/TEXTURE/dome.jpg", &m_pTexture))) { MessageBox(NULL, "Couldn't read Texture file destination", "Error Loading Texture", MB_OK | MB_ICONHAND); } // Create the vertex buffer if (FAILED(pDevice->CreateVertexBuffer(sizeof(VERTEX_3D) * m_VtxNum, D3DUSAGE_WRITEONLY, FVF_VERTEX_3D, D3DPOOL_MANAGED, &m_pVtxBuff, NULL))) // size of the vertex buffer to create { MessageBox(NULL, "Error making VertexBuffer", "Error", MB_OK); } // Create the index buffer if (FAILED(pDevice->CreateIndexBuffer(sizeof(VERTEX_3D) * m_IndxNum, D3DUSAGE_WRITEONLY, D3DFMT_INDEX16, D3DPOOL_MANAGED, &m_pIndxBuff, NULL))) { MessageBox(NULL, "Error making IndexBuffer", "Error", MB_OK); } VERTEX_3D *pVtx; // pointer to the mapped vertex data WORD *pIndx; // pointer to the mapped index data // Lock the vertex buffer and get a pointer to its memory m_pVtxBuff->Lock(0, 0, (void**)&pVtx, 0); // Lock the index buffer and get a pointer to its memory m_pIndxBuff->Lock(0, 0, (void**)&pIndx, 0); for (int i = 0; i < (m_VerticalGrid + 1); i++) { for (int j = 0; j < (m_HorizontalGrid + 1); j++) { pVtx[0].pos = D3DXVECTOR3(m_Length * sinf(i * (D3DX_PI * 0.5f / ((int)m_VerticalGrid - 1))) * sinf(j * (D3DX_PI * 2 / ((int)m_HorizontalGrid - 1))), m_Length * cosf(i * (D3DX_PI * 0.5f / ((int)m_VerticalGrid - 1))), m_Length * sinf(i * (D3DX_PI * 0.5f / ((int)m_VerticalGrid - 1))) * cosf(j * (D3DX_PI * 2 / ((int)m_HorizontalGrid - 1)))); D3DXVECTOR3 tempNormalize; D3DXVec3Normalize(&tempNormalize, &pVtx[0].pos); pVtx[0].normal = -tempNormalize; pVtx[0].color = D3DXCOLOR(255, 255, 255, 255); pVtx[0].tex = D3DXVECTOR2((float)j / (m_HorizontalGrid - 1), (float)i / (m_VerticalGrid - 1)); pVtx++; } } for (int i = 0; i < m_VerticalGrid; i++) { if (i != 0) { pIndx[0] = ((m_HorizontalGrid + 1) * (i + 1)); pIndx++; } for (int j = 0; j < (m_HorizontalGrid + 1); j++) { pIndx[0] = ((m_HorizontalGrid + 1) * (i + 1)) + j; pIndx[1] = ((m_HorizontalGrid + 1) * i) + j; pIndx += 2; } if (i + 1 != m_VerticalGrid) { pIndx[0] = pIndx[-1]; pIndx++; } } // Unlock the index buffer m_pIndxBuff->Unlock(); // Unlock the vertex buffer m_pVtxBuff->Unlock(); } //------------------------------- //Cleanup //------------------------------- void CDome::Uninit(void) { // Release the vertex buffer SAFE_RELEASE(m_pVtxBuff); // Release the index buffer SAFE_RELEASE(m_pIndxBuff); Release(); } //------------------------------- //Update //------------------------------- void CDome::Update(void) { m_Pos = CManager::GetCamera()->GetCameraPosEye(); } //------------------------------- //Draw //------------------------------- void CDome::Draw(void) { LPDIRECT3DDEVICE9 pDevice; pDevice = CManager::GetRenderer()->GetDevice(); D3DXMATRIX mtxWorld; D3DXMATRIX mtxPos; D3DXMATRIX mtxScale; D3DXMATRIX mtxRotation; // World identity matrix D3DXMatrixIdentity(&mtxWorld); // 3D scaling matrix D3DXMatrixScaling(&mtxScale, m_Scale.x, m_Scale.y, m_Scale.z); D3DXMatrixMultiply(&mtxWorld, &mtxWorld, &mtxScale); // 3D translation matrix D3DXMatrixTranslation(&mtxPos, m_Pos.x, m_Pos.y + 70.0f, m_Pos.z); D3DXMatrixMultiply(&mtxWorld, &mtxWorld, &mtxPos); // Apply the world transform pDevice->SetTransform(D3DTS_WORLD, &mtxWorld); // Set the vertex buffer as the data stream pDevice->SetStreamSource(0, m_pVtxBuff, 0, sizeof(VERTEX_3D)); // Set the vertex format pDevice->SetFVF(FVF_VERTEX_3D); // Set the texture pDevice->SetTexture(0, m_pTexture); // Set the indices pDevice->SetIndices(m_pIndxBuff); // Turn off lighting so the vertex colors are visible pDevice->SetRenderState(D3DRS_LIGHTING, FALSE); // Draw the polygons pDevice->DrawIndexedPrimitive(D3DPT_TRIANGLESTRIP, 0, 0, m_VtxNum, 0, m_IndxNum - 2); // Restore lighting pDevice->SetRenderState(D3DRS_LIGHTING, TRUE); } //------------------------------- //Create MeshDome //------------------------------- CDome *CDome::Create(int HorizontalGrid, int VerticalGrid, float Length) { CDome *pMeshDome; pMeshDome = new CDome(HorizontalGrid, VerticalGrid, Length); pMeshDome->Init(); return pMeshDome; } CDome *CDome::Create(int HorizontalGrid, int VerticalGrid, float Length, D3DXVECTOR3 Pos) { CDome *pMeshDome; pMeshDome = new CDome(HorizontalGrid, VerticalGrid, Length, Pos); pMeshDome->Init(); return pMeshDome; }
  11. I hope this is in the right forum. I'm new to this community, but I've been programming engines for many years as a hobby. I'm now writing a simple Direct3D 12 game engine, with audio and multithreading support, for Windows (GNU GPL license). I know there are several ready-to-use engines out there, but my goal is not to compete with Unity or any of the others. I mainly want to explore the issues involved with creating these engines, and that's why I'm posting about it here. I also want to help beginners, so this can be used as a learning tool, everyone feel free to copy code (I'll be posting it on Sourceforge, and put sample output on YouTube). Also, if anyone asks me to add a feature to the engine, I will try to implement it. Finally, I want to become familiar with the terminology and industry practices involved, since I am self-taught. Most of the engine is already planned out, but I want to write some code before divulging the plan to show that I am capable of creating this. I'm going to finish this post and write some initialization code. If anyone has any comments or ideas, please reply, otherwise I'll be back with a YouTube video. --237
  12. How do you view the assembly code of your HLSL code? I am using Visual Studio 2017 and an NVIDIA graphics card. I tried looking in the Visual Studio graphics analyzer ("Debug - Graphics - Start Graphics Debugging" from Visual Studio opens that), but where I would expect it to be it says "Enable GPU disassembly via View->Options->Graphics Diagnostics->Enable gather of GPU Disassembly. The vsglog must be re-opened for this change to take effect." However, that isn't an option you can select. I would actually prefer a way to view the asm without having to capture a frame, as my program doesn't do well in the Visual Studio graphics analyzer. When I try to play back a captured frame I get "An error has occured. Playback of your application may be incomplete (HRESULT = 0x00630000)". However, I am able to play back captured frames of the Microsoft samples, but I am still not able to see asm code with them.
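    One frame-capture-free option (a sketch, with placeholder file and entry-point names) is to compile the shader yourself and disassemble the resulting blob; the standalone fxc.exe compiler's /Fc switch produces the same listing from the command line.

    #include <d3dcompiler.h>
    #include <wrl/client.h>
    #include <cstdio>
    #pragma comment(lib, "d3dcompiler.lib")
    using Microsoft::WRL::ComPtr;

    void DumpShaderAsm()
    {
        ComPtr<ID3DBlob> code, errors, disasm;
        // Placeholder file name and entry point.
        if (FAILED(D3DCompileFromFile(L"MyShader.hlsl", nullptr, D3D_COMPILE_STANDARD_FILE_INCLUDE,
                                      "PSMain", "ps_5_0", 0, 0, &code, &errors)))
        {
            if (errors) std::printf("%s\n", static_cast<const char*>(errors->GetBufferPointer()));
            return;
        }
        // Turn the compiled bytecode into a readable DXBC assembly listing.
        if (SUCCEEDED(D3DDisassemble(code->GetBufferPointer(), code->GetBufferSize(), 0, nullptr, &disasm)))
            std::printf("%.*s\n", static_cast<int>(disasm->GetBufferSize()),
                        static_cast<const char*>(disasm->GetBufferPointer()));
    }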
  13. In DirectX 9 I would use this input layout: { 0, 0, D3DDECLTYPE_SHORT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 } with this vertex shader slot: float4 Position : POSITION0 That is, I would use the vertex buffer format SHORT4 for a corresponding float4 in the shader and everything would work great. In DirectX 12 this does not work. When I use the format DXGI_FORMAT_R16G16B16A16_SINT with float4 in the shader, I get all zeros in the shader. If I use int4 in the shader instead of float4, I get numbers in the shader but they are messed up. I can't figure out exactly what is wrong with them because I can't see them; the shader debugger of Visual Studio keeps crashing. The debug layer does not say anything when I use int4, but it gives a warning when I use float4. How can I use the R16G16B16A16_SINT input layout?
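    For context, this is the pairing the SINT formats expect (a sketch, not a diagnosis of why the values look wrong): an integer vertex format has to be declared as an integer type in the shader and converted to float manually, since unlike D3D9's SHORT4-to-float4 path there is no implicit conversion.

    #include <d3d12.h>

    // short4 position data: read in HLSL as    int4 Position : POSITION0;
    // and convert in the shader if needed, e.g.  float4 posF = float4(vin.Position);
    const D3D12_INPUT_ELEMENT_DESC layout[] =
    {
        { "POSITION", 0, DXGI_FORMAT_R16G16B16A16_SINT, 0, 0,
          D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
    };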
  14. I searched for this on Google but didn't find it; I only found information about whether a program can run if the drivers are updated. I'm new to DX11, so I want to ask: does DX12 coding differ from DX11, or does it only have better rendering?
  15. Hi everyone, I am porting a project from DX11 to DX12. In DX11 we can set the constant buffers for each shader stage by using VSSetConstantBuffers and PSSetConstantBuffers. But in DX12 I am using SetGraphicsRootConstantBufferView; as far as I know it will set the constant buffer for both VS and PS - am I right? For example, my project uses 5 constant buffers: A, B, C, D, E. The VS uses three constant buffers (A, B, C) and the PS uses A, D and E. So how do I port this to DX12 correctly? SetGraphicsRootConstantBufferView(0 -> 4, A -> E) seems OK, but how would the PS know how to match D and E to the right positions? Thank you
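    To the first point: SetGraphicsRootConstantBufferView binds a buffer to one root-parameter slot; which stages see it is decided by the root signature (ShaderVisibility) and by the register each shader declares, not by the call itself. A sketch of one possible layout using the d3dx12.h helpers (the register assignments here are illustrative, not taken from the poster's project):

    #include "d3dx12.h"

    CD3DX12_ROOT_PARAMETER params[5];
    params[0].InitAsConstantBufferView(0, 0, D3D12_SHADER_VISIBILITY_ALL);    // A: b0, VS + PS
    params[1].InitAsConstantBufferView(1, 0, D3D12_SHADER_VISIBILITY_VERTEX); // B: b1, VS only
    params[2].InitAsConstantBufferView(2, 0, D3D12_SHADER_VISIBILITY_VERTEX); // C: b2, VS only
    params[3].InitAsConstantBufferView(1, 0, D3D12_SHADER_VISIBILITY_PIXEL);  // D: b1, PS only
    params[4].InitAsConstantBufferView(2, 0, D3D12_SHADER_VISIBILITY_PIXEL);  // E: b2, PS only

    CD3DX12_ROOT_SIGNATURE_DESC desc(5, params, 0, nullptr,
        D3D12_ROOT_SIGNATURE_FLAG_ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT);

    // At draw time each buffer gets its own root slot, e.g.:
    // cmdList->SetGraphicsRootConstantBufferView(3, bufferD->GetGPUVirtualAddress());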
  16. I know some people will find this question stupid, but it is not stupid if you don't know how SetPipelineState exactly works. If I switch the current PSO to another one with different shaders and root signatures, will that be more expensive than switching to a PSO with the same shaders and root signatures as the current PSO? In other words: does switching the shaders and root signatures add to the overhead of switching the PSO?
  17. I've had a D3D12 implementation sitting in the background for a long time, but I only periodically check on it to see if it's still working (after Windows/driver/etc. updates). Today I checked it for the first time in many months, and ID3D12GraphicsCommandList::ResourceBarrier is causing validation errors: D3D12 ERROR: ID3D12CommandList::ResourceBarrier: Before state (0x10: D3D12_RESOURCE_STATE_DEPTH_WRITE) of resource (0x00000279584EFB40:'depth') (subresource: 0) specified by transition barrier does not match with the state (0x80: D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE) specified in the previous call to ResourceBarrier [ RESOURCE_MANIPULATION ERROR #527: RESOURCE_BARRIER_BEFORE_AFTER_MISMATCH] Six months ago my code was working fine without any errors, and now I get these errors (and if I ignore them, I get corrupt rendering that looks like cache invalidation/flush issues -- bad transitions)... This is probably due to Windows/D3D validation getting better and catching bugs in my code that it wasn't before... but... What I'm wondering is: how is that particular function able to know the previous state of a resource during command recording? For example, let's say I have one resource that starts in state "A", and I: Record command list #1: transition from "B" to "C". Record command list #2: transition from "C" to "A". Record command list #3: transition from "A" to "B". Then let's say I execute list #3, then list #1, then list #2. This will correctly transition from "A"->"B"->"C"->"A"... but during recording, it looked like we were going to incorrectly do B->C (from A), C->A (from C), A->B (from A)... It seems like I should get RESOURCE_BARRIER_BEFORE_AFTER_MISMATCH errors only when I execute a command list, not when I record one... Am I wrong??
  18. Hi, is anyone running the 1709 Windows build? Ever since I updated, the call to D3D12GetDebugInterface no longer works, failing with the error "This method requires the D3D12 SDK Layers for Windows 10, but they are not present on the system." On previous Windows builds I'd install the optional Graphics Tools package, but that doesn't appear to exist any more.
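    For reference, the usual guarded call looks like the sketch below: if the SDK layers are missing, D3D12GetDebugInterface simply fails and the app runs without the debug layer instead of erroring later. On recent Windows 10 builds the layers are typically installed through the "Graphics Tools" optional feature in Settings (Apps > Manage optional features) rather than as a separate download, though the exact location may vary by build.

    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    void TryEnableDebugLayer()
    {
        ComPtr<ID3D12Debug> debug;
        // Only succeeds when the D3D12 SDK Layers (Graphics Tools) are present.
        if (SUCCEEDED(D3D12GetDebugInterface(IID_PPV_ARGS(&debug))))
            debug->EnableDebugLayer();
    }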
  19. EDIT: I have now solved my problem after trial and error. Hi all, I'm following Frank Luna's book, Introduction to 3D Game Programming with DirectX 12, and have come to the point where there is an exercise to create a cube using two vertex buffers: one for position and one for color. I'm then supposed to bind these to different input slots (0 and 1). All is fine except my color data seems to be misinterpreted by the pipeline: no matter what color I specify for the vertices in my buffer, the cube is still rendered as a predetermined assortment of colors. Am I missing anything important to achieve this goal? These are the steps I took so far: Used two vertex buffers to feed the pipeline with vertices struct VPosData {XMFLOAT3 Pos; }; struct VColorData {XMFLOAT4 Color; }; Changed the INPUT_ELEMENT_DESC accordingly mInputLayout = { { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 }, { "COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 0, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 } }; Bound the two vertex buffers to the input slots mCommandList->IASetVertexBuffers(0, 1, &mBoxGeo->VertexBufferView()); //Input Slot 0 for verts mCommandList->IASetVertexBuffers(1, 1, &mBoxGeo->VertexBufferView()); //Input Slot 1 for Color The textbook doesn't specify any extra steps to achieve this result, so at this point I'm not sure how to go about debugging this problem. Any feedback would be appreciated.
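    For anyone hitting the same wall, a sketch of what the two-slot binding usually looks like: each stream has its own GPU buffer and its own D3D12_VERTEX_BUFFER_VIEW, and the view bound to slot 1 must point at the color buffer (the snippet above appears to bind the same view to both slots, so slot 1 never sees the color data). The names below are illustrative, not the book's code:

    #include <d3d12.h>
    #include <DirectXMath.h>

    struct VPosData   { DirectX::XMFLOAT3 Pos; };
    struct VColorData { DirectX::XMFLOAT4 Color; };

    void BindSplitVertexStreams(ID3D12GraphicsCommandList* cmdList,
                                ID3D12Resource* positionBufferGPU,
                                ID3D12Resource* colorBufferGPU,
                                UINT vertexCount)
    {
        D3D12_VERTEX_BUFFER_VIEW posView = {};
        posView.BufferLocation = positionBufferGPU->GetGPUVirtualAddress();
        posView.StrideInBytes  = sizeof(VPosData);
        posView.SizeInBytes    = static_cast<UINT>(vertexCount * sizeof(VPosData));

        D3D12_VERTEX_BUFFER_VIEW colorView = {};
        colorView.BufferLocation = colorBufferGPU->GetGPUVirtualAddress();
        colorView.StrideInBytes  = sizeof(VColorData);
        colorView.SizeInBytes    = static_cast<UINT>(vertexCount * sizeof(VColorData));

        cmdList->IASetVertexBuffers(0, 1, &posView);    // input slot 0: POSITION
        cmdList->IASetVertexBuffers(1, 1, &colorView);  // input slot 1: COLOR
    }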
  20. Game Engine Editor

    So, I'm slowly progressing with my hobby project. This is the first time I'm writing something about the project directly tied to it, or more precisely to an essential part of it: I'll share a few details here about the editor development. Making a useful game/engine editor that is easy to control is definitely a long-term task. While the engine itself has been updated to support Direct3D 12, there was no real editor that could be at least a bit generic. For my current goal with this project, I decided to start with the editor and work from there. So where am I at? I'm already satisfied with some of the basic tasks - the selection system, transformations, editing the scenegraph tree (re-assigning an object elsewhere through drag & drop, etc.) and the undo/redo system (though with more features this will need to grow). I'm definitely not satisfied with the way I handle rotations edited through input fields. Component editing is currently work in progress (you can see a prototype on the screenshot) and definitely needs add/delete buttons. It is not properly connected with the undo/redo system yet, but it works. So what are the problems with components?
    • They're not finished; a few basic ones were thrown together for basic scenes, but I'm not satisfied with them (e.g. the lighting & shadowing system is going to get overhauled while I do this work).
    • Undo/redo tends to be tricky on them, as each action needs to be reversible (e.g. changing the light type means deleting the current Light-deriving class instance and creating a new, different Light-deriving class instance).
    • Selecting textures/meshes/..., basically anything from a library, requires a library (which has no UI as of now!).
    Clearly the component system has advantages and disadvantages. The short-term plan is:
    • Update the lighting & shadowing system
    • Add a library (one that makes sense!) with textures, meshes and other data - and make those drag & drop into components
    • Add a way to copy/paste a selection
    • Add a way to add/remove components on entities
    • Add save/load for the scene in the editor format
    Alright, end of my short pause - time to continue!
  21. I'm doing some mesh skinning in my vertex shader, and I have a depth pre-pass in place. I'm not crazy about doing all the vertex transformation twice for the skinning; is there a way to have one vertex shader be the input to multiple pixel shaders? Or do I have to do a vertex shader stream-output, and then set that as the input for the pixel shader somehow?
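    If this is D3D12 (going by the thread tag), the same compiled vertex shader bytecode can simply be referenced by several pipeline state objects, so a depth pre-pass PSO and the main-pass PSO can share one skinning VS. Note this reuses the shader, not the work: the skinning still executes in both passes unless it is moved to stream-output or a compute pre-skinning pass. A sketch, assuming basePsoDesc and the shader blobs already exist:

    // Depth pre-pass: same skinning VS, no pixel shader, no color render targets.
    D3D12_GRAPHICS_PIPELINE_STATE_DESC prePassDesc = basePsoDesc;
    prePassDesc.VS = { skinningVS->GetBufferPointer(), skinningVS->GetBufferSize() };
    prePassDesc.PS = { nullptr, 0 };
    prePassDesc.NumRenderTargets = 0;
    prePassDesc.RTVFormats[0] = DXGI_FORMAT_UNKNOWN;

    // Main pass: identical VS bytecode paired with the shading pixel shader.
    D3D12_GRAPHICS_PIPELINE_STATE_DESC mainPassDesc = basePsoDesc;
    mainPassDesc.VS = prePassDesc.VS;
    mainPassDesc.PS = { shadingPS->GetBufferPointer(), shadingPS->GetBufferSize() };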
  22. I am working on a VR project where we have two devices. One for the renderer belonging to the game engine and the other used to present the textures to the VR screens. We ported both the game engine renderer and the VR renderer to DirectX12 recently. I haven't seen any examples of sharing textures across devices in DirectX12. Microsoft has an example on cross adapter sharing but we are only dealing with one GPU. Can we create a shared heap for two devices like we do for two adapters? Is there a way to do async copy between two devices? If async copy is possible, it would be ideal since we already have designed our engine along the lines of taking the most advantage of async copy and compute. Any guidance on this will really help to reduce the texture transfer overhead. Thank you
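    In case it helps, single-adapter sharing between two D3D12 devices goes through shared NT handles rather than a literally shared heap object: create the texture (or a heap) with D3D12_HEAP_FLAG_SHARED on one device, export a handle with CreateSharedHandle, and open it on the other device with OpenSharedHandle; a fence created with D3D12_FENCE_FLAG_SHARED can be exported the same way to synchronize copies between the two devices' queues. A sketch with error handling omitted (whether the texture also needs D3D12_RESOURCE_FLAG_ALLOW_SIMULTANEOUS_ACCESS depends on the resource type and should be checked against the docs):

    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    HANDLE ShareTextureAcrossDevices(ID3D12Device* deviceA, ID3D12Device* deviceB,
                                     const D3D12_RESOURCE_DESC& desc,
                                     ComPtr<ID3D12Resource>& texOnA,
                                     ComPtr<ID3D12Resource>& texOnB)
    {
        D3D12_HEAP_PROPERTIES heapProps = {};
        heapProps.Type = D3D12_HEAP_TYPE_DEFAULT;

        // The shared flag makes the allocation exportable to other devices.
        deviceA->CreateCommittedResource(&heapProps, D3D12_HEAP_FLAG_SHARED, &desc,
                                         D3D12_RESOURCE_STATE_COMMON, nullptr,
                                         IID_PPV_ARGS(&texOnA));

        HANDLE sharedHandle = nullptr;
        deviceA->CreateSharedHandle(texOnA.Get(), nullptr, GENERIC_ALL, nullptr, &sharedHandle);

        // The second device opens the same allocation as its own ID3D12Resource.
        deviceB->OpenSharedHandle(sharedHandle, IID_PPV_ARGS(&texOnB));
        return sharedHandle;   // close with CloseHandle when no longer needed
    }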
  23. Hi guys, does anyone know how to easily grab a video frame onto a DX texture using just the Windows SDK? Or just play video on a DX texture easily without using a 3rd-party library? I know that back in the DX9 days there was the DirectShow library (though it was very hard to use). After a brief search, it seems most game devs settled on Bink, leaving hobbyist DX programmers struggling... Having had so much fun playing with Metal video playback (a super easy setup with AVKit, where you can grab a movie frame into your Metal texture), I feel there must be a similarly easy path for video playback on DX12, but I failed to find it. Maybe I missed something? Thanks in advance to anyone who can give me a path to follow.
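    One Windows-SDK-only route is Media Foundation's Source Reader: ask it to decode to a 32-bit RGB format, pull samples on the CPU, and upload each frame into the texture yourself (hardware-accelerated paths via IMFDXGIDeviceManager also exist, but this is the simplest sketch; error handling and the actual GPU upload are omitted, and the file path is a placeholder).

    #include <mfapi.h>
    #include <mfidl.h>
    #include <mfreadwrite.h>
    #include <wrl/client.h>
    #pragma comment(lib, "mfplat.lib")
    #pragma comment(lib, "mfreadwrite.lib")
    #pragma comment(lib, "mfuuid.lib")
    using Microsoft::WRL::ComPtr;

    void ReadFirstFrame(const wchar_t* path)
    {
        MFStartup(MF_VERSION);

        ComPtr<IMFSourceReader> reader;
        MFCreateSourceReaderFromURL(path, nullptr, &reader);

        // Request decoded RGB32 output for the first video stream.
        ComPtr<IMFMediaType> type;
        MFCreateMediaType(&type);
        type->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
        type->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_RGB32);
        reader->SetCurrentMediaType(MF_SOURCE_READER_FIRST_VIDEO_STREAM, nullptr, type.Get());

        DWORD stream = 0, flags = 0;
        LONGLONG timestamp = 0;
        ComPtr<IMFSample> sample;
        reader->ReadSample(MF_SOURCE_READER_FIRST_VIDEO_STREAM, 0, &stream, &flags, &timestamp, &sample);

        if (sample)
        {
            ComPtr<IMFMediaBuffer> buffer;
            sample->ConvertToContiguousBuffer(&buffer);
            BYTE* data = nullptr; DWORD length = 0;
            buffer->Lock(&data, nullptr, &length);
            // 'data' now holds one decoded frame; copy it into an upload buffer and
            // CopyTextureRegion it into the D3D12 texture from here.
            buffer->Unlock();
        }

        MFShutdown();
    }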
  24. Hello guys, I have a texture in the format DXGI_FORMAT_B8G8R8A8_UNORM_SRGB. Is there a way to create a shader resource view for the texture so that I can read it as RGBA from the shader instead of reading it specifically as BGRA? I would like all the textures to be read as RGBA. Tx
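    For what it's worth, sampling a B8G8R8A8 format already returns the channels semantically in the shader (.r is red) regardless of the BGRA order in memory, so in many cases no swizzle is needed at all. If a remap really is wanted, the SRV's Shader4ComponentMapping field is the knob that controls it; a sketch with the default (identity) mapping:

    #include <d3d12.h>

    void CreateRgbaSrv(ID3D12Device* device, ID3D12Resource* texture,
                       D3D12_CPU_DESCRIPTOR_HANDLE destDescriptor)
    {
        D3D12_SHADER_RESOURCE_VIEW_DESC srv = {};
        srv.Format = DXGI_FORMAT_B8G8R8A8_UNORM_SRGB;
        srv.ViewDimension = D3D12_SRV_DIMENSION_TEXTURE2D;
        srv.Texture2D.MipLevels = UINT(-1);   // all mip levels
        // Identity mapping: .rgba in the shader follows the format's semantic channels.
        // D3D12_ENCODE_SHADER_4_COMPONENT_MAPPING(...) could reorder them if needed.
        srv.Shader4ComponentMapping = D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING;
        device->CreateShaderResourceView(texture, &srv, destDescriptor);
    }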
  25. Hello guys, I am wondering why the D3D12 resource size has type UINT64 while the resource view size is limited to UINT32. typedef struct D3D12_RESOURCE_DESC { … UINT64 Width; … } D3D12_RESOURCE_DESC; A vertex buffer view can be described in UINT32 types. typedef struct D3D12_VERTEX_BUFFER_VIEW { D3D12_GPU_VIRTUAL_ADDRESS BufferLocation; UINT SizeInBytes; UINT StrideInBytes; } D3D12_VERTEX_BUFFER_VIEW; For a buffer we can specify the offset of the first element as UINT64, but the buffer view itself is still defined in UINT32 terms. typedef struct D3D12_BUFFER_SRV { UINT64 FirstElement; UINT NumElements; UINT StructureByteStride; D3D12_BUFFER_SRV_FLAGS Flags; } D3D12_BUFFER_SRV; Does this really mean that we can create, for instance, a structured buffer of floats with UINT64_MAX elements (UINT64_MAX * sizeof(float) in byte size) but not be able to create a shader resource view which encloses it completely, since we are limited by the UINT range? Is there a specific reason for this? HLSL is restricted to UINT32 values; calling GetDimensions() on a resource of UINT64 size would not be able to produce valid values. I guess that could be one of the reasons. Thanks!