zmic

Member

  1. Thanks for answering! In the meantime I stumbled on a kludge that works for whatever reason: I add 16 extra pixels to the size of the backbuffer on resize. I have no idea why this works, but the program only has to run on one machine, so I'll take it... for now. When I have some more time I will look into it again.

         _iswapchain3->GetDesc(&desc);
         if (!minimized)
         {
             THROW_FAIL(_iswapchain3->ResizeBuffers(2, X + 16, Y + 16, desc.BufferDesc.Format, desc.Flags));
         }

     Without the 16 it doesn't work; with the 16 it does. Yeah, really. By the way, the backbuffer is created with DXGI_FORMAT_R8G8B8A8_UNORM and my own buffer uses the same format internally, so I just assumed they would be compatible that way. (A sketch of a conventional resize sequence, for comparison, follows these posts.)
  2. Thanks for your input, Hodgman! You are right about the queuing. Those mutexes don't make a difference, so I threw them out again. Yeah, the debug flag is set and I think the state transitions are OK. Whenever there's something wrong with those barriers I get spammed instantly in the debug output.
  3. Thanks for your input! I was able to reproduce your suggestion: I can CopyResource a texture of the same size to the backbuffer, and it works nicely in both full-screen and windowed mode. However, the buffer calculated by the compute thread is not a texture but a plain D3D12_RESOURCE_DIMENSION_BUFFER, so I cannot CopyResource it -- the debugger complains that the backbuffer and the compute buffer are of different types. I need to use CopyTextureRegion with the buffer "wrapped" inside a D3D12_TEXTURE_COPY_LOCATION structure. Maybe I can let the compute thread write into a texture rather than a plain buffer; I'm going to try that first (a sketch of that approach follows these posts).
  4. EDIT2: I removed the compute thread altogether and still get the same problem, so it's not a concurrency issue. I guess my question simply boils down to this: I have a non-texture buffer with RGBA data on the GPU. How do I blit it to the backbuffer of the swapchain, without running an actual 3D rendering pipeline if possible? (A sketch of doing this with CopyTextureRegion follows these posts.)
  5. Hi programmers, I have this problem in DirectX 12. I have one thread that runs a compute shader in a loop. This compute shader generates a 2D image in a buffer (D3D12_HEAP_TYPE_DEFAULT, D3D12_RESOURCE_FLAG_ALLOW_UNORDERED_ACCESS). The buffer gets updated, say, a hundred times a second. In the main thread, I copy this buffer to the back buffer of the swap chain when WM_PAINT is handled; WM_PAINT keeps getting called because I never call BeginPaint/EndPaint. For this copy operation I use a graphics CommandQueue/CommandList. Here's the pseudo-code for the paint operation:

         ... reset CommandQueue/CommandList
         swapchain->GetBuffer(back_buffer)
         commandlist->CopyTextureRegion(back_buffer, 0, 0, 0, computed_buffer, nullptr);
         commandlist->ResourceBarrier(back_buffer, D3D12_RESOURCE_STATE_COPY_DEST, D3D12_RESOURCE_STATE_PRESENT);
         ... execute CommandList and wait for finish using fence ...
         swapchain->Present(..)

     I use a good old CriticalSection to make sure the compute CommandList and the graphics CommandList don't run at the same time. When I start the program, this runs fine in a normal window: I see the procedurally generated buffer animated in real time. However, when I switch to full screen (and resize the swapchain), nothing gets rendered; the screen stays black. When I leave full screen (again resizing the swapchain), same problem: the screen just stays black. Other than that the application runs stably -- no DirectX warnings in the debug output, no nothing. I checked that the WM_PAINT messages keep coming and that the compute thread keeps computing. Note that I don't do anything else with the graphics command list: I set no pipeline state or root signature because I have no 3D rendering to do. Can that be a problem? I suppose I could retrieve the computed buffer with a readback buffer and paint it with an ordinary GDI function, but that seems silly with the data already being on the GPU. (A fleshed-out sketch of this paint sequence follows these posts.) EDIT: I ran the code on another PC and there the window stays black right from the start, so the resizing doesn't seem to be the problem. Any ideas appreciated!
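To make the CopyTextureRegion step from posts 3 and 4 concrete, here is a minimal sketch of wrapping a plain buffer in a D3D12_TEXTURE_COPY_LOCATION so it can be copied into the backbuffer. It assumes the buffer holds RGBA8 pixel rows, that it has already been transitioned into a state valid as a copy source (e.g. UNORDERED_ACCESS to COPY_SOURCE), and that the backbuffer is in COPY_DEST; the function name and the width/height parameters are placeholders. One real constraint worth noting: D3D12 requires the footprint's RowPitch to be a multiple of D3D12_TEXTURE_DATA_PITCH_ALIGNMENT (256 bytes), so image widths whose row size doesn't satisfy that need padded rows in the buffer.

    #include <d3d12.h>

    // Hypothetical helper: copy a plain RGBA8 buffer into the current backbuffer.
    void CopyComputedBufferToBackBuffer(ID3D12GraphicsCommandList* cmdList,
                                        ID3D12Resource* backBuffer,
                                        ID3D12Resource* computedBuffer,
                                        UINT width, UINT height)
    {
        // Destination: the swapchain texture, addressed by subresource index.
        D3D12_TEXTURE_COPY_LOCATION dst = {};
        dst.pResource        = backBuffer;
        dst.Type             = D3D12_TEXTURE_COPY_TYPE_SUBRESOURCE_INDEX;
        dst.SubresourceIndex = 0;

        // Source: the plain buffer, described by a placed footprint.
        D3D12_TEXTURE_COPY_LOCATION src = {};
        src.pResource                          = computedBuffer;
        src.Type                               = D3D12_TEXTURE_COPY_TYPE_PLACED_FOOTPRINT;
        src.PlacedFootprint.Offset             = 0;                           // must be 512-byte aligned
        src.PlacedFootprint.Footprint.Format   = DXGI_FORMAT_R8G8B8A8_UNORM;  // same as the backbuffer
        src.PlacedFootprint.Footprint.Width    = width;
        src.PlacedFootprint.Footprint.Height   = height;
        src.PlacedFootprint.Footprint.Depth    = 1;
        src.PlacedFootprint.Footprint.RowPitch = width * 4;                   // must be a multiple of 256

        cmdList->CopyTextureRegion(&dst, 0, 0, 0, &src, nullptr);
    }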
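A fleshed-out version of the paint pseudo-code in post 5, as a minimal sketch rather than the original code. It assumes the allocator, command list, queue, fence, and fence event were created at startup, that the backbuffer is left in PRESENT state between frames (so a PRESENT to COPY_DEST barrier is added before the copy, mirroring the barrier the pseudo-code already shows), and it reuses the hypothetical CopyComputedBufferToBackBuffer helper from the sketch above.

    #include <windows.h>
    #include <d3d12.h>
    #include <dxgi1_4.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    void PaintFrame(ID3D12CommandAllocator* allocator,
                    ID3D12GraphicsCommandList* cmdList,
                    ID3D12CommandQueue* queue,
                    IDXGISwapChain3* swapchain,
                    ID3D12Resource* computedBuffer, UINT width, UINT height,
                    ID3D12Fence* fence, UINT64& fenceValue, HANDLE fenceEvent)
    {
        allocator->Reset();
        cmdList->Reset(allocator, nullptr);   // no PSO / root signature needed for a pure copy

        ComPtr<ID3D12Resource> backBuffer;
        swapchain->GetBuffer(swapchain->GetCurrentBackBufferIndex(), IID_PPV_ARGS(&backBuffer));

        // PRESENT -> COPY_DEST so the backbuffer can receive the copy.
        D3D12_RESOURCE_BARRIER barrier = {};
        barrier.Type                   = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
        barrier.Transition.pResource   = backBuffer.Get();
        barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
        barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_PRESENT;
        barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_COPY_DEST;
        cmdList->ResourceBarrier(1, &barrier);

        // The actual copy (see the previous sketch for the buffer-to-texture wrapping).
        CopyComputedBufferToBackBuffer(cmdList, backBuffer.Get(), computedBuffer, width, height);

        // COPY_DEST -> PRESENT before presenting.
        barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_COPY_DEST;
        barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PRESENT;
        cmdList->ResourceBarrier(1, &barrier);

        cmdList->Close();
        ID3D12CommandList* lists[] = { cmdList };
        queue->ExecuteCommandLists(1, lists);

        // Block until the copy has finished, then present.
        const UINT64 value = ++fenceValue;
        queue->Signal(fence, value);
        if (fence->GetCompletedValue() < value)
        {
            fence->SetEventOnCompletion(value, fenceEvent);
            WaitForSingleObject(fenceEvent, INFINITE);
        }
        swapchain->Present(1, 0);
    }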
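Post 3 also mentions letting the compute thread write into a texture instead of a plain buffer. Below is a minimal sketch of creating such a texture; the function name and parameters are placeholders, not the thread's actual code. The compute shader would then bind it as an RWTexture2D UAV, and the per-frame blit becomes a plain CopyResource into the backbuffer (both resources must have identical dimensions and compatible formats, plus the usual COPY_SOURCE/COPY_DEST transitions).

    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // Create a UAV-capable 2D texture the compute shader can write into directly.
    ComPtr<ID3D12Resource> CreateComputeTarget(ID3D12Device* device, UINT width, UINT height)
    {
        D3D12_HEAP_PROPERTIES heapProps = {};
        heapProps.Type = D3D12_HEAP_TYPE_DEFAULT;

        D3D12_RESOURCE_DESC desc = {};
        desc.Dimension        = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
        desc.Width            = width;
        desc.Height           = height;
        desc.DepthOrArraySize = 1;
        desc.MipLevels        = 1;
        desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;   // same format as the backbuffer
        desc.SampleDesc.Count = 1;
        desc.Layout           = D3D12_TEXTURE_LAYOUT_UNKNOWN;
        desc.Flags            = D3D12_RESOURCE_FLAG_ALLOW_UNORDERED_ACCESS;

        ComPtr<ID3D12Resource> texture;
        device->CreateCommittedResource(&heapProps, D3D12_HEAP_FLAG_NONE, &desc,
                                        D3D12_RESOURCE_STATE_UNORDERED_ACCESS, nullptr,
                                        IID_PPV_ARGS(&texture));
        return texture;
    }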
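For comparison with the +16 workaround in post 1, here is a minimal sketch of a conventional swapchain resize. It rests on the documented requirement that every outstanding reference to the old backbuffers (obtained via GetBuffer) is released and the GPU is idle before ResizeBuffers is called; the function name, the backBuffers vector, and the omitted fence wait are placeholders.

    #include <d3d12.h>
    #include <dxgi1_4.h>
    #include <vector>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    void ResizeSwapChain(IDXGISwapChain3* swapchain,
                         std::vector<ComPtr<ID3D12Resource>>& backBuffers,
                         UINT clientWidth, UINT clientHeight)
    {
        // 1) Make sure the GPU is done with the old buffers
        //    (signal a fence on the queue and wait for it; omitted here).

        // 2) Release every reference to the old backbuffers obtained via GetBuffer().
        for (auto& buffer : backBuffers)
            buffer.Reset();

        // 3) Resize to the exact client size, keeping the existing count, format, and flags.
        DXGI_SWAP_CHAIN_DESC desc = {};
        swapchain->GetDesc(&desc);
        swapchain->ResizeBuffers(desc.BufferCount, clientWidth, clientHeight,
                                 desc.BufferDesc.Format, desc.Flags);

        // 4) Re-acquire the new backbuffers (and recreate RTVs if any are used).
        backBuffers.resize(desc.BufferCount);
        for (UINT i = 0; i < desc.BufferCount; ++i)
            swapchain->GetBuffer(i, IID_PPV_ARGS(&backBuffers[i]));
    }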