
Killeak

Member
  • Content Count

    11
  • Joined

  • Last visited

Community Reputation

269 Neutral

About Killeak

  • Rank
    Member
  1. Look at your vertex shader parameters and the input layout: you are using float4 for position and color in the vertex shader, but the input layout says they are float3. The two have to match.
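     A hypothetical illustration of that mismatch (the struct, semantics and offsets below are my own, assuming D3D12-style descriptors; the D3D11 struct is analogous):

        // Hypothetical HLSL vertex input:
        //   struct VSInput { float4 pos : POSITION; float4 color : COLOR; };
        // The matching C++ input layout must then use 4-component formats:
        D3D12_INPUT_ELEMENT_DESC layout[] =
        {
            { "POSITION", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 0,  D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
            { "COLOR",    0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 16, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
        };
        // Declaring DXGI_FORMAT_R32G32B32_FLOAT here while the shader declares float4
        // (or vice versa) is exactly the mismatch described above.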
  2. Sure, but it's not like they build a separate i5 chip for desktop without hyperthreading. I'd honestly expect that i5's and i3's are just the lower-binned i7 parts with cores, hyperthreading and/or cache disabled.   What's interesting is that they don't lock the hyperthreading out on the dual-core i5's that go in laptops. Too much of a performance degradation relative to the competition?   If you have a real quad-core i5 (that is, a desktop i5 or a non-U mobile Skylake i5), it doesn't have HT, but if it's a dual-core one, it does. All i3s are dual core and also have HT. All i7s (no matter the core count) also have HT. So an i5 always has 4 logical cores, but it can be 2 cores with HT or 4 cores without HT. Also, for the most part, dual-core chips are very different from quad-core ones. So no, an i3 is not an i7 with some cores disabled; the die is much smaller. That is true, however, for quad-core i5s versus i7s (they disable part of the cache and HT).  My recommendation as a programmer: profile, profile, profile (as usual)! That being said, your game/engine should be able to scale. If you handle this, then having different setups per CPU type is not that hard (you can, for example, query the number of physical and logical cores and set things up based on that information; see the sketch below).
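     A minimal Windows-only sketch of that query, using GetLogicalProcessorInformation and std::thread::hardware_concurrency (the helper name is mine):

        #include <windows.h>
        #include <thread>
        #include <vector>
        #include <cstdio>

        // Counts physical cores; logical cores come from std::thread::hardware_concurrency().
        static unsigned CountPhysicalCores()
        {
            DWORD length = 0;
            GetLogicalProcessorInformation(nullptr, &length); // ask for the required buffer size
            std::vector<SYSTEM_LOGICAL_PROCESSOR_INFORMATION> info(length / sizeof(SYSTEM_LOGICAL_PROCESSOR_INFORMATION));
            if (!GetLogicalProcessorInformation(info.data(), &length))
                return std::thread::hardware_concurrency(); // fall back to the logical count

            unsigned physical = 0;
            for (const auto& entry : info)
                if (entry.Relationship == RelationProcessorCore)
                    ++physical;
            return physical;
        }

        int main()
        {
            unsigned physical = CountPhysicalCores();
            unsigned logical  = std::thread::hardware_concurrency();
            // e.g. a 4C/8T i7 reports 4 and 8; a 2C/4T mobile i5 reports 2 and 4.
            std::printf("physical: %u, logical: %u\n", physical, logical);
            // Size your worker/job pool from these numbers instead of hard-coding it.
        }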
  3. Killeak

    DirectX 12 problems creating swap chain

    One thing: instead of a device, you need to pass an ID3D12CommandQueue to the swap chain creation. So, instead of this

        IDXGISwapChain* pSwapChain = NULL;
        hr = pDxgiFactory->CreateSwapChain(pDevice, &swapChainDesc, &pSwapChain);

    you need this

        ID3D12CommandQueue* d3dQueue = ...;
        hr = pDxgiFactory->CreateSwapChain(d3dQueue, &swapChainDesc, &pSwapChain);

    Also, there is no default command queue; you need to create one first (see the sketch below).
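    A minimal sketch of creating that queue, assuming the release D3D12 API names (pDevice is the ID3D12Device from the snippet above; the rest of the names are mine):

        D3D12_COMMAND_QUEUE_DESC queueDesc = {};
        queueDesc.Type  = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics queue for the swap chain
        queueDesc.Flags = D3D12_COMMAND_QUEUE_FLAG_NONE;

        ID3D12CommandQueue* d3dQueue = nullptr;
        hr = pDevice->CreateCommandQueue(&queueDesc, __uuidof(ID3D12CommandQueue), (void**)&d3dQueue);
        // This is the queue you then hand to pDxgiFactory->CreateSwapChain(...).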
  4. The 11on12 API handles that. Sadly, I'm not sure how much of this is still under NDA, since the public docs (https://msdn.microsoft.com/en-us/library/dn913195(v=vs.85).aspx) don't seem to have much info.
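     For reference, a minimal sketch of wrapping an existing D3D12 device and queue via the publicly documented D3D11On12CreateDevice entry point (my own illustration, not code from the thread; the helper name is made up):

        #include <d3d11on12.h>

        // Wrap an existing D3D12 device + graphics queue with a D3D11 device,
        // so existing D3D11 code paths can interoperate with D3D12 resources.
        HRESULT CreateWrappedD3D11(ID3D12Device* d3d12Device, ID3D12CommandQueue* d3d12Queue,
                                   ID3D11Device** outDevice, ID3D11DeviceContext** outContext)
        {
            IUnknown* queues[] = { d3d12Queue };
            return D3D11On12CreateDevice(
                d3d12Device, D3D11_CREATE_DEVICE_BGRA_SUPPORT,
                nullptr, 0,      // default feature levels
                queues, 1,       // the D3D12 queue(s) the 11-on-12 device submits to
                0,               // node mask
                outDevice, outContext, nullptr);
        }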
  5. Typically mapped memory is uncached, and so writes will bypass the cache completely. For these cases write combining is used to batch memory accesses. You still need to issue a CPU instruction that will flush the write combining buffer. It's very unlikely to see this bug occur, as this buffer will almost certainly flush itself out before the next frame anyway... but on my splash screens on the (new) console APIs, I had graphics corruption that was fixed by putting a CPU write fence instruction right after the code that streamed textures into mapped (uncached, write-combined) memory. In that simple situation, I'd managed to create a scenario where the GPU was reading the texture data with such low latency, and the CPU was being so lazy in evicting pixels from the write-combine buffer, that the splash screens were missing blocks of colour for a few frames. If there's no D3D12 function for this, maybe you're just supposed to be aware of the HW memory model and flush it yourself like I did?? If so, the easiest way to do it on x86(-64) is to use any instruction with the LOCK prefix, which on Microsoft's compiler pretty much means to use any of the Interlocked* family of functions, or the MemoryBarrier macro. Alternatively, you could use the now standard C++ atomic_thread_fence(memory_order_release) function. That sounds much more sane, actually... [edit] Actually, on x86, atomic_thread_fence(memory_order_release) doesn't actually generate any instructions (it's only a compile-time directive)... so you'd actually need to use atomic_thread_fence(memory_order_seq_cst)... even though that's a stronger fence in theory than what is needed.

     When you create a heap or a committed resource, you need to specify the CPU page property in the heap properties struct (the options are: unknown, not available, write combine, or write back).
     https://msdn.microsoft.com/en-us/library/windows/desktop/dn770353(v=vs.85).aspx
     https://msdn.microsoft.com/en-us/library/windows/desktop/dn770373(v=vs.85).aspx
     https://msdn.microsoft.com/en-us/library/windows/desktop/dn899178(v=vs.85).aspx
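     A minimal sketch of where that option lives, assuming the release-API names (variable names are mine):

        D3D12_HEAP_PROPERTIES heapProps = {};
        heapProps.Type                 = D3D12_HEAP_TYPE_CUSTOM;     // explicit page property is only valid on custom heaps;
                                                                     // the standard DEFAULT/UPLOAD/READBACK types must leave it UNKNOWN
        heapProps.CPUPageProperty      = D3D12_CPU_PAGE_PROPERTY_WRITE_COMBINE;
        heapProps.MemoryPoolPreference = D3D12_MEMORY_POOL_L0;       // system memory
        // Other page properties: _UNKNOWN, _NOT_AVAILABLE, _WRITE_BACK.
        // Pass heapProps to CreateCommittedResource / CreateHeap as usual.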
  6. For buffers you can do something like this...

    {
        ID3D12Device* d3dDevice;
        ID3D12GraphicsCommandList* d3dCommandList;
        ID3D12Resource* d3dBuffer = NULL;
        ID3D12Resource* d3dBufferUpload = NULL;
        uint32 dataSize = stride * count;
        HRESULT hr = E_FAIL;

        hr = d3dDevice->CreateCommittedResource(&CD3D12_HEAP_PROPERTIES(D3D12_HEAP_TYPE_DEFAULT),
                                                D3D12_HEAP_MISC_NONE,
                                                &CD3D12_RESOURCE_DESC::Buffer(dataSize),
                                                D3D12_RESOURCE_USAGE_INITIAL, // D3D12_RESOURCE_USAGE_COPY_DEST
                                                nullptr,
                                                __uuidof(ID3D12Resource), (void**)&d3dBuffer);

        // schedule a copy to get data into the buffer
        hr = d3dDevice->CreateCommittedResource(&CD3D12_HEAP_PROPERTIES(D3D12_HEAP_TYPE_UPLOAD),
                                                D3D12_HEAP_MISC_NONE,
                                                &CD3D12_RESOURCE_DESC::Buffer(dataSize),
                                                D3D12_RESOURCE_USAGE_GENERIC_READ, // D3D12_RESOURCE_USAGE_COPY_DEST
                                                nullptr,
                                                __uuidof(ID3D12Resource), (void**)&d3dBufferUpload);

        // copy data to the intermediate upload heap and then schedule a copy
        // from the upload heap to the default buffer
        D3D12_SUBRESOURCE_DATA bufResource = {};
        bufResource.pData = initData;
        bufResource.RowPitch = dataSize;
        bufResource.SlicePitch = bufResource.RowPitch;

        SetResourceBarrier(d3dCommandList, d3dBuffer, D3D12_RESOURCE_USAGE_INITIAL, D3D12_RESOURCE_USAGE_COPY_DEST);
        UpdateSubresources<1>(d3dCommandList, d3dBuffer, d3dBufferUpload, 0, 0, 1, &bufResource);
        SetResourceBarrier(d3dCommandList, d3dBuffer, D3D12_RESOURCE_USAGE_COPY_DEST, D3D12_RESOURCE_USAGE_GENERIC_READ);
    }

    Where SetResourceBarrier is

    void SetResourceBarrier(ID3D12GraphicsCommandList* d3dCommandList, ID3D12Resource* resource,
                            UINT StateBefore, UINT StateAfter,
                            D3D12_RESOURCE_TRANSITION_BARRIER_FLAGS flags = D3D12_RESOURCE_TRANSITION_BARRIER_NONE)
    {
        D3D12_RESOURCE_BARRIER_DESC barrierDesc = {};
        barrierDesc.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
        barrierDesc.Transition.pResource = resource;
        barrierDesc.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
        barrierDesc.Transition.StateBefore = StateBefore;
        barrierDesc.Transition.StateAfter = StateAfter;
        barrierDesc.Transition.Flags = flags;
        d3dCommandList->ResourceBarrier(1, &barrierDesc);
    }

    UpdateSubresources<1>(...) is defined in d3d12.h. At some point you may want to create your default resources in a particular heap instead of using CreateCommittedResource, but that's an optimization.
  7. In D3D12 you have 3 memory pools (heap types) for resources:

     Default: same as Default in D3D11; lives in GPU memory.
     Upload: lives in main RAM but is visible to the GPU for reading and the CPU for writing, so it is the replacement for Dynamic in D3D11, but you should also use it for uploading data the way you used Staging in D3D11.
     Readback: for copying data from the GPU back to the CPU (for example to take a screenshot). It's slow!

     So, if you want to create a texture, index buffer or vertex buffer that lives in GPU memory, you need to upload the data to an Upload buffer first, then copy it to the Default buffer.
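     As an illustration of the Readback pool (my own sketch using release-API names; d3dDevice and dataSize are assumed, as in the buffer example above):

        // Create a readback buffer the GPU can copy into and the CPU can map.
        D3D12_HEAP_PROPERTIES readbackHeap = {};
        readbackHeap.Type = D3D12_HEAP_TYPE_READBACK;

        D3D12_RESOURCE_DESC bufDesc = {};
        bufDesc.Dimension        = D3D12_RESOURCE_DIMENSION_BUFFER;
        bufDesc.Width            = dataSize;
        bufDesc.Height           = 1;
        bufDesc.DepthOrArraySize = 1;
        bufDesc.MipLevels        = 1;
        bufDesc.SampleDesc.Count = 1;
        bufDesc.Layout           = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;

        ID3D12Resource* readbackBuffer = nullptr;
        d3dDevice->CreateCommittedResource(&readbackHeap, D3D12_HEAP_FLAG_NONE, &bufDesc,
                                           D3D12_RESOURCE_STATE_COPY_DEST, nullptr,
                                           __uuidof(ID3D12Resource), (void**)&readbackBuffer);

        // Record a copy from the GPU resource, execute, wait on a fence, then:
        void* mapped = nullptr;
        D3D12_RANGE readRange = { 0, dataSize };
        readbackBuffer->Map(0, &readRange, &mapped);   // the CPU reads the copied data here
        readbackBuffer->Unmap(0, nullptr);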
  8.   Hold on, what happens if you have more than one command queue?   You can have more than one command queue, for copy or compute for example, but I guess you should only have one for graphics, or at least only one for that swap chain. Also, I guess this is because the resource state needs to be validated when the present happens, and to execute anything you need a queue, not the device.
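     For example, extra queues are created the same way as the graphics one, just with a different type (my sketch, release-API names, d3dDevice assumed):

        D3D12_COMMAND_QUEUE_DESC copyDesc = {};
        copyDesc.Type = D3D12_COMMAND_LIST_TYPE_COPY;       // async copy/DMA queue

        D3D12_COMMAND_QUEUE_DESC computeDesc = {};
        computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // async compute queue

        ID3D12CommandQueue* copyQueue = nullptr;
        ID3D12CommandQueue* computeQueue = nullptr;
        d3dDevice->CreateCommandQueue(&copyDesc,    __uuidof(ID3D12CommandQueue), (void**)&copyQueue);
        d3dDevice->CreateCommandQueue(&computeDesc, __uuidof(ID3D12CommandQueue), (void**)&computeQueue);
        // The swap chain still only gets the DIRECT (graphics) queue; work submitted
        // to these extra queues is synchronized against it with ID3D12Fence objects.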
  9.   By any chance, did you pass an ID3D12CommandQueue as the first parameter for the swap chain creation instead of a device? Like...

        ID3D12CommandQueue* d3dQueue = ...;
        HRESULT hr = dxgiFactory->CreateSwapChain(d3dQueue, &descSwapChain, &dxgiSwapChain);

     Because in the new D3D12 SDK, the first parameter should be an ID3D12CommandQueue instead of an ID3D12Device. It took me a couple of days to find out why I got that weird bug (with no warning or error during the swap chain creation), but once I fixed that, everything started to work fine :).
  10.   AMD GCN 1.0 and beyond (no VLIW4/5 GPUs); Intel Haswell and beyond (no Sandy/Ivy Bridge GPUs); NVIDIA Kepler and beyond (support for Fermi GPUs is pending a future driver release). Hybrid notebook systems are not currently supported (again, support is pending a future driver release).     Just to clarify the hybrid graphics issue: it can happen on desktop too if you have your integrated GPU enabled. So if you have, for example, an Intel i5/i7 with an HD 4600 and a GeForce 770, you can only use the Intel GPU to create a D3D12 device; if you try with the discrete GPU it will fail. So in order to use your discrete GPU, you need to disable the integrated GPU first (from the BIOS, for example). The problem is that on laptops that may be impossible to do. I have a 980M and I can only use the HD 4600 that comes with the CPU :P
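      A minimal sketch of how you would see this in practice, enumerating adapters and attempting to create a device on each (my illustration; the variable names are assumptions):

        #include <dxgi1_4.h>
        #include <d3d12.h>
        #include <cstdio>

        int main()
        {
            IDXGIFactory4* factory = nullptr;
            CreateDXGIFactory1(__uuidof(IDXGIFactory4), (void**)&factory);

            IDXGIAdapter1* adapter = nullptr;
            for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
            {
                DXGI_ADAPTER_DESC1 desc = {};
                adapter->GetDesc1(&desc);

                ID3D12Device* device = nullptr;
                HRESULT hr = D3D12CreateDevice(adapter, D3D_FEATURE_LEVEL_11_0,
                                               __uuidof(ID3D12Device), (void**)&device);
                // On the hybrid setups described above, this succeeds for the integrated
                // adapter but fails for the discrete one with those early drivers.
                wprintf(L"%s: %s\n", desc.Description,
                        SUCCEEDED(hr) ? L"D3D12 device created" : L"D3D12 device creation failed");
                if (device) device->Release();
                adapter->Release();
            }
            factory->Release();
            return 0;
        }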
  11. Wouldn't that actually justify the purpose of Vulkan?  One cross-platform API that will work regardless of the OS, more or less the same purpose as with OpenGL.  This would be in contrast to it simply being intended to fill the platform gap, providing a high performance 3D graphics API on platforms that don't have a native one.  I'd be highly disappointed if the latter purpose were all that the Vulkan designers ever hoped to achieve.   The problem is that Google is the one that controls Android, and it is not an open environment for the user the way Windows is (at least for most users), in the sense that you can't install drivers like you do on Windows: the drivers come with the device and its updates (which, to make things worse, are in most cases under the control of the carriers).   Sure, if you install CyanogenMod or some other custom ROM you can do whatever you want, but normal users are stuck with what comes with the device, which means that if Google decides not to implement Vulkan, you can't use Vulkan on Android and you are forced to use their API (so it's worse than MS with DX vs OpenGL, since on Windows you can at least always install a driver that implements the latest OpenGL).   The thing is, these days Google is the new MS and Android the new Windows: they have the biggest portion of the market and they are in a position where they can do whatever they want. That is the worst-case scenario; I don't think Google will do this, but it is a possibility that worries me.   But to be honest, I don't care if I have to implement one or two more APIs, since we already support a bunch (DX11, OpenGL 3.x, 4.x, ES 2.x, ES 3.x, and we have an early implementation for DX12 and we want to add support for PS4 as well). I prefer having to support multiple strong and solid APIs over one bad one (OpenGL, I'm looking at you).    Vulkan seems great but I still have reservations. The good thing is that it's just like D3D12, so porting should be very easy (the only thing I need now is an HLSL-to-SPIR-V compiler).