ChuckNovice

Member
  • Content count

    51
  • Joined

  • Last visited

  • Days Won

    1

ChuckNovice last won the day on August 7

ChuckNovice had the most liked content!

Community Reputation

126 Neutral

About ChuckNovice

  • Rank
    Member

Personal Information

  • Role
    Programmer
  • Interests
    Programming

Recent Profile Visitors

1431 profile views
  1. ChuckNovice

    what is Texture Streaming?

    I don't think it's anything more than the concept of loading textures on demand as you navigate your world and discover new models, terrain, or whatever else needs to be loaded. "Asset streaming" is a broader term that isn't specific to textures.
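    For example, the simplest form is just a distance check against the camera that queues the missing textures, closest first (a purely hypothetical sketch; none of these types come from a real engine):

        #include <cmath>
        #include <cstdint>
        #include <queue>
        #include <vector>

        // Hypothetical scene object: a world position plus a texture that may not be resident yet.
        struct SceneObject { float x, y, z; uint32_t textureId; bool textureResident; };

        // Requests are ordered so the closest (highest priority) texture gets loaded first.
        struct LoadRequest
        {
            float priority;
            uint32_t textureId;
            bool operator<(const LoadRequest& other) const { return priority < other.priority; }
        };

        void QueueTextureLoads(const std::vector<SceneObject>& objects,
                               float camX, float camY, float camZ,
                               float streamingRadius,
                               std::priority_queue<LoadRequest>& outRequests)
        {
            for (const SceneObject& obj : objects)
            {
                if (obj.textureResident)
                    continue;

                const float dx = obj.x - camX, dy = obj.y - camY, dz = obj.z - camZ;
                const float distance = std::sqrt(dx * dx + dy * dy + dz * dz);

                // Only stream what the camera is close enough to notice; closer objects first.
                if (distance < streamingRadius)
                    outRequests.push({ streamingRadius - distance, obj.textureId });
            }
        }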
  2. ChuckNovice

    DirectX 12 Root Constant Buffer Problem

    ajmiles is right, you can use the Visual Studio graphics debugger to get a better idea of what happened with your root parameter. Here's a screenshot of where to look. I'd suggest playing around with that debugger a bit and getting used to all its features, especially if you're learning DX12; it will save you a lot of time in the future. Otherwise we will need much more than the pseudo-code to help you figure out what went wrong.
  3. ChuckNovice

    CubeMap Conversion

    Hello,

    Back in my old DX11 engine I wrote a little tool to do similar things because, as you've probably also noticed, most cubemaps that you can download or buy on the internet come in a format that is not convenient for a GPU. I also couldn't find any efficient tool that could do such a conversion without painful limitations; for example, most tools were old and would just crash as soon as you tried to convert an image above a certain resolution.

    All I did was write a simple compute shader (my engine already had everything needed to just write a shader and get going). The compute shader was compiled on the fly before launching a conversion, with the proper preprocessor defines to make it execute the right code path for the type of conversion I was about to do. Compute shaders are very efficient for this, in my opinion. For example, the compute shader could take a 15000x8000 panoramic image as input and project it onto the output, a 2048x2048 texture array consisting of the 6 faces of a cubemap, using very simple math. After that process, my converter staged the texture from the GPU to the CPU and saved it in DDS format, in the pixel format that I wanted, using my DDS library. Surprisingly, even for such huge textures the compute shader was able to convert all of that in about 10 ms, even on HDR images with 16/32 bits per channel. The GPU also makes it easy to convert from or to any format (8bpp / 32bpp / 64bpp / 128bpp) since you manipulate normalized pixel values. I can hardly think of a more efficient way to do such a conversion.

    My source images weren't unwrapped cubemaps like in your original post but rather panoramic / equirectangular ones (a full 360-degree panorama stored in a single rectangular image). The math should still be simple for your scenario, though. Here's a quick sample of the panoramic -> 6 faces part of my old compute shader:

        //-----------------------------------------------------------------------
        // Convert from equilateral to cubemap.
        //-----------------------------------------------------------------------
        #if WARPING_TECHNIQUE == EQUILATERAL_TO_CUBEMAP

        float3 normal;

        // Find the normal of the current output pixel in the cubemap.
        // dispatchThreadID.z selects which of the 6 faces is being written.
        [branch]
        if (dispatchThreadID.z == 0)
        {
            normal = normalize(float3(0.5f, (1.0f - sampleCoordinates.y) - 0.5f, (1.0f - sampleCoordinates.x) - 0.5f));
        }
        else if (dispatchThreadID.z == 1)
        {
            normal = normalize(float3(-0.5f, (1.0f - sampleCoordinates.y) - 0.5f, sampleCoordinates.x - 0.5f));
        }
        else if (dispatchThreadID.z == 2)
        {
            normal = normalize(float3(sampleCoordinates.x - 0.5f, 0.5f, sampleCoordinates.y - 0.5f));
        }
        else if (dispatchThreadID.z == 3)
        {
            normal = normalize(float3(sampleCoordinates.x - 0.5f, -0.5f, (1.0f - sampleCoordinates.y) - 0.5f));
        }
        else if (dispatchThreadID.z == 4)
        {
            normal = normalize(float3(sampleCoordinates.x - 0.5f, (1.0f - sampleCoordinates.y) - 0.5f, 0.5f));
        }
        else if (dispatchThreadID.z == 5)
        {
            normal = normalize(float3((1.0f - sampleCoordinates.x) - 0.5f, (1.0f - sampleCoordinates.y) - 0.5f, -0.5f));
        }

        // Calculate the latitude from the y value.
        float latitude = acos(normal.y);

        // Create two 3D vectors to find out the longitude.
        float3 reference = float3(1.0f, 0.0f, 0.0f);
        float3 longitudeNormal;

        // Pointing directly up or down.
        if (normal.x == 0.0f && normal.z == 0.0f)
        {
            longitudeNormal = normalize(float3(1.0f, 0.0f, 0.0f));
        }
        else
        {
            longitudeNormal = normalize(float3(normal.x, 0.0f, normal.z));
        }

        // Calculate the edge for the law of cosines.
        float arcDistance = distance(longitudeNormal, reference);

        // Calculate the longitude.
        float longitude = acos((2.0f - pow(arcDistance, 2.0f)) / 2.0f);
        if (normal.z < 0.0f)
        {
            longitude = PI2 - longitude;
        }

        // Get the sampling coordinates from the gathered angles.
        sampleCoordinates.x = longitude / PI2;
        sampleCoordinates.y = latitude / PI;

        #endif

    Also, assuming that the texture we want to save as DDS has no mipmaps, it only took a single memcpy to grab all the image data once each face was converted into its own individual texture. So that's one way to make it more efficient, and it would also let you implement more types of cubemap conversions that involve complex warping in the future.
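    That GPU -> CPU staging step can look roughly like this in D3D11 (a sketch for illustration rather than my original tool's code; it assumes the converted face already lives in faceTexture and is neither multisampled nor mipmapped):

        #include <d3d11.h>
        #include <cstring>
        #include <vector>

        // Copies the converted face into a CPU-readable staging texture and reads the pixels back.
        std::vector<unsigned char> ReadBackTexture(ID3D11Device* device,
                                                   ID3D11DeviceContext* context,
                                                   ID3D11Texture2D* faceTexture,
                                                   UINT bytesPerPixel)
        {
            // Describe a staging copy of the texture that the CPU is allowed to map.
            D3D11_TEXTURE2D_DESC desc = {};
            faceTexture->GetDesc(&desc);
            desc.Usage = D3D11_USAGE_STAGING;
            desc.BindFlags = 0;
            desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
            desc.MiscFlags = 0;

            ID3D11Texture2D* staging = nullptr;
            if (FAILED(device->CreateTexture2D(&desc, nullptr, &staging)))
                return {};

            context->CopyResource(staging, faceTexture);

            // Map and copy the pixels out row by row. RowPitch can be larger than
            // width * bytesPerPixel; when they match, one memcpy of the whole image works too.
            std::vector<unsigned char> pixels(desc.Width * desc.Height * bytesPerPixel);
            D3D11_MAPPED_SUBRESOURCE mapped = {};
            if (SUCCEEDED(context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped)))
            {
                for (UINT row = 0; row < desc.Height; ++row)
                {
                    std::memcpy(pixels.data() + row * desc.Width * bytesPerPixel,
                                static_cast<const unsigned char*>(mapped.pData) + row * mapped.RowPitch,
                                desc.Width * bytesPerPixel);
                }
                context->Unmap(staging, 0);
            }

            staging->Release();
            return pixels;
        }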
  4. I never used their library, so I took a quick look at https://github.com/Microsoft/DirectXTK/blob/master/Src/DDSTextureLoader.cpp The loader does consider cube maps if the flags are properly set in the DDS file's header. You don't seem to be the one responsible for specifying whether it's a cube map or not:

        case D3D11_RESOURCE_DIMENSION_TEXTURE2D:
            if (d3d10ext->miscFlag & D3D11_RESOURCE_MISC_TEXTURECUBE)
            {
                arraySize *= 6;
                isCubeMap = true;
            }
            depth = 1;
            break;

    The shader resource view is later created with consideration for D3D11_SRV_DIMENSION_TEXTURECUBE by using that same boolean:

        if (isCubeMap)
        {
            if (arraySize > 6)
            {
                SRVDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURECUBEARRAY;
                SRVDesc.TextureCubeArray.MipLevels = (!mipCount) ? -1 : desc.MipLevels;

                // Earlier we set arraySize to (NumCubes * 6)
                SRVDesc.TextureCubeArray.NumCubes = static_cast<UINT>(arraySize / 6);
            }
            else
            {
                SRVDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURECUBE;
                SRVDesc.TextureCube.MipLevels = (!mipCount) ? -1 : desc.MipLevels;
            }
        }

    So far the problem seems to be your DDS file. Perhaps it is saved as a plain array of 2D textures and is missing the few flags needed for it to be treated as a cube map? We could only check for sure if you shared the DDS file.
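    In the meantime, here's a rough way to check those flags yourself (a sketch for illustration, not DirectXTK code; the struct layouts follow the documented DDS file format):

        #include <cstdint>
        #include <cstdio>

        struct DDSPixelFormat
        {
            uint32_t size, flags, fourCC, rgbBitCount;
            uint32_t rBitMask, gBitMask, bBitMask, aBitMask;
        };

        struct DDSHeader
        {
            uint32_t size, flags, height, width, pitchOrLinearSize, depth, mipMapCount;
            uint32_t reserved1[11];
            DDSPixelFormat ddspf;
            uint32_t caps, caps2, caps3, caps4, reserved2;
        };

        struct DDSHeaderDXT10
        {
            uint32_t dxgiFormat, resourceDimension, miscFlag, arraySize, miscFlags2;
        };

        // Returns true when the DDS file is flagged as a cube map, either through the legacy
        // caps2 bit or through the DX10 extension header's miscFlag that the loader checks.
        bool IsDDSCubeMap(const char* path)
        {
            const uint32_t DDS_MAGIC            = 0x20534444; // "DDS "
            const uint32_t DDPF_FOURCC          = 0x4;
            const uint32_t FOURCC_DX10          = 0x30315844; // "DX10"
            const uint32_t DDSCAPS2_CUBEMAP     = 0x200;
            const uint32_t DDS_MISC_TEXTURECUBE = 0x4;        // D3D11_RESOURCE_MISC_TEXTURECUBE

            FILE* file = std::fopen(path, "rb");
            if (!file)
                return false;

            uint32_t magic = 0;
            DDSHeader header = {};
            bool isCube = false;

            if (std::fread(&magic, sizeof(magic), 1, file) == 1 && magic == DDS_MAGIC &&
                std::fread(&header, sizeof(header), 1, file) == 1)
            {
                if ((header.ddspf.flags & DDPF_FOURCC) && header.ddspf.fourCC == FOURCC_DX10)
                {
                    // Extended header: the cube map information lives in miscFlag.
                    DDSHeaderDXT10 dx10 = {};
                    if (std::fread(&dx10, sizeof(dx10), 1, file) == 1)
                        isCube = (dx10.miscFlag & DDS_MISC_TEXTURECUBE) != 0;
                }
                else
                {
                    // Legacy header: the cube map information lives in caps2.
                    isCube = (header.caps2 & DDSCAPS2_CUBEMAP) != 0;
                }
            }

            std::fclose(file);
            return isCube;
        }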
  5. You will indeed need to craft a diffuse / normal texture at the very minimum, and perhaps also a specular / roughness map. For the rest it depends on how far you want to push it. Subsurface scattering is usually a nice effect to consider for human flesh.
  6. Hello, from what I understand you use that "glow map" specifically for your bloom effect. This pass can actually be folded into the normal rendering process instead of spending bandwidth on writing a whole screen of pixels to another buffer. In my project the scene color buffer is HDR (R16G16B16A16); I'm pretty sure you already have that if you're going PBR. It is basically the resulting scene color after lights / shadows have been calculated.

    To later perform the bloom effect, I downsample that HDR map, keeping only the pixels whose luminance is above a certain threshold and discarding the rest. That threshold works the same way as the one you see in the bloom effect of common engines (UE4 / Unity). The downsampled map is then blurred and finally merged back into the scene at the beginning of my post-processing stack.

    So basically the same map is used until we start working with the downsampled version, instead of outputting to two different full-resolution buffers when we already have the information in the original one. You can render your lasers on the same map as the rest of your scene this way, as long as you give them enough luminance to be grabbed by the bloom threshold.
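    For example, the downsampled bright-pass target could be created roughly like this in D3D11 (a sketch for illustration; the half-resolution sizing and the HDR format are just the ones mentioned above, and the luminance threshold itself is applied by the downsample shader that renders into it):

        #include <d3d11.h>

        // Creates the half-resolution HDR target that the bright-pass / downsample shader writes
        // into; it is later read as a shader resource by the blur and the final merge passes.
        bool CreateBloomTarget(ID3D11Device* device, UINT sceneWidth, UINT sceneHeight,
                               ID3D11Texture2D** outTexture,
                               ID3D11RenderTargetView** outRTV,
                               ID3D11ShaderResourceView** outSRV)
        {
            D3D11_TEXTURE2D_DESC desc = {};
            desc.Width            = sceneWidth / 2;                 // downsampled version of the scene
            desc.Height           = sceneHeight / 2;
            desc.MipLevels        = 1;
            desc.ArraySize        = 1;
            desc.Format           = DXGI_FORMAT_R16G16B16A16_FLOAT; // same HDR format as the scene buffer
            desc.SampleDesc.Count = 1;
            desc.Usage            = D3D11_USAGE_DEFAULT;
            desc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

            if (FAILED(device->CreateTexture2D(&desc, nullptr, outTexture)))
                return false;
            if (FAILED(device->CreateRenderTargetView(*outTexture, nullptr, outRTV)))
                return false;
            return SUCCEEDED(device->CreateShaderResourceView(*outTexture, nullptr, outSRV));
        }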
  7. ChuckNovice

    CBVs in Descriptor Heap?

    I rewrote big chunks of code in that area, did some refactoring, double-checked my stuff, and it's now working. I most likely had the root signature of a previous pass bound at that point, which would match what you said. On top of that, my call to CreateConstantBufferView had a wrong BufferLocation. Thanks for the answer, I wanted to be sure before spending a few hours on this.

    Thanks, the usage of root / table is clearer with that explanation. While refactoring my view manager I kept your post in mind and made it possible to specify, at creation time, whether any view should be used in a table or directly in the root. Both ways return an object that encapsulates the necessary binding information of the view in an IConstantBufferView / IShaderResourceView / IUnorderedAccessView, like the DX11 way. I can then further validate that information against the root signature when binding if I want. To bind a descriptor table I force the programmer to pre-create a "ResourceBindingState" by specifying an array of IResourceView (IConstantBufferView / IShaderResourceView / IUnorderedAccessView). I use that object to know which views need to be copied contiguously into the shader-visible heap, and to store information on how to bind it later (GPU pointer and such). I can now concentrate on finding a solution for the heap fragmentation, since I want to make it possible to keep my ResourceBindingState alive as long as the views don't change. Really, thanks for your contribution on the subject.
  8. ChuckNovice

    CBVs in Descriptor Heap?

    With the previous version of the code that gets this error, my root parameter 0 is in fact a DescriptorRange of CBVs and not a root descriptor. Are you 100% sure that this is what the error means? The error only mentions a "parameter", which could still be either a root descriptor or a descriptor table, since both of them allow specifying that they're for a CBV. I will definitely investigate this further in the meantime, in case a brain fart made me not notice it.
  9. I finally started tackling proper management of the views in my DX12 project. So far I implemented a ring buffer to manage a big offline and online descriptor heap, to which I copy descriptors contiguously as required by my resource bindings (I already have plans to improve on the ring buffer idea to manage fragmentation a little better, as I don't want to copy descriptors on every draw call if they are already copied contiguously from a previous call). This concept has been well explained by @Hodgman in an earlier comment.

    For SRVs / UAVs I have no problem so far; everything works just as I understood the concept. For CBVs I apparently cannot reference them with a descriptor table in the root signature. The debug layer reports an error saying that a descriptor table is invalid for a CBV root signature parameter, even though my root signature was indeed created with a CBV descriptor table at this location, so until this question is clear to me I moved all my CBVs directly into root descriptors, which seems to be the only option left. The reported error is this one:

        D3D12 ERROR: CGraphicsCommandList::SetGraphicsRootDescriptorTable: The currently set root signature declares parameter [0] with type CBV, so it is invalid to set a descriptor table here. [ EXECUTION ERROR #708: SET_DESCRIPTOR_TABLE_INVALID]

    Now, as I noticed, binding a CBV as a root descriptor doesn't even require the CBV to be in a descriptor heap, and you don't even need to call ID3D12Device::CreateConstantBufferView at all. You simply pass the GPU virtual address of the resource to ID3D12GraphicsCommandList::SetGraphicsRootConstantBufferView and it all works without complaining.

    I'm a bit confused because, from how I understand it so far, a descriptor heap should only be used when you are going to work with descriptor tables. Why would the type D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV even exist and allow CBVs in it if CBVs can only be referenced as root descriptors? I'm also wondering why the ID3D12Device::CreateConstantBufferView call exists at all in that case. There are obviously a few details that I didn't catch. Could someone enlighten me on this? Thanks.
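    For reference, the two binding paths I'm comparing look roughly like this (a sketch rather than my actual code; it assumes the device, command list, constant buffer and a shader-visible descriptor heap already exist, and that root parameter 0 is declared accordingly in each case):

        #include <d3d12.h>

        // Path 1: root descriptor. No descriptor heap and no CreateConstantBufferView needed,
        // just the GPU virtual address of the buffer.
        void BindAsRootDescriptor(ID3D12GraphicsCommandList* commandList,
                                  ID3D12Resource* constantBuffer)
        {
            commandList->SetGraphicsRootConstantBufferView(0, constantBuffer->GetGPUVirtualAddress());
        }

        // Path 2: descriptor table. The CBV descriptor has to live in a shader-visible
        // CBV_SRV_UAV heap and the root parameter must be declared as a descriptor table.
        void BindThroughDescriptorTable(ID3D12Device* device,
                                        ID3D12GraphicsCommandList* commandList,
                                        ID3D12Resource* constantBuffer,
                                        ID3D12DescriptorHeap* descriptorHeap,
                                        UINT descriptorIndex,
                                        UINT sizeInBytes) // must be a multiple of 256
        {
            UINT increment = device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);

            // Write the CBV descriptor into the heap.
            D3D12_CPU_DESCRIPTOR_HANDLE cpuHandle = descriptorHeap->GetCPUDescriptorHandleForHeapStart();
            cpuHandle.ptr += static_cast<SIZE_T>(descriptorIndex) * increment;

            D3D12_CONSTANT_BUFFER_VIEW_DESC cbvDesc = {};
            cbvDesc.BufferLocation = constantBuffer->GetGPUVirtualAddress();
            cbvDesc.SizeInBytes    = sizeInBytes;
            device->CreateConstantBufferView(&cbvDesc, cpuHandle);

            // Point root parameter 0 at that descriptor.
            D3D12_GPU_DESCRIPTOR_HANDLE gpuHandle = descriptorHeap->GetGPUDescriptorHandleForHeapStart();
            gpuHandle.ptr += static_cast<UINT64>(descriptorIndex) * increment;

            ID3D12DescriptorHeap* heaps[] = { descriptorHeap };
            commandList->SetDescriptorHeaps(1, heaps);
            commandList->SetGraphicsRootDescriptorTable(0, gpuHandle);
        }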
  10. I may be wrong, but I think they accounted for the fact that the smallest possible unit is not necessarily a pixel or a uniform grid. Take for example MSAA and the way the sample positions are distributed inside a pixel; from how I understand it, each of those samples would be a "fragment".
  11. On this line:

        g_d3dDeviceContext->VSGetConstantBuffers(0, NumConstantBuffers, g_d3dConstantBuffers);

    you are basically overwriting all the buffers that you previously created yourself here:

        hr = g_d3dDevice->CreateBuffer(&cBufferDesc, nullptr, &g_d3dConstantBuffers[bufferID]);

    At first glance that would be the cause of the error. Are you sure you didn't intend to use VSSetConstantBuffers instead of VSGetConstantBuffers?
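    If that is the case, the corrected call would simply be:

        // Bind the constant buffers to the vertex shader stage rather than reading back
        // whatever is currently bound into the g_d3dConstantBuffers array.
        g_d3dDeviceContext->VSSetConstantBuffers(0, NumConstantBuffers, g_d3dConstantBuffers);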
  12. ChuckNovice

    Failing to create shader - help

    To leverage the 11.3 features you actually have to create your 11.0 device as you currently do and then query the ID3D11Device3 interface on it. The sample at this link makes use of it: https://github.com/walbourn/directx-sdk-samples/blob/master/DXUT/Core/DXUT.cpp

        #ifdef USE_DIRECT3D11_3
            // Direct3D 11.3
            {
                ID3D11Device3* pd3d11Device3 = nullptr;
                hr = pd3d11Device->QueryInterface( IID_PPV_ARGS(&pd3d11Device3) );
                if (SUCCEEDED(hr) && pd3d11Device3)
                {
                    GetDXUTState().SetD3D11Device3(pd3d11Device3);

                    ID3D11DeviceContext3* pd3dImmediateContext3 = nullptr;
                    hr = pd3dImmediateContext->QueryInterface(IID_PPV_ARGS(&pd3dImmediateContext3));
                    if (SUCCEEDED(hr) && pd3dImmediateContext3)
                    {
                        GetDXUTState().SetD3D11DeviceContext3(pd3dImmediateContext3);
                    }
                }
            }
        #endif

    As you can see, pd3d11Device is the equivalent of your 11.0 device and the ID3D11Device3 interface is retrieved from it. ID3D11Device3 exposes all the new 11.3 methods that weren't there before. Here's the Microsoft reference for that interface: https://msdn.microsoft.com/en-us/library/windows/desktop/dn899218(v=vs.85).aspx

    Also, don't be too surprised if the website that you linked mentions that the card supports DX12 and some other feature levels when it doesn't. It's a relatively old card after all: https://forums.geforce.com/default/topic/1001222/nvidia-gtx-480-dx-12-/
  13. ChuckNovice

    Failing to create shader - help

    Hello,

    From this page: https://msdn.microsoft.com/en-us/library/windows/desktop/dn933277(v=vs.85).aspx

    "HLSL Shader Model 5.1, introduced with D3D12 and D3D11.3."

    You need to create your device against at least Direct3D 11.3 to use shader model 5.1; you are currently on feature level 11.0.
  14. ChuckNovice

    How to use DirectX Control Panel

    Hello,

    It does work on Windows 10 on my machine; I used it a few times to force the debug layer on. You see that Edit list button that you have focused in your screenshot? You must first click it and add at least one path to target your .exe. It is necessary to specify this path so you don't force the debug layer on everything DX related; people would forget it was on and lag big time when actually playing a game. Once you've added a path, everything will become available.
  15. Hello, I am also using SharpDX, but in the context of DX12. In my case I completely dropped native fullscreen support due to all the hassle I had with the swapchain changing state by itself without warning while there was still queued work using the swapchain's buffers (when minimizing the application, for example), plus a few more problems. However, this is how I initially did it:

    - Create the swapchain normally as you did. Make sure you give it the SwapChainFlags.AllowModeSwitch flag.
    - Call the factory's MakeWindowAssociation method.
    - Call mySwapChain.SetFullscreenState() to make it go fullscreen.
    - Resize the swapchain buffers to match the screen resolution.

    This resource should also help you: https://stackoverflow.com/questions/25369231/properly-handling-alt-enter-alt-tab-fullscreen-resolution Especially the parts that I found out the hard way were actually true.

    I can see few ways to get rid of the stretching other than going into exclusive mode. I'd first need to know how you expect it to look. Is a 1024x768 image centered in the middle of your 1920x1080 screen the end result you're looking for?
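    Roughly, the four steps above map to these native DXGI calls (a sketch, not the actual SharpDX code; SharpDX wraps these almost 1:1, and the window handle, buffer count and format are assumed to match what the swapchain was created with):

        #include <windows.h>
        #include <dxgi.h>

        void EnterFullscreen(IDXGIFactory* factory, IDXGISwapChain* swapChain, HWND hwnd,
                             UINT bufferCount, UINT screenWidth, UINT screenHeight, DXGI_FORMAT format)
        {
            // Let DXGI monitor the window (MakeWindowAssociation in SharpDX); passing 0 keeps
            // the default behaviour, including Alt+Enter handling.
            factory->MakeWindowAssociation(hwnd, 0);

            // Switch to exclusive fullscreen (SetFullscreenState in SharpDX).
            swapChain->SetFullscreenState(TRUE, nullptr);

            // Resize the buffers to the screen resolution; this relies on the swapchain having
            // been created with the AllowModeSwitch flag.
            swapChain->ResizeBuffers(bufferCount, screenWidth, screenHeight, format,
                                     DXGI_SWAP_CHAIN_FLAG_ALLOW_MODE_SWITCH);
        }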