About GuyWithBeard

Community Reputation: 1892 Excellent
  1. Hi, I want to add a volumetric light effect to my spotlights, and to do that I want to raymarch inside the spotlight's cone volume. Is there a handy way to distribute rays (basically get a list of origins and directions) evenly inside the cone? Say I start with 5 rays: the algorithm would give me one ray straight forward and four rays at the edge of the cone (up, down, left and right), but as I increase the ray count it would add more and more rays, essentially giving me a higher-detail cone volume. The algorithm does not have to be real-time, as I can precalculate the rays based on some quality setting at startup, but it would still be nice not to have to work them out by hand. Any tips?
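     One common approach (not from the thread, just a sketch): map a Fibonacci spiral onto the spherical cap subtended by the cone, which spreads any number of directions near-uniformly for any quality setting. The names below (`Vec3`, `distributeConeRays`) are illustrative, not from any particular engine or library.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Distribute 'count' unit direction vectors evenly inside a cone with the
// given half-angle (radians), opening along +Z. A Fibonacci spiral mapped
// onto the spherical cap gives a near-uniform spread for any ray count.
// Rotate the results into the spotlight's frame afterwards.
std::vector<Vec3> distributeConeRays(int count, float halfAngle)
{
    std::vector<Vec3> rays;
    rays.reserve(count);
    const float goldenAngle = 3.14159265f * (3.0f - std::sqrt(5.0f));
    const float zMin = std::cos(halfAngle); // z at the cap boundary
    for (int i = 0; i < count; ++i) {
        // Even spacing in z yields equal-area bands on the cap.
        float z = 1.0f - (1.0f - zMin) * (i + 0.5f) / count;
        float r = std::sqrt(std::max(0.0f, 1.0f - z * z));
        float phi = goldenAngle * i;
        rays.push_back({ r * std::cos(phi), r * std::sin(phi), z });
    }
    return rays;
}
```

     All rays share the spotlight's origin, so only the directions need to be generated; precalculating them at startup, as suggested above, works fine since the function is deterministic.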
  2. That green VR logo is the logo of the Finnish railway company. Just thought you should know
  3. When you are doing instancing your input layout might look something like this (from Frank Luna's DX11 book):

         const D3D11_INPUT_ELEMENT_DESC InputLayoutDesc::InstancedBasic32[8] =
         {
             { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT,    0, 0,  D3D11_INPUT_PER_VERTEX_DATA,   0 },
             { "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT,    0, 12, D3D11_INPUT_PER_VERTEX_DATA,   0 },
             { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,       0, 24, D3D11_INPUT_PER_VERTEX_DATA,   0 },
             { "WORLD",    0, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 0,  D3D11_INPUT_PER_INSTANCE_DATA, 1 },
             { "WORLD",    1, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 16, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
             { "WORLD",    2, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 32, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
             { "WORLD",    3, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 48, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
             { "COLOR",    0, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 64, D3D11_INPUT_PER_INSTANCE_DATA, 1 }
         };

     My question is: what semantic names can I give to the instanced data? Here Frank Luna uses "WORLD", but I don't see that mentioned on any HLSL semantic reference page. Also, what if I want to pass a view-projection matrix as part of the instanced data? Can I use "VIEWPROJ"? Does it matter? Does the runtime use the semantics at all? Cheers!
  4. They DO ignore them because they are not interested. However, it can still be a problem if a person writes to Random Game Developer LLC about this awesome game mechanic they just came up with and asks the developer to hire them as a designer. Later, if Random Game Developer LLC releases a game that happens to use a mechanic similar to the one the person wrote to them about, that person may get angry and write mean things on the internet about it. Most game developers just don't want to deal with these situations at all, so they would prefer that people not send them ideas. A quick Google search gave me this, for example: "The purpose of this policy is to avoid potential misunderstandings or disputes if products, services or features developed or published by Ubisoft might appear to be similar or identical to ideas that may have independently occurred to you."
  5. I'll just add that many game development companies actively discourage you from sending them game ideas. For legal reasons they cannot even open emails that look like they contain game ideas. The reason is that someone might try to take them to court for stealing their ideas if a game they are developing happens to contain ideas or mechanics sent to them from an outside person.
  6. DX11 Basic HLSL texture question

    Yeah, sorry if I was being unclear. You can definitely swap them around, just not on the HLSL side. You have to do it on the host side.
  7. DX11 Basic HLSL texture question

     The first parameter to PSSetShaderResources() and PSSetSamplers() is the start slot (if the number of resources to bind is one, the only slot) to bind to the pipeline. In your case, if you wanted tex0 to refer to textureB you would call "context->PSSetShaderResources(0, 1, &textureB);". And, to be more specific, AFAIK you can't "switch them around" on the HLSL side. When you write "Texture2D tex0 : register(t0);" you specify that "tex0" will refer to the texture register with index 0. It is then up to you to bind the correct SRVs to that register on the C++ side. You can change whatever you want to bind to that register by calling the *SetShaderResources() functions with different start slots.
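     For instance, the host-side "swap" could look roughly like this (a sketch, not tested code; srvA and srvB are hypothetical ID3D11ShaderResourceView pointers, not from the thread):

```cpp
// Suppose the shader declares:
//   Texture2D tex0 : register(t0);
//   Texture2D tex1 : register(t1);

// First draw: srvA lands in t0 (tex0), srvB in t1 (tex1).
ID3D11ShaderResourceView* views[2] = { srvA, srvB };
context->PSSetShaderResources(0, 2, views);

// Later: swap what the registers see without touching the HLSL.
ID3D11ShaderResourceView* swapped[2] = { srvB, srvA };
context->PSSetShaderResources(0, 2, swapped);
```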
  8. Absolutely try enabling the debug layer and check the Visual Studio output (or whatever you are using) for warnings and errors. I recently added DX12 support to my engine and the debug output helped me out a lot. If that does not help, run the app with RenderDoc and check that the vertices are transformed correctly.
  9. Typeless formats in Vulkan

     I know, I just desperately tried to find some Vulkan formats that would equal the ones I originally used. I guess I am just gonna have to try and see how it goes. As a fallback I can make it work like I am used to in DX11/12 and provide some custom logic to map the same view as an SRV on the Vulkan side. Thanks for your insights, and please let me know if you figure it out as part of your own Vulkan work.
  10. Typeless formats in Vulkan

    I apologize, you answered my question perfectly. I guess I just asked the wrong question. To give you a bit of background, I have a common API that wraps, among others, DX12 and Vulkan and I am wondering how to render to a texture and later use it as an SRV in a way that works for both APIs.
  11. Typeless formats in Vulkan

     Oh, that's interesting. Still, it does not really answer my question. One reason you might want a typeless format for a texture in DirectX is that you are planning to create two different views to the texture, each with their own format, essentially "casting" the texture data into the type of the view you are currently using. For example, if I have a depth buffer with the type VK_FORMAT_D24_UNORM_S8_UINT and I write the depth to it using a view of the same type, can I create another view for SRV use with the format VK_FORMAT_B8G8R8A8_UNORM to that same texture, and then simply only use the RGB components? Or do I have to pass the original view to the shader? I am only asking because AFAIK DX requires me to create two views with fully specified formats and the texture with a compatible typeless format, and I would benefit from being able to do the same on Vulkan (except for the typeless part, as that is not a thing in Vulkan).
  12. Typeless formats in Vulkan

     I am surprised no one seems to know this. Let me rephrase the question a bit. In Vulkan, if you want to render to a texture and then later use it as a shader input, what format should you use in the texture itself, and which formats should you use in the descriptors/views? For the sake of argument, let's say the texture is R8G8B8A8.
  13. In D3D, if you want to render depth information to a texture and later use that texture as input to a shader, you might create a texture with the format DXGI_FORMAT_R24G8_TYPELESS. You would then create two views to the texture, e.g. one depth-stencil view with format DXGI_FORMAT_D24_UNORM_S8_UINT for rendering the depth and one SRV with format DXGI_FORMAT_R24_UNORM_X8_TYPELESS for sampling from the texture. How would one go about doing this in Vulkan, since VkFormat does not seem to contain any typeless formats?
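     The D3D11 setup described above can be sketched roughly like this (a sketch only: device creation, error handling, width/height and the variable names depthTex/dsv/srv are assumed, not from the post):

```cpp
// Typeless texture that can back both a depth-stencil view and an SRV.
D3D11_TEXTURE2D_DESC texDesc = {};
texDesc.Width = width;
texDesc.Height = height;
texDesc.MipLevels = 1;
texDesc.ArraySize = 1;
texDesc.Format = DXGI_FORMAT_R24G8_TYPELESS;
texDesc.SampleDesc.Count = 1;
texDesc.Usage = D3D11_USAGE_DEFAULT;
texDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
device->CreateTexture2D(&texDesc, nullptr, &depthTex);

// Fully-typed view for writing depth...
D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
dsvDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
device->CreateDepthStencilView(depthTex, &dsvDesc, &dsv);

// ...and a second fully-typed view for sampling the depth bits.
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.Format = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MipLevels = 1;
device->CreateShaderResourceView(depthTex, &srvDesc, &srv);
```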
  14. Thanks guys, this is all really good stuff. Currently I am still working on wrapping the APIs (DX11, DX12 and Vulkan) under a common interface. DX11 and Vulkan are now both rendering my GUI, and the next piece of work is to get DX12 to that point. My plan is to rewrite large parts of the high-level renderer to make better use of the GPU, but leave other parts as-is for now, e.g. the GUI and debug rendering. It would be nice to go the route of allocating larger buffers and offsetting based on the frame, but for now I am using a pool, à la Ryan_001's suggestion, where I can acquire temporary buffers and command buffers. The buffers are still as small as they used to be, there are just more of them. This is probably not the most performant way, but it gets the job done. Regarding the "full stall", I actually had to implement something like that already for shutdown (i.e. you want to wait until all GPU work is done before destroying resources) and for swap chain recreation. In Vulkan this is easy, you can just do:

          void RenderDeviceVulkan::waitUntilDeviceIdle()
          {
              vkDeviceWaitIdle(mDevice);
          }

      However, I am a little confused about how to do that on DX12. This is what I have come up with but it has not been tested yet. What do you think?

          void RenderDevice12::waitUntilDeviceIdle()
          {
              mCommandQueue->Signal(mFullStallFence.Get(), ++mFullStallFenceValue);
              if (mFullStallFence->GetCompletedValue() < mFullStallFenceValue)
              {
                  HANDLE eventHandle = CreateEventEx(nullptr, nullptr, 0, EVENT_ALL_ACCESS);
                  mFullStallFence->SetEventOnCompletion(mFullStallFenceValue, eventHandle);
                  WaitForSingleObject(eventHandle, INFINITE);
                  CloseHandle(eventHandle);
              }
          }

      That would obviously only stall the one queue, but I think that might be enough for now. Is there an easier way to wait until the GPU has finished all work on DX12? Cheers!