WFP


About WFP

  • Rank: Member
  • Role: Business Development, Programmer, Technical Director
  • Interests: Programming
  • Twitter: @willp_tweets

  1. Thanks! I'd seen the MSDN page (https://docs.microsoft.com/en-us/windows/desktop/direct3d12/hardware-support) before, but it's great to have an additional resource to turn to for checking actual hardware support. That combined with Steam's hardware survey seems like a really handy combination. And yeah, I probably should've specified that by bindless I was referring more specifically to SRVs. I can see it being useful for CBVs and UAVs as well, but so far I haven't had issues working even within their low-tier limits.
  2. Thanks for the reply, that's a helpful way to think about it. I've actually been working on a way to remove the concept of a descriptor heap from the API entirely, and instead submit ResourceBinding structs (several at a time, as a list) and have metadata stored at root signature and PSO creation time sort out the right place for them to land. It keeps the API a little smaller/cleaner, and also lets me do some basic validation to ensure the root signature (pipeline layout for Vulkan) matches the incoming bindings; there's a rough sketch of the idea below. It seems like this is going to be an even better approach since it can also hide the underlying details of descriptor pools/sets. I'll keep at it and see how it goes. And thanks for the tip regarding the draw ID 😃
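     For what it's worth, a rough sketch of the shape I have in mind (all names are hypothetical, just to illustrate the idea, not actual engine code):

       #include <cstddef>
       #include <cstdint>

       enum class BindingType { ConstantBuffer, ShaderResource, UnorderedAccess, Sampler };

       struct ResourceBinding
       {
           BindingType type;     // what kind of view this is
           uint32_t    slot;     // register/binding index declared in the shader
           void*       resource; // API-agnostic handle, resolved by the backend
       };

       // Metadata captured at root signature / pipeline layout creation routes
       // each binding to the right descriptor table (D3D12) or descriptor set
       // (Vulkan), and validates slots against the layout along the way.
       void SubmitBindings(const ResourceBinding* bindings, size_t count);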
  3. Thank you, that is helpful! I'm fairly certain bindless is a safe bet for the other console, as well, since it's supposed to be Tegra X1-based, and bindless textures are specifically called out on this page: https://shield.nvidia.com/blog/nextgenx1.
  4. Main question: how are you all abstracting resource binding between D3D12 and Vulkan these days? The graphics backend for my engine was initially written in Direct3D 12, and the abstraction layer mimicked it closely, with the goal of being as thin as possible (many calls basically amount to passthroughs). I've recently started porting the backend to Vulkan, and the port has been very straightforward for the majority of the effort; the two largest differences are, of course, render target/framebuffer management and resource binding. Of those two, resource binding is where I have hang-ups about the best abstraction and implementation, and where I'm hoping for advice. I've got it working "well enough", which is to say I can bind resources and they get where they need to go, but the setup feels brittle, for lack of a better word, and I consider it very much a rough draft. I've read a few posts here on the topic, but many are a few years old at this point, and I'm sure best practices have evolved as people have become more accustomed to working with both APIs. For example, this post is almost three years old to the day: https://www.gamedev.net/forums/topic/678860-descriptor-binding-dx12-and-vulkan/

     Secondary question: mostly out of curiosity, how universal is support for bindless textures? More specifically, if the targets are PC and current-generation consoles, can a bindless model be fully adopted, or do any platforms in that set not support it?

     Last question: what are your thoughts on a root constant/push constant versus a draw ID vertex buffer? I implemented this kind of side-along vertex buffer long ago, in the engine's D3D11 days, so I could bind one large constant buffer and index into it with an instance offset at draw time, and I honestly hadn't revisited whether it's still necessary until I reviewed it while making notes for the Vulkan port. Basically, alongside the actual vertex buffer I bind, to a known slot, a draw ID vertex buffer that just increments 0-4095, and the index is accessed through the startInstanceLocation parameter of draw calls (sketched below). It works well enough and keeps the number of setConstantBufferView-type calls very low, but it seems a root/push constant would achieve the same outcome without binding the additional vertex buffer. Honestly, I should probably just profile and see, but I figured I'd ask for the general consensus while I was here. Thanks for your time!
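     For reference, a minimal sketch of the draw ID buffer setup described above (D3D11-era style; names illustrative):

       #include <cstdint>
       #include <vector>

       // Built once at startup and uploaded to an immutable vertex buffer that
       // is bound to a known input slot with a per-instance step rate of 1.
       std::vector<uint32_t> BuildDrawIds()
       {
           std::vector<uint32_t> ids(4096);
           for (uint32_t i = 0; i < ids.size(); ++i)
               ids[i] = i;
           return ids;
       }

       // At draw time, startInstanceLocation selects which ID the shader sees:
       //   context->DrawIndexedInstanced(indexCount, 1, 0, 0, /*StartInstanceLocation*/ drawId);
       // The shader then uses that ID to index into one large constant buffer.
       // A root/push constant would deliver the same uint32_t without the extra stream.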
  5. Copy pointer to aiScene

    I haven't read through the Assimp internals or documentation in a while, but from what you're describing, I'm guessing that your aiScene pointer becomes invalid when the Assimp::Importer goes out of scope. If that's the case, you'll need to store the Assimp::Importer somewhere until you're done using it.
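    For illustration, a minimal sketch of the lifetime issue and one fix (names illustrative, not your actual code):

      #include <assimp/Importer.hpp>
      #include <assimp/postprocess.h>
      #include <assimp/scene.h>

      // Returns a pointer that is already dangling: the aiScene is owned by
      // the Importer and is destroyed along with it.
      const aiScene* LoadDangling(const char* path)
      {
          Assimp::Importer importer;
          return importer.ReadFile(path, aiProcess_Triangulate);
      } // importer destroyed here -> the returned pointer is now invalid

      // Fix: keep the Importer alive for as long as the scene is in use.
      struct LoadedModel
      {
          Assimp::Importer importer;
          const aiScene*   scene = nullptr;

          bool Load(const char* path)
          {
              scene = importer.ReadFile(path, aiProcess_Triangulate);
              return scene != nullptr;
          }
      };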
  6. Try running it with the -srgb flag.
  7. The root signature is what controls visibility and register assignment in D3D12. In your specific case, for constant buffers, the D3D12_ROOT_DESCRIPTOR contains a ShaderRegister variable for you to fill out, which must match what you type in your shader code as cbuffer MyCB : register(b#). For example, if you set ShaderRegister to 5, then you would declare cbuffer MyCB : register(b5).

     The D3D12_ROOT_PARAMETER (of which the root descriptor is a unioned member) has a ShaderVisibility member that controls which stages of the pipeline the parameter is visible to. You can either set the visibility to ALL, or set individual stages and use duplicate entries (with different stages) in the root signature. See here: https://msdn.microsoft.com/en-us/library/windows/desktop/dn913202(v=vs.85).aspx#visibility

     In your case, you can give your shared constant buffers ALL visibility and set them once with SetGraphicsRootConstantBufferView, or give them duplicate entries with specific visibilities (VERTEX and PIXEL) and set each with a separate call to SetGraphicsRootConstantBufferView. The link above mentions this, but some hardware handles ALL less efficiently than individually set stages, while other hardware incurs no extra cost for ALL. Keep in mind the general best practices for root signatures when deciding on a route: keep the root signature as small as possible, with items that change more frequently toward the beginning. Root descriptors take 2 DWORDs each, so they can grow your root signature quickly if you're not mindful. A sketch of a matching root parameter follows below.

     More documentation that you might find helpful:
     https://msdn.microsoft.com/en-us/library/windows/desktop/dn859357(v=vs.85).aspx
     https://msdn.microsoft.com/en-us/library/windows/desktop/dn986747(v=vs.85).aspx
     https://msdn.microsoft.com/en-us/library/windows/desktop/dn879476(v=vs.85).aspx
     https://msdn.microsoft.com/en-us/library/windows/desktop/dn879482(v=vs.85).aspx
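     A hedged sketch of the b5 example (assumes an existing ID3D12Device* device; error handling omitted):

       D3D12_ROOT_PARAMETER param = {};
       param.ParameterType             = D3D12_ROOT_PARAMETER_TYPE_CBV;   // root CBV
       param.Descriptor.ShaderRegister = 5;                               // -> register(b5) in HLSL
       param.Descriptor.RegisterSpace  = 0;
       param.ShaderVisibility          = D3D12_SHADER_VISIBILITY_ALL;     // or duplicate per stage

       D3D12_ROOT_SIGNATURE_DESC desc = {};
       desc.NumParameters = 1;
       desc.pParameters   = &param;

       Microsoft::WRL::ComPtr<ID3DBlob> blob, error;
       Microsoft::WRL::ComPtr<ID3D12RootSignature> rootSignature;
       D3D12SerializeRootSignature(&desc, D3D_ROOT_SIGNATURE_VERSION_1, &blob, &error);
       device->CreateRootSignature(0, blob->GetBufferPointer(), blob->GetBufferSize(),
                                   IID_PPV_ARGS(&rootSignature));

       // Later, at draw time:
       // commandList->SetGraphicsRootConstantBufferView(0, constantBuffer->GetGPUVirtualAddress());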
  8. Yes to the first part; not exactly (but close) on the bulleted question. If you can swing it, you should consider using the same or similar allocators for the same command lists. For example, say you're doing deferred rendering and drawing some number of items. Your pass to fill the gbuffer may take (for example's sake) 200 calls to various functions on the command list, which are written to the allocator. Your shadow-map pass has to run for each shadow-casting light, so you'll have somewhere in the neighborhood of 200 * numLights entries in the command list, and therefore in that allocator. If in a subsequent frame you use the allocator that initially served the gbuffer pass for the shadow-map pass, that allocator will grow to the worst-case size, i.e., the size required by the shadow-map pass. If instead you keep that allocator designated for gbuffer work only, it will only grow to your worst-case gbuffer pass, which could save on your overall memory budget.

     To answer whether you need to know the size of the command list before calling CreateCommandList: not precisely, but it helps to have some idea of how the command list will be used so you have a ballpark sense of how it will affect the allocator bound to it.

     Keep them to a minimum, but don't dig yourself into a hole. You can reuse an existing command list to write into a different allocator than the one you created it with while the initial allocator's contents are being consumed by the GPU. You still must set and track your fence values to be sure the GPU is done with the allocator you're about to reset and reuse. You should create enough allocators to cover your maximum latency, but if it comes down to it, you must still be able to block and wait for the GPU to finish its work if all the command allocators are occupied. This is like the pool of allocators @galop1n mentioned in his post; a sketch of the fencing pattern follows below.
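     A hedged sketch of that fencing pattern (assumes the fence is signaled by the queue with a growing value after each submit; names illustrative):

       const UINT kFramesInFlight = 3;                     // max latency
       ID3D12CommandAllocator*    allocators[kFramesInFlight];
       ID3D12GraphicsCommandList* commandList;             // reused each frame
       ID3D12Fence*               fence;                   // signaled after each submit
       HANDLE                     fenceEvent;              // from CreateEvent
       UINT64                     submittedValue[kFramesInFlight] = {};

       void BeginFrame(UINT frameIndex)
       {
           const UINT i = frameIndex % kFramesInFlight;

           // Block only if the GPU hasn't finished the work recorded in this
           // allocator the last time around.
           if (fence->GetCompletedValue() < submittedValue[i])
           {
               fence->SetEventOnCompletion(submittedValue[i], fenceEvent);
               WaitForSingleObject(fenceEvent, INFINITE);
           }

           allocators[i]->Reset();                         // safe: the GPU is done with it
           commandList->Reset(allocators[i], nullptr);     // record into it again
       }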
  9. Any chance the working directory is being changed? Can you post your log / crash output here? Could you add a log statement before compiling the shaders (in the loadAndCompileShaderFile function) to print out the working directory?
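     Something like this would do it (C++17; the placement inside loadAndCompileShaderFile is just a suggestion):

       #include <filesystem>
       #include <iostream>

       void LogWorkingDirectory()
       {
           std::cout << "Working directory: "
                     << std::filesystem::current_path().string() << '\n';
       }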
  10. The "Sky" is too short!

    You could use them, or keep your sky dome and in the vertex shader do something like:

      //... some code
      posH = mul(inputPos, viewProjMtx); // this is your SV_Position output
      posH.z = posH.w;                   // after the divide by w, depth will be 1.0 (the far plane)
      //... whatever else

    You could also just use a full-screen triangle or quad with the same "push it to the far plane" logic, then in the pixel shader color it based off the y component of a view ray (a ray starting at the camera and extending out into the scene) - all depends on how fancy you want to get. If you stick with the dome, make sure you only translate it by the camera's position, not its rotation - otherwise the entire sky will spin as the camera spins.
  11. 'Shader resource view' problem

    This warning usually means that the resource you're using in **SetShaderResources is still bound as a render target. The D3D11 runtime will remove it for you and give you a warning, but for correctness (warning-free output) you should either overwrite what's bound to the output merger before setting the shader resources:

      OMSetRenderTargets(1, &newRTV, nullptr);

    or clear what's currently bound by setting it to null:

      OMSetRenderTargets(0, nullptr, nullptr);
  12. The "Sky" is too short!

    To expand just a bit on Kylotan's accurate response - in your specific application, what does a "unit" mean? In other words, is the value "1 unit" representative of 1 meter, 1 centimeter, or some other value altogether? If 1 unit = 1 cm, then a sky 1000 units away would be 10 meters away. Alternatively, if your app takes place entirely on the ground and the sky is supposed to look infinitely far away, you could always force the geometry to the far plane in the vertex shader and have it translate with the camera's position, to give the illusion that it's so far away it doesn't move when the camera moves a bit.
  13. I won't have time to do a full install and all that, but if you can grab a capture with RenderDoc (or a few captures with different deficiencies showing), I might be able to step through the shader that way and see what could be going on. Can't promise when I'll have a lot of focused time to sit down with it, but I'll try to as soon as I can.
  14. Admittedly, I've done very little with stereo rendering, but perhaps this offset needs to be accounted for in your vertex shader?

      float4 stereo = StereoParams.Load(0);
      float separation = stereo.x * (leftEye ? -1 : 1);
      float convergence = stereo.y;
      viewPosition.x += separation * convergence * inverseProj._m00;

      The view ray you create in the vertex shader may need to be adjusted similarly to align correctly. Just kinda guessing at the moment.
  15. For thoroughness's sake, would you mind switching them back to non-negated versions and seeing if that helps? Could you also test with the original rayLength code? Does the game use a right- or left-handed coordinate system?