About Mercesa

  1. The C++ Vulkan API (vulkan.hpp) already throws an exception when a call does not return a successful result, and yes, the debug layer is enabled. I am sure it "works" because my whole scene renders as usual, without visual problems of any kind. I also tested it on a friend's PC: it likewise works perfectly fine, while it should not. I expected it to break down completely, but it just allocates the descriptor sets. I have also done a clean rebuild of my project to make doubly sure I am not missing something.
  2. As the title says, I am explicitly creating a descriptor pool that is too small, which should NOT support the resources I am going to allocate from it:

```cpp
std::array<vk::DescriptorPoolSize, 3> type_count;

// Initialize our pool with these values
type_count[0].type = vk::DescriptorType::eCombinedImageSampler;
type_count[0].descriptorCount = 0;
type_count[1].type = vk::DescriptorType::eSampler;
type_count[1].descriptorCount = 0;
type_count[2].type = vk::DescriptorType::eUniformBuffer;
type_count[2].descriptorCount = 0;

vk::DescriptorPoolCreateInfo createInfo = vk::DescriptorPoolCreateInfo()
    .setPNext(nullptr)
    .setMaxSets(iMaxSets)
    .setPoolSizeCount(type_count.size())
    .setPPoolSizes(type_count.data());

pool = aDevice.createDescriptorPool(createInfo);
```

My allocation function looks like this; I am allocating a uniform buffer, a combined image sampler, and a regular sampler. But if my pool is empty, this should not work?

```cpp
vk::DescriptorSetAllocateInfo alloc_info[1] = {};
alloc_info[0].pNext = NULL;
alloc_info[0].setDescriptorPool(pool);
alloc_info[0].setDescriptorSetCount(iNumToAllocate);
alloc_info[0].setPSetLayouts(&iDescriptorLayouts);

std::vector<vk::DescriptorSet> tDescriptors;
tDescriptors.resize(iNumToAllocate);
iDevice.allocateDescriptorSets(alloc_info, tDescriptors.data());
```
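For context, the accounting the spec describes can be modeled on the host side. A driver is *allowed* to fail such an allocation with VK_ERROR_OUT_OF_POOL_MEMORY once the per-type counts are exhausted, but it is not required to track the counts exactly, which is one plausible reason over-allocation can appear to "work" without a validation error. A minimal sketch of that bookkeeping (plain C++; `PoolModel` and its members are invented for illustration, not part of any Vulkan API):

```cpp
#include <cassert>
#include <cstdint>
#include <map>

// Hypothetical host-side model of descriptor pool accounting.
// A conforming driver MAY reject an allocation once the remaining
// per-type counts run out, but is free to succeed anyway.
struct PoolModel {
    std::map<int, uint32_t> remaining; // descriptor type -> count left

    bool tryAllocate(int type, uint32_t count) {
        auto it = remaining.find(type);
        if (it == remaining.end() || it->second < count)
            return false; // would be VK_ERROR_OUT_OF_POOL_MEMORY
        it->second -= count;
        return true;
    }
};
```

Under this model, a pool created with `descriptorCount = 0` for every type would reject the very first allocation — the behavior the post above expected to see.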
  3. Very clear and concise answer, Hodgeman, thanks! How about descriptor pools, then? I know in DX12 you have descriptor heaps, but Vulkan's descriptor pool system is a bit confusing. Are there any good resources on Vulkan descriptor set/pool management?
  4. When loading a model with many meshes that have different materials containing different textures, how would you handle this in Vulkan? Is it possible to partially change a DescriptorSet with a WriteDescriptorSet object? Even if it is possible, it does not sound ideal to update the descriptor set for every mesh. I am aware of bindless texture arrays in shader model 5.0+, but for now I want to keep it as simple as possible.
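One common alternative to rewriting a descriptor set per mesh is to allocate one descriptor set per material up front and then sort draws by material, so each set is bound once per run of draws rather than once per mesh. A minimal sketch of that batching idea (plain C++; `Draw`, `materialId`, and `countBinds` are invented names, and material IDs are assumed non-negative):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Hypothetical draw record: in a real renderer, materialId would map to
// a descriptor set pre-built for that material's textures.
struct Draw { int materialId; int meshId; };

// Sorts the submission by material and returns how many descriptor-set
// binds it needs: one per contiguous run of draws sharing a material.
int countBinds(std::vector<Draw>& draws) {
    std::sort(draws.begin(), draws.end(),
              [](const Draw& a, const Draw& b) { return a.materialId < b.materialId; });
    int binds = 0;
    int last = -1; // sentinel: no material bound yet
    for (const Draw& d : draws) {
        if (d.materialId != last) { ++binds; last = d.materialId; }
    }
    return binds;
}
```

With this arrangement the number of descriptor-set binds per frame drops to the number of distinct materials, regardless of mesh count.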
  5. It is a bit unclear to me what kinds of tasks you would want to create a new command buffer for, and how to use them. Is it ideal to have a command buffer per draw call? Per material? Per render pass? I know in DX12 command lists can record complete rendering pipelines, but I am a bit unsure how to think about command buffers in Vulkan.
  6. You're a hero, MJP. Thanks for clearing that up!
  7. Wow, OK, I think I figured it out. Debug mode gave me no errors yesterday, and now it suddenly says this: "PSSetShaderResources: Resource being set to PS shader resource slot 6 is still bound on output! Forcing to NULL." Sometimes restarting Visual Studio performs miracles. (This is also infuriating, because why on earth would graphics debugging show the texture as bound if this error was the case?) The only problem is that it keeps giving me this error even though I am explicitly setting the resource to 0 before using it. Edit: fixed my problem by using the last post of this thread.
  8. I have tried myTex[uint(svPosition) % textureSize] and I still end up with a black texture. I am 100% sure the texture is bound to the pipeline, since it shows up in my graphics debugger.
  9. I've read somewhere that Load only works if your coordinates match the screen exactly 1:1, not if you have a texture 1/2 or 1/4 the size of the screen. I have attempted to use Load, but I am not sure how to calculate the correct coordinates: if I do position / float2(screenWidth, screenHeight) * float2(textureSizeX, textureSizeY) and use that as the coordinates for Load, it does not work. The link below stated this, though no reasoning was given, and after experimenting myself I also could not figure out how to use Load properly with a smaller texture. https://gamedev.stackexchange.com/questions/65845/difference-between-texture-load-and-texture-sample-methods-in-directx
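For what it's worth, Texture2D::Load takes *integer texel* coordinates, not normalized UVs, so for a texture downsampled by an integer factor k the usual mapping is an integer divide of the pixel position by k, plus an edge clamp when the full resolution is not evenly divisible. A host-side sketch of that index math (the function name and parameters are my own):

```cpp
#include <algorithm>
#include <cassert>

// Map a full-resolution pixel coordinate along one axis to a texel
// coordinate in a texture downsampled by integer factor k, as one
// would pass to Load(int3(x, y, 0)). smallSize is the downsampled
// texture's extent along this axis.
int loadCoord(int pixel, int k, int smallSize) {
    int coord = pixel / k;                 // integer divide, no filtering
    return std::min(coord, smallSize - 1); // clamp for non-divisible sizes
}
```

The key difference from the division-by-screen-size attempt above is that no normalization is involved at all: each k-by-k block of screen pixels maps to one texel of the smaller texture.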
  10. I wasn't aware you could access textures like that in a pixel shader, but it makes sense, since it's possible in a compute shader. I'll try it out tomorrow and make another post. Thanks!
  11. I am currently working on implementing a paper which requires me to use a downsampled texture of type int2 at resolution (width/k, height/k), where k is an unsigned integer, k ∈ (1, infinity). I need to sample this smaller int2 texture per window pixel in my full-screen pass, but I cannot use texture.Sample since it is an int2 type, AND because it is smaller than the screen I have read I cannot use texture.Load either. In short: I need to use a downsampled int2 texture in a full-screen rendering pass, but I don't know how to sample it properly.
  12. I am not sure whether the command buffer count comes from SSAO, but what I do know is that SSAO takes up most of my frame time (as you can see in the graph), and in those frames the command buffer counts increase as well. Edit: I think you are talking about cache misses from texture samples? And I don't really understand your mip-map pyramid suggestion; I believe if you downsample a depth texture it no longer makes sense, since the values get linearly interpolated during downsampling? Edit 2: I lowered my sample rate and the framerate improves a lot, so I guess the number of samples contributes to too much random memory access, which causes the cache misses you described. Link to screenshot:
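On the depth-pyramid point: the usual trick is not to average when downsampling depth. Each lower mip instead stores the min (or max) of its 2x2 footprint, so every texel remains a conservative depth bound rather than an interpolated value that corresponds to no real surface. A small sketch of one reduction step (plain C++; the function name is mine, and width and height are assumed even):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// One step of a min-depth pyramid: each output texel is the MINIMUM of
// the corresponding 2x2 block in the input, never the average, so the
// result is still a meaningful nearest-depth bound.
std::vector<float> downsampleMinDepth(const std::vector<float>& in, int w, int h) {
    std::vector<float> out((w / 2) * (h / 2));
    for (int y = 0; y < h / 2; ++y) {
        for (int x = 0; x < w / 2; ++x) {
            float a = in[(2 * y) * w + (2 * x)];
            float b = in[(2 * y) * w + (2 * x + 1)];
            float c = in[(2 * y + 1) * w + (2 * x)];
            float d = in[(2 * y + 1) * w + (2 * x + 1)];
            out[y * (w / 2) + x] = std::min(std::min(a, b), std::min(c, d));
        }
    }
    return out;
}
```

Sampling a pyramid built this way keeps distant SSAO taps reading coarse mips with coherent memory access, instead of scattering point reads across the full-resolution depth buffer.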
  13. Do you have any sources for this? Not that I don't trust you, but I can't find anything about it on Google.
  14. Hey folks. I'm having a problem where, if my camera is close to a surface, the SSAO pass suddenly spikes to around 16 milliseconds. When still looking at the same surface but from further away, the framerate resolves itself and becomes regular again. This happens with ANY surface of my model, and I am a bit clueless as to what could cause it. Any ideas? In the attached image, the y axis is time in ms and the x axis is the current frame. The dips in SSAO milliseconds are when I moved away from the surface; the peaks happen when I am very close to it. Edit: I've done some more in-depth profiling with Nvidia Nsight. These are the facts from my results: the command buffer count goes from 4 (far from the surface) to ~20 (close to the surface); the command buffer duration goes from around ~30% to ~99%; and the CPU duration sometimes takes 0.03 to 0.016 milliseconds per frame, while it usually takes around 0.002 milliseconds. I am using a vertex shader which generates my full-screen quad, and afterwards I do my SSAO calculations in the pixel shader. Could this be a GPU driver bug? I'm a bit lost; it seems there could be a CPU/GPU resource stall, but why would the number of command buffers vary with distance from a surface? Edit 2: any resolution above 720p starts to have this issue, and I am fairly certain my SSAO is not so performance-heavy that it should fall apart at slightly higher resolutions.
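One common cause of exactly this near-surface spike is that a fixed world-space SSAO radius projects to an enormous screen-space radius as view depth approaches zero, so the samples scatter across the depth texture and thrash the cache. A standard mitigation is to clamp the projected radius in pixels. A sketch of that arithmetic (plain C++; the function and parameter names are invented, and focalLenPixels stands for the projection scale in pixels):

```cpp
#include <algorithm>
#include <cassert>

// Project a world-space sample radius to screen pixels at a given view
// depth, then clamp it. Without the clamp, the radius (and with it the
// spread of texture reads) blows up as the camera nears a surface.
float projectedRadiusPixels(float worldRadius, float focalLenPixels,
                            float viewDepth, float maxPixels) {
    float r = worldRadius * focalLenPixels / std::max(viewDepth, 1e-4f);
    return std::min(r, maxPixels);
}
```

This is only a sketch of the geometric relationship, not an explanation of the variable command buffer count, which is more likely a driver-side artifact of the longer-running pass.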
  15. Thanks a lot! I am using Load now, and I also know when to use the different kinds of filtering modes. Cheers! :D