
ekba89

Member

  • Content count: 145
  • Joined
  • Last visited

Community Reputation: 788 Good

About ekba89

  • Rank: Member

Personal Information

  • Interests: Programming
  1. I had a similar issue as well. It wasn't working when I ran the app normally, but it worked when I launched it from RenderDoc. As mentioned, it was related to synchronization; RenderDoc seems to insert some synchronization of its own, which changes the behavior.
  2. Further update on this: I tried using semaphores so that each command buffer waits for the previous one, and that worked fine, as I was already expecting (see the sketch below). So, for anyone who doesn't want to read the whole thing, the question is: do pipeline barriers work between multiple command buffers? For example, I have command buffer A and command buffer B. A has some buffer updates and a global memory barrier with src=BOTTOM_OF_PIPE and dst=TOP_OF_PIPE, and B has draw commands using those buffers. If I do vkQueueSubmit(A) and then vkQueueSubmit(B), is that barrier supposed to make all the commands in B wait?
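
    For reference, a minimal sketch of the semaphore workaround I described (hedged: queue, cmdBufA, cmdBufB, and semaphore are placeholder handles created elsewhere, not names from my actual code):

        // Submit A and signal a semaphore when its work completes.
        VkSubmitInfo submitA = {};
        submitA.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
        submitA.commandBufferCount = 1;
        submitA.pCommandBuffers = &cmdBufA;
        submitA.signalSemaphoreCount = 1;
        submitA.pSignalSemaphores = &semaphore;
        vkQueueSubmit(queue, 1, &submitA, VK_NULL_HANDLE);

        // Submit B, blocking it from the top of the pipe until A signals.
        VkPipelineStageFlags waitStage = VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT;
        VkSubmitInfo submitB = {};
        submitB.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
        submitB.waitSemaphoreCount = 1;
        submitB.pWaitSemaphores = &semaphore;
        submitB.pWaitDstStageMask = &waitStage;
        submitB.commandBufferCount = 1;
        submitB.pCommandBuffers = &cmdBufB;
        vkQueueSubmit(queue, 1, &submitB, VK_NULL_HANDLE);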
  3. Hi, I am trying to fix an issue I am seeing in my Vulkan engine. I tested my code on two PCs; on one of them, in a debug build, I get flicker while moving the camera, which seems to be caused by reading wrong constant buffer values. Since it doesn't happen on both PCs or in release builds, I believe it is a synchronization issue, though some of the tests below suggest otherwise.

    To give some detail about my code: rendering is set up so that the CPU can record up to 3 frames ahead of the GPU, and then I wait on a fence, so all resources are tripled and I access the proper one each frame. For my current test case I have 3 render threads, each with its own command buffer. The first thread just calls vkCmdUpdateBuffer on the buffers that the other command buffers will use; I have pipeline barriers with srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT and dstAccessMask = VK_ACCESS_UNIFORM_READ_BIT for each update (for now) to make sure it works. The second thread fills the command buffer for gbuffer rendering, and the last thread fills the command buffer for lighting. At the beginning of the lighting command buffer there is a pipeline barrier with VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT, VK_ACCESS_SHADER_READ_BIT, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL, VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL to make sure all writes to the gbuffer have finished, and after the light draw I transition the gbuffer render targets back to their old layout using VK_ACCESS_SHADER_READ_BIT, VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT, VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL. Once all the threads are done I submit each command buffer individually (I will get to this in a bit), in the order I mentioned: 1) update command buffer, 2) gbuffer command buffer, 3) light command buffer.

    From what I understand from the Vulkan docs, pipeline barriers create a dependency between the commands submitted to a single queue, so it shouldn't matter which command buffer contains the barrier. In my case, the buffer barrier with srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT and dstAccessMask = VK_ACCESS_UNIFORM_READ_BIT should make that buffer available for uniform reads by future commands, even ones in another command buffer (see the sketch below).

    To find the issue I tried a bunch of things. First I tried getting a capture with RenderDoc, but I couldn't reproduce the issue when running my app under it; it seems to force the commands to execute linearly, and I am not sure whether another tool would behave differently. Then I tried vkQueueWaitIdle, which solved the issue, as I was expecting. And I kept the most interesting part for last :) Instead of submitting the command buffers individually, if I submit them as a batch with VkSubmitInfo.commandBufferCount = 3, I don't see the issue happening anymore. Similarly, if instead of separate command buffers with their own Begin and End I record all of these commands into a single command buffer in the order I mentioned, again I don't see the issue. Thanks in advance.
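
    For concreteness, a minimal sketch of the buffer barrier I described (hedged: updateCmdBuf and buffer are placeholder names for the update-thread command buffer and one of the uniform buffers):

        // Make the vkCmdUpdateBuffer writes available to later uniform reads.
        VkBufferMemoryBarrier barrier = {};
        barrier.sType = VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER;
        barrier.srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT;  // writes by the update
        barrier.dstAccessMask = VK_ACCESS_UNIFORM_READ_BIT;    // reads by later shaders
        barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
        barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
        barrier.buffer = buffer;
        barrier.offset = 0;
        barrier.size = VK_WHOLE_SIZE;

        vkCmdPipelineBarrier(updateCmdBuf,
            VK_PIPELINE_STAGE_TRANSFER_BIT,        // after the transfer stage
            VK_PIPELINE_STAGE_VERTEX_SHADER_BIT |  // before the stages that read
            VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
            0, 0, nullptr, 1, &barrier, 0, nullptr);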
  4. vs 2013 hlsl

    I'm starting to think there is something wrong with VS itself rather than with what I'm doing. You can probably get the same behavior by invoking fxc through a custom build step, but it would be really convenient if the project setting just worked.
  5. Hey. I am using Visual Studio 2013, and when I try to get assembly output by setting it in the project Properties, it gives me the error: Element <AssemblerOutput> has an invalid value of "AssemblyCode". If I add /Fc [filepath] to Additional Options in Properties, it outputs fine. I couldn't find anything about this online. Is there a solution for it? (A command-line equivalent is sketched below.) Thanks
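
    For reference, the equivalent standalone fxc invocation (hedged: the target profile, entry point, and file names below are placeholders, not from the original project):

        fxc /T vs_5_0 /E main /Fo MyShader.cso /Fc MyShader.asm MyShader.hlsl

    /Fo writes the compiled shader object and /Fc writes the assembly listing, which is the same output the IDE setting should have produced.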
  6. I think the reason is that you are not adding the D3D11_CPU_ACCESS_WRITE flag to your constant buffer. Since you are writing data to it from the CPU, you need that flag (a sketch of such a buffer description is below).
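
    A minimal sketch of a CPU-writable constant buffer description, assuming dynamic usage updated via Map/Unmap (MyConstants and device are placeholder names):

        // Constant buffer the CPU can write each frame.
        D3D11_BUFFER_DESC desc = {};
        desc.ByteWidth = sizeof(MyConstants);          // must be a multiple of 16
        desc.Usage = D3D11_USAGE_DYNAMIC;              // GPU read, CPU write
        desc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
        desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;  // enables Map(WRITE_DISCARD)
        ID3D11Buffer* buffer = nullptr;
        HRESULT hr = device->CreateBuffer(&desc, nullptr, &buffer);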
  7. Yes. And if you start with the tutorial on that web page, it begins by showing how to link everything properly. Also, I forgot to mention that Frank Luna's books are a great DirectX reference; they cover most of what you need to know in detail. http://www.d3dcoder.net/ is his web page, and you can buy the book for the version of DirectX you want to learn. I am assuming you are going for DirectX 11.
  8. Hey.

    1- I haven't tried Code::Blocks, so I can't say anything about it, but I am using Visual Studio and I am pretty happy with it. You will also need the Windows SDK or the DirectX SDK; the Windows SDK is newer, but it requires Windows 7 or 8.

    2- If you really want to learn what is going on with DirectX, I recommend creating your own small applications. Most game engines hide the complexity from the user, so it is hard to learn what is really going on by studying a game engine.

    3- http://www.rastertek.com/
  9. What you can do is create a wrapper for D3D10_RASTERIZER_DESC that sets everything to its default value; then you only change the values you want, so you don't have to type it all out every time. In my case I created a wrapper class with a D3D11_RASTERIZER_DESC (I'm using DirectX 11) as its private member, but you can also create a struct or class that inherits from D3D10_RASTERIZER_DESC (see the sketch below).
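
    One way to do the inheriting version (a sketch using the D3D11 struct; the defaults below are the documented D3D11 rasterizer defaults):

        struct RasterizerDesc : public D3D11_RASTERIZER_DESC
        {
            RasterizerDesc()
            {
                FillMode = D3D11_FILL_SOLID;
                CullMode = D3D11_CULL_BACK;
                FrontCounterClockwise = FALSE;
                DepthBias = 0;
                DepthBiasClamp = 0.0f;
                SlopeScaledDepthBias = 0.0f;
                DepthClipEnable = TRUE;
                ScissorEnable = FALSE;
                MultisampleEnable = FALSE;
                AntialiasedLineEnable = FALSE;
            }
        };

        // Usage: change only what you need.
        RasterizerDesc desc;
        desc.CullMode = D3D11_CULL_NONE;

    Note that d3d11.h also ships a CD3D11_RASTERIZER_DESC helper; constructing it with D3D11_DEFAULT does the same thing.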
  10. It makes sense that different shaders have different constant buffers. One trick you can use if you want some global variables is to create a header file:

    // MyGlobalShaderHeader.hlsli
    cbuffer globalEveryFrame : register(b0)
    {
        matrix viewMatrix;
        float4 elapsedTime; // size needs to be a multiple of 16 anyway
    }

    Let's say you are using this in a vertex shader. Then you can set it once at the beginning of rendering:

    deviceContext->VSSetConstantBuffers(0, 1, &myBuffer);

    Now you can use it in all the vertex shaders that include that header. The only thing you need to be careful about is not to rebind register b0 between shaders that need those global variables.

    So if you want a global buffer, you can use that trick. But if the variables are shader specific, I think it is better to have different constant buffers for different shaders; it is much more manageable that way, and every material knows which buffers it is responsible for.

    One side note, as I wrote in the comment above: constant buffer sizes are rounded up to a multiple of 16 bytes. So if you put one float2 and one float in a buffer, together they occupy a full float4's worth of space; when you set your ByteWidth it needs to be a multiple of 16 bytes, otherwise you will get an error (see the sketch below).
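
    A small sketch of the 16-byte rule on the C++ side (the struct mirrors the cbuffer above; GlobalEveryFrame is an illustrative name):

        struct GlobalEveryFrame
        {
            DirectX::XMMATRIX viewMatrix;   // 64 bytes
            DirectX::XMFLOAT4 elapsedTime;  // 16 bytes
        };
        // ByteWidth must be a multiple of 16; round up defensively.
        UINT byteWidth = (UINT)((sizeof(GlobalEveryFrame) + 15) / 16 * 16);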
  11. vs 5_0 texture sampling

    Yeah, that fixed my problem. Thanks for the help.
  12. vs 5_0 texture sampling

    Thanks for the answer. Yeah, I need linear filtering, so I will try changing my texture to R32G32B32A32.
  13. I have a height map that I sample in my vertex shader to move each vertex to the right height; I use an R32G32B32 texture. But when I create the graphics device with the debug flag I get the error below. Even though it shows the error, everything works fine, I mean I get the right height for the terrain. So do I need to change the format of my texture, or is there a way to sample R32G32B32 textures in a vertex shader? (Two possible workarounds are sketched after the error.)

    D3D11 ERROR: ID3D11DeviceContext::DrawIndexed: The Shader Resource View in slot 1 of the Vertex Shader unit is using the Format (R32G32B32_FLOAT). This format does not support 'Sample', 'SampleLevel', 'SampleBias' or 'SampleGrad', at least one of which may being used on the Resource by the shader. The exception is if the corresponding Sampler object is configured for point filtering (in which case this error can be ignored). This also only applies if the shader actually uses the view (e.g. it is not skipped due to shader code branching). [ EXECUTION ERROR #371: DEVICE_DRAW_RESOURCE_FORMAT_SAMPLE_UNSUPPORTED]
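
    Two ways around the warning quoted above (a sketch; texDesc, sampDesc, and device are placeholder names):

        // (a) Use a four-channel format when creating the height map, so
        //     linear filtering is supported:
        texDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;

        // (b) Or keep R32G32B32_FLOAT and sample with a point filter, which
        //     the debug layer explicitly says makes the error ignorable:
        D3D11_SAMPLER_DESC sampDesc = {};
        sampDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_POINT;
        sampDesc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
        sampDesc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
        sampDesc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
        sampDesc.MaxLOD = D3D11_FLOAT32_MAX;
        ID3D11SamplerState* pointSampler = nullptr;
        device->CreateSamplerState(&sampDesc, &pointSampler);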
  14. Your vector should be [ 1.0  2.0  0.0  1.0 ], since you are using row-major matrices and the vector goes on the left side of the matrix when you multiply (V x M).

    Yes, but you should take scaling into consideration, so you need to normalize each axis.

    Your result vector's x component is affected by the first column of the matrix, y by the second column, and so on, and the result is another row vector (see the worked example below). I suggest you read some tutorials about matrices if you are having trouble with multiplication before going deeper, since you will need a good understanding of the basics.
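
    A small worked example (assuming a row-major translation matrix that moves by (3, 0, 0); the numbers are illustrative):

        \[
        \begin{bmatrix} 1 & 2 & 0 & 1 \end{bmatrix}
        \begin{bmatrix}
          1 & 0 & 0 & 0 \\
          0 & 1 & 0 & 0 \\
          0 & 0 & 1 & 0 \\
          3 & 0 & 0 & 1
        \end{bmatrix}
        =
        \begin{bmatrix} 1 \cdot 1 + 1 \cdot 3 & 2 & 0 & 1 \end{bmatrix}
        =
        \begin{bmatrix} 4 & 2 & 0 & 1 \end{bmatrix}
        \]

    Each component of the result is the dot product of the row vector with the corresponding column, which is why the w = 1 component is what picks up the translation row.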
  15. 3D model render sequence

    You might have different render states for different models, so you have to change them depending on what you are trying to do, but for normal 3D rendering the default settings should suffice. If all your models use the default render states, you only need to set them once at the beginning of your application. But as I said, SpriteBatch changes them before it does its 2D rendering, so if you are mixing SpriteBatch with 3D models you need to reset your render states every time before you render the 3D models (see the sketch below).
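
    A sketch of resetting the common states before the 3D pass, assuming the D3D11 defaults are what your 3D rendering needs (passing nullptr restores the default state objects):

        float blendFactor[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
        deviceContext->OMSetBlendState(nullptr, blendFactor, 0xFFFFFFFF);
        deviceContext->OMSetDepthStencilState(nullptr, 0); // SpriteBatch disables depth
        deviceContext->RSSetState(nullptr);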