CirdanValen

Vulkan Row pitch when copying from VkBuffer to VkImage


I'm working through adding texturing to my program by following vulkan-tutorial.com and was wondering about row pitch. The tutorial states that image data must respect the implementation's row pitch, which makes sense.

However, the problem I'm running into is that I want to use a VkBuffer for staging rather than a separate staging VkImage. From what I can tell, the only way to get the required row pitch is from a linear tiled staging image. I assume an optimal tiling image will have a different row pitch, but I can't find anything stating it either way.

Can I use the row pitch from the destination optimal tiling image, or do I not have to worry about the pitch at all in this case?

Edited by CirdanValen


Using a copy from a VkBuffer instead of from a linear texture is definitely the right track. Trying to use a linear texture for staging is an exercise in frustration because you run into the irritating limitations that apply to linear textures (like limited format/mipmap support).

You shouldn't have to worry about implementation specifics, though. Section 18.4 of the Vulkan specification contains pseudocode describing how the implementation expects your texture data to be laid out in your VkBuffer prior to a call to vkCmdCopyBufferToImage.

Assuming you're not using a block compressed format, the spec says:

rowLength = region->bufferRowLength;
if (rowLength == 0)
    rowLength = region->imageExtent.width;

imageHeight = region->bufferImageHeight;
if (imageHeight == 0)
    imageHeight = region->imageExtent.height;

elementSize = <element size of the format of the src/dstImage>;

address of (x,y,z) = region->bufferOffset + (((z * imageHeight) + y) * rowLength + x) * elementSize;

where x,y,z range from (0,0,0) to region->imageExtent.{width,height,depth}.

There is no need to worry about implementation specifics when staging texture data through a buffer; that's the driver's responsibility.
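Concretely, you can leave both pitch fields at zero and copy straight out of a tightly packed staging buffer. A minimal sketch of recording the copy follows; the handles and dimensions are placeholders for your own setup, and the image is assumed to already be in VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL:

```c
#include <vulkan/vulkan.h>

/* Record a full-image copy from a tightly packed staging buffer.
 * cmdBuf, stagingBuffer, textureImage, texWidth and texHeight come from your
 * existing setup and are assumptions here, not fixed names. */
static void record_upload(VkCommandBuffer cmdBuf, VkBuffer stagingBuffer,
                          VkImage textureImage,
                          uint32_t texWidth, uint32_t texHeight)
{
    VkBufferImageCopy region = {0};
    region.bufferOffset      = 0;
    region.bufferRowLength   = 0; /* 0 => rows packed at imageExtent.width   */
    region.bufferImageHeight = 0; /* 0 => slices packed at imageExtent.height */
    region.imageSubresource.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
    region.imageSubresource.mipLevel   = 0;
    region.imageSubresource.baseArrayLayer = 0;
    region.imageSubresource.layerCount = 1;
    region.imageOffset = (VkOffset3D){ 0, 0, 0 };
    region.imageExtent = (VkExtent3D){ texWidth, texHeight, 1 };

    vkCmdCopyBufferToImage(cmdBuf, stagingBuffer, textureImage,
                           VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, 1, &region);
}
```

No vkGetImageSubresourceLayout query is involved anywhere: row pitch only matters for linear tiled images you map directly.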
