Ryan_001

Vulkan UI rendering


I'm working on a UI that uses Vulkan for rendering.  I've come across a conundrum of sorts.  The general structure as it stands is:

- Window class handles interfacing with the OS. Each Window contains a number of Frames (in the 5-10 range, not a large number).

- Frame classes serve as containers for controls. Frames mostly exist to facilitate different rendering techniques (different shaders, descriptor sets, etc.), as well as differences in update frequency.

- Controls, the buttons, menus, etc. you actually see and work with, are contained in Frames.

I had ideas for 3 different ways to approach this:

1) Have each Frame record a secondary command buffer. To render the Window, a primary command buffer would be created that executes the secondary buffers within a render pass. That way I'd only have to update a secondary command buffer when its corresponding Frame changes, the Frame secondary buffers would be VkFramebuffer-independent, and it lends itself to easy multi-threading. The downside is that the entire window would need to be rendered every frame. For a video game this isn't a problem, but for apps/utilities it's unnecessary most of the time.

2) Store a 'dirty' rect/area. When rendering, simply record one large primary command buffer that contains all the necessary rendering. This has the advantage of only rendering what actually changes, but means there would be almost no reuse of command buffers; nearly every change would necessitate a completely new command buffer.

3) Have each Frame create its own primary command buffer. This would make 'dirty' rect/area updates relatively easy, as well as allow command buffer reuse in some situations. The downside is there would still be much less command buffer reuse than option 1, and rendering would not occur within a single render pass. The docs aren't entirely clear, and since each GPU is different, it's hard to gauge how much slower splitting the rendering across 5-10 separate command buffers/render passes would be compared to one large one.
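For what the dirty-rect bookkeeping in option 2 might look like, here is a minimal sketch. All names (`DirtyTracker`, `markDirty`, `takeDirty`) are made up for illustration; the idea is just that controls mark the region they invalidate, and at render time the Window records one fresh primary command buffer whose render area/scissor covers only the merged dirty rectangle:

```cpp
#include <algorithm>
#include <cstdint>

struct Rect {
    int32_t x = 0, y = 0, w = 0, h = 0;
    bool empty() const { return w <= 0 || h <= 0; }
};

class DirtyTracker {
public:
    // Grow the pending dirty region to include 'r' (union of bounding boxes).
    void markDirty(const Rect& r) {
        if (r.empty()) return;
        if (dirty_.empty()) { dirty_ = r; return; }
        int32_t x0 = std::min(dirty_.x, r.x);
        int32_t y0 = std::min(dirty_.y, r.y);
        int32_t x1 = std::max(dirty_.x + dirty_.w, r.x + r.w);
        int32_t y1 = std::max(dirty_.y + dirty_.h, r.y + r.h);
        dirty_ = { x0, y0, x1 - x0, y1 - y0 };
    }
    // Consume the dirty region; an empty result means nothing changed,
    // so no command buffer needs to be recorded at all this frame.
    Rect takeDirty() { Rect r = dirty_; dirty_ = {}; return r; }
private:
    Rect dirty_;
};
```

A single bounding box is the simplest policy; a real UI might keep a small list of rects instead so two far-apart controls don't force a redraw of everything between them.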

How would you guys go about this?  What method is better and/or what are you guys using?


I question the need to render only a small portion of the UI at a time. Maybe it was useful back in the day, before hardware acceleration, when blitting pixels on the CPU was slow, but if you're using hardware-accelerated rendering, I don't really think redrawing the whole UI when something changes is a problem.

I did a small immediate-mode UI for a basic map editor in OpenGL. I filled an array with all the vertices, copied them to the VBO, and rendered everything... every frame (split into multiple draw calls whenever the scissor rect changed). There was very little performance hit (everything, including the map itself, was being pushed out in <1 ms, IIRC). A UI that isn't tied to the real-time framerate of a video game is even less of a problem, since the acceptable delay is much more lenient. So even if it takes 5 or 10 ms to render your whole UI, it wouldn't cause noticeable lag in your application.
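The batching described above can be sketched roughly like this (the names `UIBatcher`, `Draw`, etc. are invented for the example, not from any real library): push all UI vertices into one array each frame, and start a new draw call only when the scissor rect changes.

```cpp
#include <cstddef>
#include <vector>

struct Vertex { float x, y, u, v; };
struct Scissor {
    int x, y, w, h;
    bool operator==(const Scissor& o) const {
        return x == o.x && y == o.y && w == o.w && h == o.h;
    }
};
// One Draw per scissor run: a range of vertices plus the scissor to set.
struct Draw { Scissor scissor; std::size_t firstVertex; std::size_t count; };

class UIBatcher {
public:
    void quad(const Scissor& s, float x, float y, float w, float h) {
        if (draws_.empty() || !(draws_.back().scissor == s))
            draws_.push_back({ s, vertices_.size(), 0 });   // new draw call
        // Two triangles, six vertices (UVs omitted for brevity).
        float q[6][2] = {{x,y},{x+w,y},{x,y+h},{x+w,y},{x+w,y+h},{x,y+h}};
        for (auto& p : q) vertices_.push_back({ p[0], p[1], 0, 0 });
        draws_.back().count += 6;
    }
    // One vertex upload, then one vkCmdDraw (or glDrawArrays) per batch.
    const std::vector<Vertex>& vertices() const { return vertices_; }
    const std::vector<Draw>& draws() const { return draws_; }
    void reset() { vertices_.clear(); draws_.clear(); }  // every frame
private:
    std::vector<Vertex> vertices_;
    std::vector<Draw> draws_;
};
```

With a whole-UI redraw each frame, the draw-call count stays proportional to the number of scissor changes, not the number of widgets, which is why the cost stays small.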

Edited by CirdanValen


On desktop GPUs you obviously have a lot of power at your disposal, and if your UI has transparent portions you are of course going to have to redraw the entire thing anyway. For mobile, though, wouldn't it make sense to try to cut back on updates?

I guess it's a CPU/GPU tradeoff. If I re-create the command buffer each frame, I put more stress on the CPU but get optimal GPU usage. If I cache the command buffers, I reduce stress on the CPU but potentially increase GPU stress. At this point I'm leaning towards recreating the command buffer each frame (option 2 above).
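The caching side of that tradeoff can be isolated into a tiny per-Frame policy. This is a sketch, not a real implementation: `FrameCache` and the state hash are hypothetical, and the "record" step stands in for actually re-recording a VkCommandBuffer. Reuse while the Frame's draw state is unchanged; pay the CPU cost of re-recording only when it isn't.

```cpp
#include <cstdint>

class FrameCache {
public:
    // Call once per frame with a cheap summary (hash) of the Frame's
    // draw state. Returns true if the command buffer had to be
    // re-recorded, false if the cached one was reused as-is.
    bool prepare(std::uint64_t stateHash) {
        if (valid_ && stateHash == hash_) return false;  // reuse cached CB
        record(stateHash);                               // CPU cost paid here
        return true;
    }
    int recordCount() const { return records_; }
private:
    void record(std::uint64_t h) { hash_ = h; valid_ = true; ++records_; }
    std::uint64_t hash_ = 0;
    bool valid_ = false;
    int records_ = 0;
};
```

The nice property is that option 1 and option 2 become two endpoints of the same knob: always re-record (hash changes every frame) versus re-record only on change.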

I don't have a truly "scientific, fact-based" answer for you, but my approach is "just draw it all", and I believe it is a good one. Why? For one, it's simple, and simple is good: I'm stupid, and the simpler it is, the fewer mistakes I make and the less time I spend tearing my hair out.

Then there's the fact that the one thing on a GPU that never seems to get significantly better over the years is ROP throughput. Since you are (presumably) not writing to the screen buffer directly but using double-buffering (I wouldn't even know how to do anything different with Vulkan anyway, though maybe it's possible), you have to write every pixel one way or the other. Which of course means you ROP every pixel regardless, whether or not you burn extra GPU memory (not necessarily abundant on a mobile device) to save a few vertex shader cycles.

And suppose you could indeed write only the pixels that change. You could then no longer pass the "undefined" and "don't care" flags to Vulkan (because you do care about the previous contents, and they must be well-defined!), which presumably means the driver has to keep data from the previous frame around longer and turn write-and-forget operations into read-modify-write. Or something along those lines; in any case, my bet is it will not come for free.
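The load-op point above, sketched as a decision function: a full redraw lets you tell the driver the previous contents are irrelevant (don't-care, or clear), while a partial redraw forces a load, i.e. a read-modify-write of the attachment. The enum here only mirrors Vulkan's `VK_ATTACHMENT_LOAD_OP_*` values so the snippet stays self-contained; real code would use the actual Vulkan types in `VkAttachmentDescription`.

```cpp
// Local stand-in for VkAttachmentLoadOp (assumption for illustration).
enum class LoadOp { Load, Clear, DontCare };

// Pick the color attachment's load op for this frame.
LoadOp chooseColorLoadOp(bool fullRedraw, bool needsClear) {
    if (!fullRedraw)
        return LoadOp::Load;   // must preserve prior pixels: read-modify-write
    return needsClear ? LoadOp::Clear : LoadOp::DontCare;
}
```

On tiled (mobile) GPUs in particular, `Load` is the expensive case, since the previous frame's tile contents have to be fetched back into on-chip memory, which is part of why "just draw it all" can actually be the cheaper option there too.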

