When do we need multiple FBOs?


I know that a framebuffer object can have multiple render targets attached for rendering at once, but when do we need multiple FBOs?


Is this a question about OpenGL framebuffer objects? I only ask because you tagged the topic with "DX12".

1 hour ago, MJP said:

Is this a question about OpenGL framebuffer objects? I only ask because you tagged the topic with "DX12".

I think the same question applies to OpenGL. I tagged "DX12" because I'm using a wrapper around it that has "FBO" classes.

Take the "ping-pong" technique for example, why do we need two fbos, instead of just one fbo with two render targets? I cannot imagine any circumstances where we should have multiple fbos, since we have multiple color attachments to a single fbo...

A bit like MJP said: your title and the end of your question are about multiple FBOs, but what you're actually describing is multiple render targets.

To answer generally: you'll need several FBOs when you need to render from different views (e.g. for shadow mapping), or when you need to render the scene to different outputs (e.g. for SSAO).

You'll need MRT (multiple render targets) when you need to draw to several outputs in a single render pass. Deferred rendering does this in order to store positions, normals, colors and so on in a single pass.
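To make the distinction concrete, here is a minimal OpenGL sketch, assuming a GL 3.3+ context, a loader such as GLEW, and already-created textures (the texture and function names are only illustrative). The first function builds one FBO with two color attachments for an MRT pass; the second builds a separate FBO for a depth-only shadow pass, which is a different view rendered in a different pass:

    #include <GL/glew.h>

    // One FBO, several color attachments: MRT for a single G-buffer pass.
    GLuint CreateGBufferFBO(GLuint gAlbedoTex, GLuint gNormalTex, GLuint depthTex)
    {
        GLuint fbo = 0;
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, gAlbedoTex, 0);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, gNormalTex, 0);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,  GL_TEXTURE_2D, depthTex,   0);

        // The fragment shader writes to both outputs in the same draw call.
        const GLenum drawBuffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
        glDrawBuffers(2, drawBuffers);
        return fbo;
    }

    // A second FBO for the shadow pass: different view, different output, different pass.
    GLuint CreateShadowFBO(GLuint shadowDepthTex)
    {
        GLuint fbo = 0;
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, shadowDepthTex, 0);
        glDrawBuffer(GL_NONE);  // depth-only: no color output
        glReadBuffer(GL_NONE);
        return fbo;
    }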

Your terminology is definitely not D3D here. Attachments and FBOs aren't D3D concepts. I don't think I can really help accurately if I'm not sure what you're talking about.

Sounds like an FBO is a wrapper around all the render targets that are bound for a given draw call? I.e. an object representing MRT rendering? In which case _Silence_'s answer is correct. In your example of ping-pong rendering, you don't want to write to both render targets at the same time; you want to write to one, then read from it and write to the other.
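Put differently: the attachments of a single FBO are all write targets for the same pass, whereas ping-ponging needs one texture bound for reading and another bound for writing, with the roles swapped every pass (sampling a texture that is attached to the currently bound draw framebuffer gives undefined results in GL). A rough sketch, assuming two already-built FBO/texture pairs and a DrawFullscreenQuad() helper that isn't shown:

    #include <GL/glew.h>
    #include <utility>

    void DrawFullscreenQuad();  // assumed helper

    // Hypothetical ping-pong blur: two FBOs, each with a single color texture.
    // pingFBO[i] writes into pingTex[i]; the shader samples the other one.
    void PingPongBlur(GLuint pingFBO[2], GLuint pingTex[2], GLuint blurProgram, int passes)
    {
        glUseProgram(blurProgram);
        int src = 0, dst = 1;
        for (int i = 0; i < passes; ++i)
        {
            glBindFramebuffer(GL_FRAMEBUFFER, pingFBO[dst]);   // write target
            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, pingTex[src]);        // read source
            glUniform1i(glGetUniformLocation(blurProgram, "uHorizontal"), i % 2);
            DrawFullscreenQuad();
            std::swap(src, dst);                               // swap roles for the next pass
        }
    }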

Are you talking about the "frame buffering" technique described in the "Hello Frame Buffering" sample within the Microsoft DX12 Hello World samples? https://github.com/Microsoft/DirectX-Graphics-Samples/tree/master/Samples/Desktop/D3D12HelloWorld

The same technique is also used in many later Microsoft samples.

A better example is the multithreading sample, where they use a frame object class: https://github.com/Microsoft/DirectX-Graphics-Samples/tree/master/Samples/Desktop/D3D12Multithreading

 

The idea is that you have 2 (or 3) frame objects that you rotate between, each with its own render target, command lists, and its own "changes every frame" constant buffer.

A simple reason to do this is that a command list can take a few frames from the time you call ExecuteCommandLists to the time it has completed on the GPU, and you want to feed the GPU the next frame's command list before the previous frame's fence indicates it has finished.

It also helps you make sure the GPU sees the correct frame's contents for the constant buffers you change every frame. If you used the same per-frame constant buffer across all frames, you might end up updating it from the CPU before the GPU has finished executing a command list that is still using it.
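Here's a rough sketch of that rotation using a single fence and three frames in flight; FrameResource, BeginFrame and EndFrame are made-up names for illustration, not the sample's actual classes:

    #include <windows.h>
    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // Hypothetical per-frame resources that get rotated each frame.
    struct FrameResource
    {
        ComPtr<ID3D12CommandAllocator> commandAllocator;
        ComPtr<ID3D12Resource>         perFrameConstantBuffer; // the "changes every frame" data
        UINT64                         fenceValue = 0;
    };

    static const UINT FrameCount = 3;
    FrameResource gFrames[FrameCount];
    UINT          gFrameIndex = 0;
    UINT64        gNextFenceValue = 1;

    // Called at the start of a frame, before recording commands.
    void BeginFrame(ID3D12Fence* fence, HANDLE fenceEvent)
    {
        FrameResource& frame = gFrames[gFrameIndex];

        // Only wait if the GPU hasn't finished the command list that last used
        // this frame's allocator and constant buffer.
        if (fence->GetCompletedValue() < frame.fenceValue)
        {
            fence->SetEventOnCompletion(frame.fenceValue, fenceEvent);
            WaitForSingleObject(fenceEvent, INFINITE);
        }

        // Now it is safe to reuse this frame's resources.
        frame.commandAllocator->Reset();
        // ... record and execute command lists that read frame.perFrameConstantBuffer ...
    }

    // Called after ExecuteCommandLists/Present for the current frame.
    void EndFrame(ID3D12CommandQueue* queue, ID3D12Fence* fence)
    {
        gFrames[gFrameIndex].fenceValue = gNextFenceValue;
        queue->Signal(fence, gNextFenceValue++);
        gFrameIndex = (gFrameIndex + 1) % FrameCount;
    }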

 

This is what D3D12HelloFrameBuffering.h says:

"

    // In this sample we overload the meaning of FrameCount to mean both the maximum
    // number of frames that will be queued to the GPU at a time, as well as the number
    // of back buffers in the DXGI swap chain. For the majority of applications, this
    // is convenient and works well. However, there will be certain cases where an
    // application may want to queue up more frames than there are back buffers
    // available.
    // It should be noted that excessive buffering of frames dependent on user input
    // may result in noticeable latency in your app.

"

 

I tend to use multiple framebuffers myself. For example, if I do a bloom effect in post-processing, I need a framebuffer for the original render, another for the exposure tone mapping, another for the bright-pass filter that gathers only the bright pixels, another for the horizontal blur, another for the vertical blur, another for the contrast boost, and finally one more to combine the original render texture with the contrast-boosted texture, which is then drawn to a fullscreen polygon later.
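A condensed sketch of that kind of chain (leaving out the tone-mapping and contrast stages for brevity); RenderTarget, DrawFullscreenQuad() and the shader programs are assumed helpers wrapping the usual FBO/texture setup:

    #include <GL/glew.h>

    void DrawFullscreenQuad();  // assumed helper
    struct RenderTarget { GLuint fbo; GLuint tex; };

    // Each stage samples the previous stage's texture and writes into its own FBO.
    GLuint RunBloomChain(const RenderTarget& scene,
                         const RenderTarget& bright,
                         const RenderTarget& blurH,
                         const RenderTarget& blurV,
                         const RenderTarget& combined,
                         GLuint brightProg, GLuint blurProg, GLuint combineProg)
    {
        // Bright-pass filter: scene -> bright
        glBindFramebuffer(GL_FRAMEBUFFER, bright.fbo);
        glUseProgram(brightProg);
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, scene.tex);
        DrawFullscreenQuad();

        // Horizontal blur: bright -> blurH
        glBindFramebuffer(GL_FRAMEBUFFER, blurH.fbo);
        glUseProgram(blurProg);
        glUniform1i(glGetUniformLocation(blurProg, "uHorizontal"), 1);
        glBindTexture(GL_TEXTURE_2D, bright.tex);
        DrawFullscreenQuad();

        // Vertical blur: blurH -> blurV
        glBindFramebuffer(GL_FRAMEBUFFER, blurV.fbo);
        glUniform1i(glGetUniformLocation(blurProg, "uHorizontal"), 0);
        glBindTexture(GL_TEXTURE_2D, blurH.tex);
        DrawFullscreenQuad();

        // Combine: scene + blurV -> combined (drawn to a fullscreen polygon later)
        glBindFramebuffer(GL_FRAMEBUFFER, combined.fbo);
        glUseProgram(combineProg);
        glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D, scene.tex);
        glActiveTexture(GL_TEXTURE1); glBindTexture(GL_TEXTURE_2D, blurV.tex);
        DrawFullscreenQuad();

        return combined.tex;
    }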

Another example is cube mapping for reflections: I need 6 framebuffers, one for each of the 6 faces. That comes at the cost of rendering the scene 6 times, so to speed it up I just shrink the size of those framebuffers. There are lots of reasons to use multiple framebuffers, depending on what you're doing.
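One way to set that up, sketched below: one reduced-size FBO per cube face, each attached to a different face of the same cube-map texture (re-attaching a single FBO per face also works). cubeTex, depthRbo and the commented-out helpers are assumed to exist already:

    #include <GL/glew.h>

    // One small FBO per cube-map face. cubeTex is a GL_TEXTURE_CUBE_MAP whose
    // faces are faceSize x faceSize, smaller than the main framebuffer.
    void CreateCubeFaceFBOs(GLuint cubeTex, GLuint depthRbo, GLuint outFBOs[6])
    {
        glGenFramebuffers(6, outFBOs);
        for (int face = 0; face < 6; ++face)
        {
            glBindFramebuffer(GL_FRAMEBUFFER, outFBOs[face]);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                   GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, cubeTex, 0);
            glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                      GL_RENDERBUFFER, depthRbo);
        }
    }

    // Render the scene six times, once per face, each with its own view matrix.
    void RenderCubeMap(GLuint fbos[6], int faceSize)
    {
        glViewport(0, 0, faceSize, faceSize);
        for (int face = 0; face < 6; ++face)
        {
            glBindFramebuffer(GL_FRAMEBUFFER, fbos[face]);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            // SetFaceViewMatrix(face);  // assumed helper: look along +X, -X, +Y, ...
            // DrawScene();              // assumed helper
        }
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }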

