Separate 2D and 3D rendering in one window


Hello!

I was wondering if it is possible in OpenGL 4.5 to separate 3D and 2D rendering. What I want to achieve is to use 2D to code a GUI for my 3D editor in OpenGL, and use 3D for the editor's scene.

I don't know how it's done in Blender, but I've heard that it's a very popular method, even better than combining OpenGL with Qt or with WinAPI.

How can I achieve this?


This is totally possible. Making your own UI will probably end up being a lot of work, so don't dismiss any ready-made methods just yet. It depends a lot on what you want, but the basic idea is to render your scene in 3D as usual, then set up a new projection matrix (one that maps 2D coordinates to the screen), disable things like depth testing, and just draw your UI. It is surprisingly simple to do. The tough part comes when you want a more complex, interactive UI. Even something as simple-sounding as text can be a lot of work.
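Roughly, the frame loop looks like this (a minimal sketch in C++ with GLM; `drawScene3D` and `drawUI2D` are made-up placeholders for whatever draw code you already have):

```cpp
#include <glad/glad.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

void drawScene3D(const glm::mat4& projection); // placeholder: your scene
void drawUI2D(const glm::mat4& projection);    // placeholder: your UI

void renderFrame(int width, int height)
{
    glViewport(0, 0, width, height);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // 3D pass: perspective projection, depth testing on.
    glEnable(GL_DEPTH_TEST);
    glm::mat4 proj3D = glm::perspective(glm::radians(60.0f),
                                        width / (float)height, 0.1f, 1000.0f);
    drawScene3D(proj3D);

    // 2D pass: orthographic projection in pixel coordinates (origin at
    // top-left), depth testing off so the UI always lands on top.
    glDisable(GL_DEPTH_TEST);
    glm::mat4 proj2D = glm::ortho(0.0f, (float)width, (float)height, 0.0f);
    drawUI2D(proj2D);
}
```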

Interested in Fractals? Check out my App, Fractal Scout, free on the Google Play store.

There are three general approaches you can take, each with advantages and drawbacks:

1) Draw all the 3D first, then set your camera and projection matrices to ortho, disable depth testing/writing, and draw all your 2D. This is the simplest method and probably the most common.

2) Render your 2D and 3D to separate framebuffers, then blit them together during post-processing (see the sketch after this list). This is only marginally more complex than the first option. It roughly doubles your framebuffer memory usage (though the 2D path probably doesn't need a depth buffer or HDR), but it can allow more parallelism on some GPUs, e.g. ones that can render the 2D and 3D portions simultaneously, especially if your game is not using 100% of the GPU, which is the case for the vast majority of games.

3) Use a separate framebuffer for each general window or panel, much like the second option. This can reduce memory usage, because you don't need space in a 2D framebuffer for the large swaths of the screen that have no 2D overlays; it can allow more rendering parallelism; and it lets you more easily integrate 2D elements into your 3D scene (like the ever-popular Dead Space holographic projections).
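For the second option, here's a minimal sketch using OpenGL 4.5's direct-state-access calls; `createUiTarget`, `drawUI2D`, `drawScene3D`, and `drawFullScreenQuad` are invented placeholder names for your own code. One caveat: glBlitFramebuffer copies texels without alpha blending, so unless your UI is fully opaque, compositing with a blended full-screen quad (as below) is the practical route:

```cpp
#include <glad/glad.h>

// Placeholders for your own rendering code.
void drawUI2D();
void drawScene3D();
void drawFullScreenQuad(GLuint texture);

GLuint uiFbo = 0, uiColor = 0;

// One-time setup: a color-only render target for the UI (no depth needed).
void createUiTarget(int width, int height)
{
    glCreateTextures(GL_TEXTURE_2D, 1, &uiColor);
    glTextureStorage2D(uiColor, 1, GL_RGBA8, width, height);
    glCreateFramebuffers(1, &uiFbo);
    glNamedFramebufferTexture(uiFbo, GL_COLOR_ATTACHMENT0, uiColor, 0);
}

void renderFrame()
{
    // 2D pass into the UI target, cleared to transparent.
    glBindFramebuffer(GL_FRAMEBUFFER, uiFbo);
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glDisable(GL_DEPTH_TEST);
    drawUI2D();

    // 3D pass into the default framebuffer, as usual.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glEnable(GL_DEPTH_TEST);
    drawScene3D();

    // Composite the UI texture over the scene with alpha blending.
    glDisable(GL_DEPTH_TEST);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    drawFullScreenQuad(uiColor);
    glDisable(GL_BLEND);
}
```

The third option is the same construction with one such target per panel; instead of a full-screen quad, you can map a panel's texture onto geometry in the scene for the Dead Space-style effect.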

Technically speaking, these are all trivially doable in any good 2D library, since all you _should_ be doing is handing your 2D rendering code a framebuffer for each distinct UI panel/window, plus the framebuffer dimensions and screen offset (where appropriate). That means you can hand it the main framebuffer (option 1), a dedicated UI framebuffer (option 2), or a different framebuffer for each distinct UI panel (option 3). For input, likewise, you can project the input through your camera to easily handle fancy uses of option 3.
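As a rough sketch of what that interface might look like (the names here are invented for illustration):

```cpp
#include <glad/glad.h>

// Hypothetical render-target description handed to the 2D code. Option 1
// passes the default framebuffer (0), option 2 a single dedicated UI
// target, option 3 one target per panel.
struct UiRenderTarget {
    GLuint framebuffer;      // 0 for the default framebuffer
    int    width, height;    // target dimensions
    int    offsetX, offsetY; // the panel's placement on screen, if any
};

void renderUiPanel(const UiRenderTarget& target /*, panel contents... */);
```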

Sean Middleditch – Game Systems Engineer – Join my team!

@SeanMiddleditch @Nanoha

Thanks for the quick response! The third approach sounds perfect for me, so I will try to work on it.

This topic is closed to new replies.
